We present a collection of emotional video clips that can be used in ways similar to static images (e.g., the International Affective Picture System, IAPS; Lang, Bradley, & Cuthbert, 2008). The Database of Emotional Videos from Ottawa (DEVO) includes 291 brief video clips (mean duration = 5.42 s; SD = 2.89 s; range = 3–15 s) extracted from obscure sources to reduce their familiarity and to avoid influencing participants’ emotional responses. In Study 1, ratings of valence and arousal (measured with the Self-Assessment Manikins from the IAPS) and impact (Croucher, Calder, Ramponi, Barnard, & Murphy, 2011) were collected from 154 participants (82 women; mean age = 19.88 years; SD = 2.83 years) in a between-subjects design to avoid potential halo effects across the three ratings (Saal, Downey, & Lahey, 1980). Ratings collected online from a new set of 124 students with a within-subjects design (Study 2) were significantly correlated with the original sample’s. The clips were unfamiliar, having been seen previously by fewer than 2% of participants on average. The ratings consistently revealed the expected U-shaped relationships between valence and arousal/impact, and a strong positive correlation between arousal and impact. Hierarchical cluster analysis of the Study 1 ratings suggested seven groups of clips varying in valence, arousal, and impact, whereas the Study 2 ratings suggested five groups. These clips should prove useful for a wide range of research on emotion and behaviour.

For decades, psychology and neuroscience have benefitted greatly from using standardized sets of static emotional images (e.g., the International Affective Picture System; IAPS; Lang et al., 2008) to learn about the nature of emotion and its influences on perception, cognition, and behaviour. Yet, the visual world is dynamic; real-life objects and scenes often involve motion. Thus, moving images (‘movies’ or ‘video clips’) can arguably provide greater ecological validity than static images, coming closer to the real-world demands placed on visual perception and cognition. The current paper provides researchers with a new collection of emotional video clips, which can be used in ways similar to static image collections.

In addition to ecological validity, moving images present other potential advantages over static images. Motion is a powerful cue to object identity (Chen, Han, Hua, Gong, & Huang, 2003; Johansson, 1973; Regan, 1986; Ullman, 1979). Moving images have been argued to be ‘behaviourally urgent’ and to capture attention (for reviews, see Rauschenberger, 2003; Theeuwes, 2010) and boost physiological arousal more easily than static images (Detenber & Simons, 1998). Perhaps for these reasons, moving images can be more emotionally powerful than static images (Courtney, Dawson, Schell, Iyer, & Parsons, 2010). In addition, moving images are easier to remember than static images (Candan, Cutting, & DeLong, 2016; Ferguson, 2014; Ferguson, Homa, & Ellis, 2016; Matthews, Benjamin, & Osborne, 2007; Matthews, Buratto, & Lamberts, 2010) and textual stories (Baggett, 1979; Candan et al., 2016). This means that video clips may be particularly useful in memory studies in which researchers want to keep participants’ performance off the floor, for example, when measuring memory using a rigorous test such as free recall, probing for details, or waiting a long time (i.e., days, weeks, or months) between study and test.

Despite these advantages, researchers predominantly use static images to study emotion. This may be due in part to the limited availability of standardized emotional video clips (e.g., Bos, Jentgens, Beckers, & Kindt, 2013; Carvalho, Leite, Galdo-Álvarez, & Gonçalves, 2012; Gabert-Quillen, Bartolini, Abravanel, & Sanislow, 2015; Gross & Levenson, 1995; Samson, Kreibig, Soderstrom, Wade, & Gross, 2016; Schaefer, Nils, Sanchez, & Philippot, 2010; for a catalog of currently available videos, see Gilman et al., 2017). Importantly, these previous film databases have primarily included scenes from well-known feature films, such as The Hangover and Wall-E for positive clips, and The Silence of the Lambs and The Shawshank Redemption for negative clips. Past exposure to the clips can influence participants’ emotional responses (e.g., habituation to negative scenes; Gabert-Quillen et al., 2015), eye movements (Hannula, 2010), attention (Hutchinson & Turk-Browne, 2012; Kuhl & Chun, 2014), and declarative memory (Bransford & Johnson, 1972; Craik & Lockhart, 1972; Robertson & Köhler, 2007; Westmacott, Black, Freedman, & Moscovitch, 2004).

In light of these considerations, we present the Database of Emotional Videos from Ottawa (DEVO). It includes 291 video clips (3 to 15 s in duration) that are novel to most viewers, to minimize the confounding effects of familiarity. The clips portray a variety of scenes (e.g., human interactions, animals, nature, food/drink) obtained from motion pictures and amateur videos online. We provide ratings of valence, arousal, and impact (collected between subjects to avoid halo effects; Saal et al., 1980) from 82 women and 72 men in Study 1, and from 124 participants (collected within-subjects, on the advice of a reviewer) in Study 2. For the ratings of valence and arousal we used the method from the IAPS (Lang et al., 2008). The rating of impact came from Croucher et al. (2011), who have argued that it plays a more important role than the traditional concept of arousal does in emotion’s effects on attention and memory. Because we aim to use these video clips in studies of attention and memory, we examined the degree to which each clip’s rating on impact was correlated with its rating on arousal.

When using an emotional stimulus set, it is also useful to know the extent to which the stimuli can be clustered or grouped together, especially along the three dimensions of emotion that we collected (i.e., valence, arousal, and impact). For this reason, we categorized the video clips based on the three ratings using k-means and hierarchical cluster analyses. This will allow researchers to consider all three dimensions when selecting videos from the database.

Methods

Video Clip Selection

Sources

One hundred and fifty-four clips were selected from Canadian or foreign movies and documentaries, generally excluding Hollywood and internationally known films. The remaining 137 clips were selected from online sources (YouTube, Vimeo, or other), depicting real-life events (e.g., cliff jumping or a natural disaster) or scripted scenes (e.g., a person presenting a cake or chopping onions). For 78 sources, at least two clips were extracted, generally one neutral and one emotional, to provide a tighter comparison between emotional and neutral stimuli (because certain video sources may contain unique attributes due to filming conditions and/or post-production effects). In these cases, effort was taken to minimize potential overlap in background context and characters. The clips were typically nominated by one team member and agreed upon by at least one additional member. Effort was made to find clips that would populate all quadrants of the valence-arousal/impact space, focusing particularly on finding positive valence-high arousal/impact clips. Each clip was assigned a unique number whereby the first value refers to a given source and the decimal refers to the clip number from that source (for sources where more than one clip was extracted). Full information about the sources [title, format (DVD or online), date it was released or uploaded online] is provided in the DEVO spreadsheet. Further information about the hue, saturation, and value of the clips is available in the appended files: Text S1.xlsx and Text S2.pdf.
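The source.clip numbering scheme lends itself to programmatic filtering when building stimulus lists. A minimal sketch (the function name and example IDs are ours, for illustration only):

```python
def parse_clip_id(clip_id: str) -> tuple[int, int]:
    """Split a DEVO-style clip number such as '17.5' into the
    source number (17) and the clip number within that source (5)."""
    source, clip = clip_id.split(".")
    return int(source), int(clip)

parse_clip_id("17.5")  # -> (17, 5)
```

This makes it easy, for example, to retain at most one clip per source, or to pair each source's neutral and emotional clips.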

For access to the stimuli, email moviesstudyuo@gmail.com or patrick.davidson@uottawa.ca. All data, including participant-level data, are available in the DEVO spreadsheet.

Themes

The clips depict a variety of themes, including emotional and neutral human interactions, animals, nature, and food/drink. In addition, each clip was coded by two separate researchers (see the DEVO spreadsheet) for: presence of people (1 = yes; 0 = no), presence of animals (1 = yes; 0 = no), and presence of food and/or drink, including drugs (1 = yes; 0 = no).

Specifications

Videos were copied from an original DVD source (when available) or downloaded from an online source at the highest available resolution using DVDFab9 software. An initial set of 90 clips was trimmed to 3 s in duration using DVDFab9 software and saved in .wmv format (codec: Windows Media Video and Audio Professional). The remaining 201 clips varied in duration (mean = 6.5 s; SD = 2.9 s; range = 3 s to 15.14 s) and were trimmed and saved in .avi format using Filmora software (codec: H.264). Care was taken to minimize cinematographic effects, such as zooming, modified frame speed, or changing viewpoints, as these may momentarily disrupt visual processing (Hirose, 2010; Shimamura, Cohn-Sheehy, Pogue, & Shimamura, 2015; Shimamura, Cohn-Sheehy, & Shimamura, 2014; Smith & Henderson, 2008). Emotional clips were selected to include the peak emotional aspect of the longer segment/scene (when taken from a longer segment/scene), as identified by two researchers. All sound was removed using Filmora software.

Video Clip Validation

We collected ratings of each video clip in the laboratory.

Participants

Data from six participants were excluded from the analysis (four due to an E-Prime run-time error, one who chose to withdraw, and one who gave the same response throughout). This resulted in a final sample of 154 adults (82 women, 72 men; mean age = 19.88 years, SD = 2.83; mean years of education = 13.27, SD = 1.67) who rated all 291 video clips. Participants were randomly assigned to the valence, arousal, or impact rating condition (see Table 1). Participants were recruited from the University of Ottawa undergraduate research pool or from the Ottawa community using newspaper and social media ads. University students were given course credit for their participation and community participants were paid $10. The study was approved by the University of Ottawa research ethics board (#H08-14-25). All participants provided informed consent at the outset.

Table 1

Participant demographics by condition in Study 1.

                              Valence        Arousal        Impact
Number of Men : Women         23 M : 25 W    25 M : 27 W    24 M : 30 W
Mean Age in Years (SD)        19.73 (2.73)   19.94 (2.95)   19.94 (2.85)
Mean Years of Education (SD)  13.27 (1.60)   13.40 (1.67)   13.13 (1.75)
Procedures

Each trial began with a white screen displayed for 5 s, which allowed the upcoming video clip to buffer. The videos were presented one at a time in a pseudo-random order (to prevent E-Prime from crashing, we randomly assigned clips to one of three blocks and randomized the order of presentation within each block) at their original frame speed. At the offset of each video, participants were given as much time as needed to rate their subjective feeling of valence, arousal, or impact, depending on their condition assignment. Valence and arousal were measured using the Self-Assessment Manikins from the IAPS (Lang et al., 2008). Valence was rated from 1 (happy) to 9 (unhappy). Arousal was rated from 1 (excited) to 9 (calm); this reverses the IAPS scale, which runs from 9 (excited) to 1 (calm), to maintain coherence between the spatial organization of the keyboard (1–9) and the pictorial representation of arousal, which goes from excited (left) to calm (right). Impact was also measured from 1 (no impact) to 9 (intensive impact), as per Croucher et al. (2011). For exact instructions, see Appendix C. Participants were given a printed copy of the instructions from the original research papers to refer to during the experiment. The between-subjects design was used to avoid potential halo effects across the three ratings that may occur when participants provide multiple ratings consecutively (for a review, see Saal et al., 1980). As such, each participant viewed all videos and rated every one on the same single dimension. Following each self-report rating, participants were given 2 s to respond yes to the following question: “Have you seen this video clip before?” The next trial would then begin with the white screen.
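Because the arousal scale was anchored in the opposite direction from the IAPS (1 = excited, 9 = calm), comparing DEVO arousal means against IAPS-style norms requires reverse-scoring. A minimal sketch (the helper name is ours, not part of the database):

```python
def reverse_score(rating, scale_min=1, scale_max=9):
    """Flip a rating on a bounded scale, e.g., to map DEVO arousal
    (1 = excited ... 9 = calm) onto the IAPS orientation (9 = excited)."""
    return scale_min + scale_max - rating

reverse_score(1)  # -> 9
```

The same transform works for any bounded rating scale whose anchors are swapped between instruments.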

Participants viewed the videos in three blocks of 97 clips each, with a break between blocks. They were tested either individually or in a room with up to three other participants, and were always monitored by the experimenter to ensure that there were no adverse reactions to viewing the emotional scenes. All participants were given three practice trials at the start of the experiment, with one negative, one positive, and one neutral clip, to familiarize themselves with the rating protocol as well as the emotional nature of the clips.

Statistical Analyses

First, we provide the ratings and familiarity scores for each video clip. We then performed k-means and hierarchical cluster analyses on the mean ratings of valence, arousal, and impact for each video clip (averaged across all participants in the given condition), using SPSS Statistics 24 software. Two different clustering techniques were used to examine the underlying organization of the video clips in the database. These clustering techniques seek to identify groups (or ‘clusters’) of relatively homogeneous video clips that have maximum heterogeneity between one another. The advantage of these multivariate techniques is that they calculate the similarity in ratings between clips while considering all three dimensions simultaneously.
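For readers working outside SPSS, the per-clip averaging step can be reproduced with standard tools. A sketch in Python with pandas, using made-up column names and values (the DEVO spreadsheet's actual layout may differ):

```python
import pandas as pd

# Hypothetical long-format data: one row per participant-by-clip rating.
df = pd.DataFrame({
    "clip": ["1.1", "1.1", "2.1", "2.1"],
    "dimension": ["valence"] * 4,
    "rating": [7, 8, 3, 4],
})

# Mean rating per clip and dimension: one row per clip,
# ready to feed into a clustering routine.
means = df.pivot_table(index="clip", columns="dimension",
                       values="rating", aggfunc="mean")
```

With all three dimensions present, `means` becomes the clips-by-(valence, arousal, impact) matrix that the cluster analyses operate on.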

The first technique, k-means clustering, allows the user to specify the desired number of clusters. For the current database, a three-cluster solution was chosen because researchers commonly select stimuli based on their assigned valence category (positive, negative, neutral). To compute a three-cluster solution, k-means first establishes k random cluster means (in this case, k = 3), and then assigns each video clip to the nearest cluster mean (Morissette & Chartier, 2013). The mean for each cluster of videos is then recalculated, and clips are reassigned to the cluster mean that is closest in value. Iterations continue until the classification of the video clips remains stable (i.e., the cluster memberships no longer change when the cluster means are recalculated).
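The iterative procedure described above can be sketched in a few lines. This is a generic k-means implementation in Python/NumPy run on invented ratings, not the SPSS routine used for the reported analyses:

```python
import numpy as np

def kmeans(points, k=3, seed=0, max_iter=100):
    """Plain k-means on an (n_clips x 3) matrix of mean valence,
    arousal, and impact ratings: assign each clip to the nearest
    cluster mean, recompute the means, and repeat until stable."""
    rng = np.random.default_rng(seed)
    # Initialize the k cluster means with randomly chosen clips.
    means = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    labels = None
    for _ in range(max_iter):
        # Squared Euclidean distance from every clip to every cluster mean.
        dists = ((points[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break  # assignments stable: the classification has converged
        labels = new_labels
        # Recompute each non-empty cluster's mean from its current members.
        for j in range(k):
            if (labels == j).any():
                means[j] = points[labels == j].mean(axis=0)
    return labels, means

# Tiny synthetic example: three tight groups of rating rows.
pts = np.vstack([np.full((5, 3), 1.0), np.full((5, 3), 5.0), np.full((5, 3), 9.0)])
labels, means = kmeans(pts, k=3)
```

At convergence, each clip's label is the index of its nearest cluster mean, which is the stability criterion described in the text.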

The second technique, hierarchical clustering, identifies clusters of homogeneous cases when the total number of clusters is unknown. Agglomerative hierarchical clustering begins by calculating the distance (here, the squared Euclidean distance) between all video clips, after which the two clips with the smallest distance are joined together to form a cluster. The process continues whereby at each step either a new cluster is formed by joining two clips together or a clip joins a previously merged cluster (the distance between clusters was calculated using the average linkage between-groups method; for further details on the procedures, see Yim & Ramdeen, 2015). The clustering process continues until all clips form one large cluster; it must therefore be stopped before the clusters become too heterogeneous. The cut-off point can be determined by examining the outputted agglomeration schedule, which lists the cases and clusters that are merged at each stage of the process as well as their relative heterogeneity (as indicated by a coefficient value; Yim & Ramdeen, 2015). The clustering was therefore cut off before the first large increase in coefficients, to ensure that the groups of clips remained relatively homogeneous. From this, the total number of clusters was obtained and the ratings for each cluster were compared. Summary information, including a description of the clips and their hierarchical cluster assignment, is provided in Table 2.
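The agglomerative procedure (squared Euclidean distance, average linkage, cut before a large increase in the coefficients) can be reproduced with SciPy. In this sketch the ratings are invented for illustration, and we cut at the single largest jump in merge coefficients as a simple proxy for "the first large increase":

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Invented (valence, arousal, impact) means for six clips.
ratings = np.array([
    [2.0, 7.0, 7.5],   # negative, high arousal/impact
    [2.2, 6.8, 7.1],
    [5.0, 2.0, 1.8],   # neutral, low arousal/impact
    [5.1, 2.2, 2.0],
    [7.9, 6.5, 6.8],   # positive, high arousal/impact
    [8.1, 6.7, 7.0],
])

# Average linkage on squared Euclidean distances, as in the text.
Z = linkage(ratings, method="average", metric="sqeuclidean")

# Column 2 of Z holds the merge coefficients of the agglomeration
# schedule; cutting before the largest jump keeps clusters homogeneous.
jumps = np.diff(Z[:, 2])
n_clusters = len(ratings) - (int(np.argmax(jumps)) + 1)
labels = fcluster(Z, t=n_clusters, criterion="maxclust")
```

Here the three pairs of similar clips merge at small coefficients, and the jump before the between-group merges yields a three-cluster solution.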

Table 2

Description and hierarchical cluster assignment of the DEVO based on Study 1.

Cluster | Clip | Source Title | Description | Duration in ms
1.1 The Red Tent Nurse smiling and helping patient 15140 
2.1 Квартирный вопрос Men smiling playing a video game 9040 
2.2 Квартирный вопрос Man giving a girl flowers 13050 
4.6 Tumba y Tumbao People receiving communion 7070 
6.1 Craft Woman eating 11010 
6.2 Craft People taking a bow 9240 
6.3 Craft People dancing and singing 12070 
7.2 Northern Skirts Girl outside waving her hands 5140 
8.1 Making Of Man breakdancing 6140 
9.1 Instructions Not Included Father and child 4070 
9.2 Instructions Not Included Swimming in water 7000 
9.3 Instructions Not Included Ocean landscape 10210 
9.4 Instructions Not Included Men in a bathroom 12040 
9.7 Instructions Not Included Man and dog on beach 4020 
10.1 Halfaouine: Child of the Terraces Woman dancing 8120 
11.1 A Citizen, a Detective & a Thief People talking on stairs 9190 
15.1 Thirteen Days of Looking at a Hummingbird Baby birds 4120 
17.1 Pistoia Zoological Garden – Part II Lemur hiccuping 4010 
17.2 Pistoia Zoological Garden – Part II Animal burrowing 4190 
17.3 Pistoia Zoological Garden – Part II Wolf walking 9140 
17.4 Pistoia Zoological Garden – Part II Bear walking 3020 
17.5 Pistoia Zoological Garden – Part II Tigers 5170 
20.1 Ice Cream Truck Couple eating ice cream 4020 
21.2 The flow of time (on Iceland) Waterfalls 3140 
21.3 The flow of time (on Iceland) Waterfalls over rocks 3170 
21.6 The flow of time (on Iceland) Waterfalls 4170 
22.3 Dolphin Races!!!! Dolphins jumping out of water 3180 
22.5 Dolphin Races!!!! Boat cruising in water 4080 
24.1 Wedding Preview: Noah and Rachael Bride walking down the aisle 4090 
24.3 Wedding Preview: Noah and Rachael Bride and groom 3000 
26.1 Jessica & Kevin at Meadowood, Napa Valley Bride and groom kissing 3010 
29.1 Sasha;) Funny little story Girl blowing bubbles 3180 
29.2 Sasha;) Funny little story Girl sleeping 4210 
30.1 Food Spreading cheese on bread 3010 
30.2 Food Cutting food 3030 
30.4 Food Spoonful of dessert 5120 
35.2 Viktor Arvidsson & Kevin Fiala Shootout Skills Hockey shootout 6020 
38.1 Cro-Mags (Full Set) Rock band performing 7140 
40.1 Vimeo Burrito Eating Contest 2013 Cash box 3180 
40.2 Vimeo Burrito Eating Contest 2013 Men eating burritos 5080 
42.1 How to make a pizza Chopping onions 5050 
42.2 How to make a pizza Stirring sauce in a pan 6170 
43.2 Nuuva Recepies: Creme Brulee Eating dessert 11100 
44.1 Nuuva cooking Crepes Making crepes 5010 
44.2 Nuuva cooking Crepes Sprinkling sugar on crepes 6070 
44.3 Nuuva cooking Crepes Woman eating food 8200 
52.3 FIRST KISS NYC Couple kissing 4240 
56.1 Swing Girls Group of friends clapping 9070 
56.2 Swing Girls Group of friends clapping 6020 
57.1 Barbie Doll Cake How to decorate a Barbie Doll/Princess Cake with icing Barbie doll cake 5020 
59.1 Klown Family eating pancakes 4070 
60.1 Hera Pheri People dancing 5200 
60.4 Hera Pheri People dancing 6200 
61.1 Soni De Nakhre Partner 2007 HD 1080p BluRay Music Video Men dancing 8110 
61.2 Soni De Nakhre (4.16) People dancing 5140 
70.3 Life in the Swamp – La vita nella Palude HD [Fauna, Wildlife] Lizard in grass 4000 
70.4 Life in the Swamp – La vita nella Palude HD [Fauna, Wildlife] Birds on pond 6060 
71.1 Komodo Dragons by Camera Backflip dive into water 4020 
71.2 Komodo Dragons by Camera Man diving off boat 4170 
79.2 American Meth 2008 Methamphetamine full documentary ! Toddler opening a fridge 9170 
102.1 Finland-Into the wild land Bear walking 8240 
102.3 Finland-Into the wild land Bear chewing 4030 
102.4 Finland-Into the wild land Fall landscape 6000 
108.2 Royal Python Hunt Mouse 3020 
111.3 Looking for Alexander Man and son in a car 3000 
112.3 Saint Rita Couple and their newborn 3000 
115.3 Unité 9: Season 1 People in prison 3000 
116.3 The 3 Little Pigs Couple kissing 3000 
117.3 7 Days Father kissing daughter 3000 
118.3 5150 Elm’s Way Couple talking 3000 
119.4 Cadavres Couple lying in bed 3000 
120.3 Welcome to the Sticks Couple kissing 3000 
121.3 Father and Guns People in car 3000 
122.3 The five of us Women playing in water 3000 
123.3 Far Side of the Moon People singing and playing piano 3000 
124.3 Funkytown Couple dancing in disco 3000 
125.2 Les invincibles Man getting dressed 3000 
125.3 Les invincibles Couple kissing 3000 
126.3 Seraphin: Heart of Stone Couple in a forest 3000 
127.3 Everything is fine Girls playing thumb war 3000 
127.4 Everything is fine People in water 3000 
128.5 Bully Young boy smiling 3000 
129.3 Bakhita: From slave to saint Priest, woman and kids smiling 3000 
130.5 Lac Mystère Couple kissing 3000 
131.2 Le baiser du barbu Couple kissing 3000 
132.2 Les Lavigueur: La vraie histoire Two people smiling 3000 
134.3 Love’s Abiding Joy Couple dancing 3000 
139.1 Nez Rouge Girl kissing santa 3000 
1.2 The Red Tent Couple riding in the snow smiling 13210 
12.1 Harlee – 1st Birthday – Cake Smash! Baby eating some cake 8160 
14.1 Dolphins – Trailer [HD] Dolphin jumping out of water 4080 
14.2 Dolphins – Trailer [HD] Dolphins and dogs 6160 
14.3 Dolphins – Trailer [HD] Man swimming around a dolphin 4030 
18.1 Day One Puppy yawning 6160 
19.1 Meet the sloths Baby sloth 3020 
19.2 Meet the sloths Baby sloths 4130 
23.1 Orion Smiling Baby boy 5160 
24.2 Wedding Preview: Noah and Rachael Bride and groom kissing 4200 
25.1 August and Sara Married in San Luis Obispo Bride and groom kissing 4190 
27.1 Shea and Kendra Married on the Central Coast Bride and groom kissing 3130 
28.1 Bethany Smiling! Baby girl 5120 
33.1 Vincents Birth Story Mom holding newborn 6020 
35.1 Viktor Arvidsson & Kevin Fiala Shootout Skills Hockey shootout 5120 
36.1 GoPro: Endless Barrels – GoPro of the Winter 2013–14 powered by Surfline Surfing 11010 
36.2 GoPro: Endless Barrels – GoPro of the Winter 2013–14 powered by Surfline Surfing 10020 
36.3 GoPro: Endless Barrels – GoPro of the Winter 2013–14 powered by Surfline Surfing 6220 
36.5 GoPro: Endless Barrels – GoPro of the Winter 2013–14 powered by Surfline Surfing 10240 
50.1 fireworks Fireworks 7080 
52.2 First Kiss NYC Couple kissing 4140 
58.1 Chocolate caramel peanut bomb – How to cook that dessert Chocolate caramel ball 9060 
72.2 Deer attacks a dog and a cat Deer approaching fawn and cat 5180 
133.3 Love Comes Softly Woman laughing 3000 
140.1 Earth Polar bears 3000 
2.3 Квартирный вопрос (Kvartirnyĭ vopros) People talking in an office 9040 
2.5 Квартирный вопрос (Kvartirnyĭ vopros) People talking in an office 9090 
4.4 Tumba y Tumbao Man taking a taxi 11150 
4.5 Tumba y Tumbao Man running away outside 8240 
5.1 Footnote Social gathering 4120 
5.2 Footnote Man putting shoes on 3010 
5.3 Footnote Man putting on ear protectors 6190 
6.4 Craft Man with a phone 10160 
7.8 Northern Skirts Looking outside a train window 5030 
8.5 Making Of Man walking on a roof 6190 
9.6 Instructions Not Included People getting out of a truck 3030 
10.2 Halfaouine: Child of the Terraces Women drinking 5210 
10.6 Halfaouine: Child of the Terraces Boy walking outside 5100 
10.7 Halfaouine: Child of the Terraces Hands tossing salad 3240 
30.3 Food Pouring honey 3010 
46.1 More Low Light Smoking… Man lighting a cigarette 9070 
47.2 Cigarette Person smoking 5110 
56.5 Swing Girls Girls playing the trombone 9030 
60.6 Hera Pheri Men hovering at a window 5120 
60.7 Hera Pheri Group of men and police officers 8120 
64.6 Children of Men Man getting out of bed 9170 
65.3 11 Flowers People walking in school court yard 6020 
66.4 The Peacekeepers Soldier directing traffic 8190 
67.3 Days of Glory Soldiers walking 5130 
67.4 Days of Glory Soldier putting a photo in a shirt 5200 
68.4 Amour Couple entering a home 6090 
70.5 Life in the Swamp – La vita nella Palude HD [Fauna, Wildlife] Bird walking across a pond 5070 
91.2 Dothan, AL – Woman wanders into traffic Car driving 3020 
98.3 Martin Acupuncture Man jogging with his dog 4130 
99.3 Couleuvre tachetée – Capsule #3 (Milk Snake) Piece of land 5230 
102.2 Finland-Into the wild land Bird on a tree 3080 
102.5 Finland-Into the wild land Bird on a pole 5010 
105.1 the fridge Looking through a fridge 5020 
110.2 Sur le seuil Man walking out of a hospital 3000 
110.3 Sur le seuil Couple in a café 3000 
111.2 Looking for Alexander Two women eating dinner 3000 
112.2 Saint Rita Woman opening a window 3000 
113.3 Sans elle Woman walking along a boardwalk 3000 
114.3 Shake Hands with the Devil People dancing 3000 
115.2 Unité 9, Season 1 People sitting outside a building 3000 
115.4 Unité 9, Season 1 Woman walking at work 3000 
116.2 The 3 Little Pigs Family getting ready in the morning 3000 
117.2 7 Days People in a corner store 3000 
118.2 5150 Elm’s Way Students leaving school 3000 
119.2 Cadavres Two people eating 3000 
119.3 Cadavres Man approaching a house 3000 
120.2 Welcome to the Sticks People eating at a table 3000 
121.2 Father and Guns Two men shaving 3000 
122.2 The five of us People at a market 3000 
123.1 Far Side of the Moon Boss yelling at employee 3000 
123.2 Far Side of the Moon Person with a walker in a hospital 3000 
124.2 Funkytown Family in a kitchen 3000 
126.2 Seraphin: Heart of Stone Man receiving a letter 3000 
127.2 Everything is fine People talking at a gas station 3000 
128.3 Bully Parents in a school office 3000 
128.4 Bully Young boy smiling 3000 
129.2 Bakhita: From slave to saint Braiding hair 3000 
130.3 Lac Mystère Man doing a puzzle 3000 
130.4 Lac Mystère Man lying down and talking 3000 
131.1 Le baiser du barbu Woman using her phone and computer 3000 
133.2 Love Comes Softly Girl cleaning up hay 3000 
134.1 Love’s Abiding Joy Woman looking at hand 3000 
134.2 Love’s Abiding Joy Woman eating soup 3000 
135.2 La reine rouge Man sweeping a hallway 3000 
136.2 Heartbeats People sorting through clothes 3000 
136.3 Heartbeats Two people sitting down for tea 3000 
137.1 February 15, 1839 People walking in snow 3000 
2.4 Квартирный вопрос (Kvartirnyĭ vopros) Men in dark alleyway 8240 
6.5 Craft Woman crying 6020 
7.5 Northern Skirts Girl crying 3210 
7.6 Northern Skirts Man kicking a car window 5020 
7.7 Northern Skirts Woman in hospital bed 7010 
8.3 Making Of Street chase 8190 
8.4 Making Of Teenagers in police office 9040 
9.5 Instructions Not Included Girl crying 6190 
10.4 Halfaouine: Child of the Terraces Man hitting boy’s feet 4050 
10.5 Halfaouine: Child of the Terraces Cutting intestines 5110 
56.4 Swing Girls Boy vomits into a tuba 10030 
60.5 Hera Pheri Men fighting 3110 
64.1 Children of Men Man with a hostage 7020 
64.2 Children of Men Man being abducted 12020 
64.3 Children of Men People looking out of a bus window 6020 
64.7 Children of Men Person smoking 6170 
66.2 The Peacekeepers Emergency vehicles on a street 8040 
68.1 Amour Dead woman on a bed 8130 
70.1 Life in the Swamp – La vita nella Palude HD [Fauna, Wildlife] Lizard eating bug 5150 
71.3 Komodo Dragons by Camera Komodo dragons 3200 
73.1 Leopard & Hyena Leopard eating an animal carcass 10060 
76.1 Plastic Soup Garbage in water 6220 
76.2 Plastic Soup Garbage in water 3180 
76.3 Plastic Soup Garbage in water 3170 
78.1 Crocodile Attacks Elephant at Watering Hole Crocodile biting an elephant’s trunk 12140 
79.1 American Meth 2008 Methamphetamine full documentary ! Toddler on ice 11010 
80.1 Teenage Heroin Epidemic Injected drugs via needle 6070 
80.3 Teenage Heroin Epidemic Sick feet 4030 
80.4 Teenage Heroin Epidemic Sick feet 3020 
80.6 Teenage Heroin Epidemic Man injecting drugs 3020 
83.1 Gopro Rammed Into Wasp Nest! Wasp nest 4060 
84.2 Remus SharkCam: The hunter and the hunted Shark biting an underwater camera 10020 
89.1 Anchorage police dash camera captures light pole collision Car accident 6130 
91.1 Dothan, AL – Woman wanders into traffic Woman jaywalking 9220 
92.1 Tennessee bank robber shootout! Cop shooting at a car windshield 11170 
93.1 Random Stop Man yelling at someone 9170 
93.2 Random Stop Shooting 8080 
94.2 VCU Riot after 2011 Final Four Police walking through a riot 3180 
94.3 VCU Riot after 2011 Final Four Group of armed policemen 3170 
97.1 Raw Video: Black Forest Fire House on fire 9040 
98.1 Martin Acupuncture Acupuncture needle 5190 
99.1 Couleuvre tachetée – Capsule #3 (Milk Snake) Snake slithering across rocks 9100 
99.2 Couleuvre tachetée – Capsule #3 (Milk Snake) Snake hissing 3020 
100.1 NatureAlive By Penjo Baba Ants crawling on plastic 7030 
100.2 NatureAlive By Penjo Baba Centipede crawling on ground 9180 
104.2 SAMSARA food sequence Chicken slaughter house 4080 
108.1 Royal Python Hunt Snake hissing 4090 
109.1 Historic Flash Flood in Zion National Park (The Narrows) Rushing water 6000 
109.2 Historic Flash Flood in Zion National Park (The Narrows) Rushing water 13010 
111.1 Looking for Alexander Woman crying 3000 
112.1 Saint Rita Man on the ground 3000 
113.1 Sans elle Two women on a beach 3000 
113.2 Sans elle Woman with ropes 3000 
114.1 Shake Hands with the Devil Army station 3000 
115.1 Unité 9: Season 1 Woman locked in a room 3000 
116.1 The 3 Little Pigs Man with a police officer 3000 
117.1 7 Days Man lying on the ground 3000 
118.1 5150 Elm’s Way Person fell off bicycle 3000 
119.1 Cadavres Woman holding a gun 3000 
120.1 Welcome to the Sticks Man stumbling outside 3000 
121.1 Father and Guns Men shot in chest 3000 
124.1 Funkytown Man injecting a needle 3000 
125.1 Les invincibles Employee in a chicken slaughter house 3000 
126.1 Seraphin: Heart of Stone Man with a dead horse 3000 
127.1 Everything is fine Man crying 3000 
128.1 Bully Couple at a grave 3000 
128.2 Bully Girl crying 3000 
130.2 Lac Mystère Man reaching for a canoe 3000 
133.1 Love Comes Softly Girl crying 3000 
136.1 Heartbeats Woman getting attacked 3000 
137.2 February 15, 1839 Man being hung 3000 
138.1 La vie après l’amour Man at the dentist 3000 
4.2 Tumba y Tumbao Man attacking woman 5120 
7.4 Northern Skirts Couple physically fighting 4180 
11.2 A Citizen, a Detective & a Thief Man striking with a whip 11140 
64.5 Children of Men Armed forces intervention 7170 
66.1 The Peacekeepers People with an animal carcass 8160 
66.3 The Peacekeepers Men being held down by armed forces 12020 
67.2 Days of Glory Man at a cemetery 10070 
80.2 Teenage Heroin Epidemic Man injecting a needle into his arm 3140 
82.1 Mass Slaughter Of Pilot Whales In The Faroe Islands. Warning. Graphic. Injured whales 5110 
82.2 Mass Slaughter Of Pilot Whales In The Faroe Islands. Warning. Graphic. Whale hunting 4080 
90.1 1368–15 48 Pct Robbery Group beating up someone 9030 
93.3 Random Stop Shooting 11010 
95.1 Cận cảnh xe buýt bị nước lũ cuốn trôi School bus drifting in flood 9220 
96.1 Enjoying a 20 year old crystal pepsi (warning: vomit alert) Man throwing up 13220 
104.1 Samsara food sequence Chicken farm 9230 
104.3 Samsara food sequence Chicken farm 8010 
110.1 Sur le seuil Man with stomach bleeding 3000 
114.2 Shake Hands with the Devil Dead woman 3000 
122.1 The five of us Woman being pulled from a car 3000 
129.1 Bakhita: From slave to saint Woman getting attacked 3000 
130.1 Lac Mystère Two men fighting 3000 
132.1 Les Lavigueur: La vraie histoire Man vomiting 3000 
135.1 La reine rouge Man being stabbed 3000 
4.1 Tumba y Tumbao Couple dancing at a club 6140 
4.3 Tumba y Tumbao Man eating ice cream 12190 
7.3 Northern Skirts Couple drinking alcohol 10160 
11.3 A Citizen, a Detective & a Thief Man kicked out of an apartment 9020 
32.1 We Were Soldiers (5/9) Movie CLIP – Arriving in North Vietnam (2002) HD Flying helicopters 8030 
48.1 Avalanche Skier POV Helmet Cam Burial & Rescue in Haines, Alaska People skiing 9010 
49.1 Marina Bay Sands Skypark BASE Jump. Singapore 2012. Base jumping 3200 
49.2 Marina Bay Sands Skypark BASE Jump. Singapore 2012. Base jumping 3100 
51.1 Intimacy, by Angela Groen & Rob Bahou Couple kissing 4180 
52.1 First kiss nyc Couple kissing 3190 
54.2 EJT Massage for men Person caressing man 8070 
55.2 EJT Massage for women Couple hugging after massage 10010 
98.2 Martin Acupuncture Acupuncture needles on back 3210 
37.1 GoPro HD: Avalanche Cliff Jump – TV Commercial – You in HD Ski cliff jump 5070 
84.1 Remus SharkCam: The hunter and the hunted Shark biting an underwater camera 14030 
88.1 Base Jump Chute Failure, Miracle Save! Parachute accident 11090 
Cluster  Clip  Source Title  Description  Duration (ms)

1.1 The Red Tent Nurse smiling and helping patient 15140 
2.1 Квартирный вопрос Men smiling playing a video game 9040 
2.2 Квартирный вопрос Man giving a girl flowers 13050 
4.6 Tumba y Tumbao People receiving communion 7070 
6.1 Craft Woman eating 11010 
6.2 Craft People taking a bow 9240 
6.3 Craft People dancing and singing 12070 
7.2 Northern Skirts Girl outside waving her hands 5140 
8.1 Making Of Man breakdancing 6140 
9.1 Instructions Not Included Father and child 4070 
9.2 Instructions Not Included Swimming in water 7000 
9.3 Instructions Not Included Ocean landscape 10210 
9.4 Instructions Not Included Men in a bathroom 12040 
9.7 Instructions Not Included Man and dog on beach 4020 
10.1 Halfaouine: Child of the Terraces Woman dancing 8120 
11.1 A Citizen, a Detective & a Thief People talking on stairs 9190 
15.1 Thirteen Days of Looking at a Hummingbird Baby birds 4120 
17.1 Pistoia Zoological Garden – Part II Lemur hiccuping 4010 
17.2 Pistoia Zoological Garden – Part II Animal burrowing 4190 
17.3 Pistoia Zoological Garden – Part II Wolf walking 9140 
17.4 Pistoia Zoological Garden – Part II Bear walking 3020 
17.5 Pistoia Zoological Garden – Part II Tigers 5170 
20.1 Ice Cream Truck Couple eating ice cream 4020 
21.2 The flow of time (on Iceland) Waterfalls 3140 
21.3 The flow of time (on Iceland) Waterfalls over rocks 3170 
21.6 The flow of time (on Iceland) Waterfalls 4170 
22.3 Dolphin Races!!!! Dolphins jumping out of water 3180 
22.5 Dolphin Races!!!! Boat cruising in water 4080 
24.1 Wedding Preview: Noah and Rachael Bride walking down the aisle 4090 
24.3 Wedding Preview: Noah and Rachael Bride and groom 3000 
26.1 Jessica & Kevin at Meadowood, Napa Valley Bride and groom kissing 3010 
29.1 Sasha;) Funny little story Girl blowing bubbles 3180 
29.2 Sasha;) Funny little story Girl sleeping 4210 
30.1 Food Spreading cheese on bread 3010 
30.2 Food Cutting food 3030 
30.4 Food Spoonful of dessert 5120 
35.2 Viktor Arvidsson & Kevin Fiala Shootout Skills Hockey shootout 6020 
38.1 Cro-Mags (Full Set) Rock band performing 7140 
40.1 Vimeo Burrito Eating Contest 2013 Cash box 3180 
40.2 Vimeo Burrito Eating Contest 2013 Men eating burritos 5080 
42.1 How to make a pizza Chopping onions 5050 
42.2 How to make a pizza Stirring sauce in a pan 6170 
43.2 Nuuva Recepies: Creme Brulee Eating dessert 11100 
44.1 Nuuva cooking Crepes Making crepes 5010 
44.2 Nuuva cooking Crepes Sprinkling sugar on crepes 6070 
44.3 Nuuva cooking Crepes Woman eating food 8200 
52.3 FIRST KISS NYC Couple kissing 4240 
56.1 Swing Girls Group of friends clapping 9070 
56.2 Swing Girls Group of friends clapping 6020 
57.1 Barbie Doll Cake How to decorate a Barbie Doll/Princess Cake with icing Barbie doll cake 5020 
59.1 Klown Family eating pancakes 4070 
60.1 Hera Pheri People dancing 5200 
60.4 Hera Pheri People dancing 6200 
61.1 Soni De Nakhre Partner 2007 HD 1080p BluRay Music Video Men dancing 8110 
61.2 Soni De Nakhre (4.16) People dancing 5140 
70.3 Life in the Swamp – La vita nella Palude HD [Fauna, Wildlife] Lizard in grass 4000 
70.4 Life in the Swamp – La vita nella Palude HD [Fauna, Wildlife] Birds on pond 6060 
71.1 Komodo Dragons by Camera Backflip dive into water 4020 
71.2 Komodo Dragons by Camera Man diving off boat 4170 
79.2 American Meth 2008 Methamphetamine full documentary ! Toddler opening a fridge 9170 
102.1 Finland-Into the wild land Bear walking 8240 
102.3 Finland-Into the wild land Bear chewing 4030 
102.4 Finland-Into the wild land Fall landscape 6000 
108.2 Royal Python Hunt Mouse 3020 
111.3 Looking for Alexander Man and son in a car 3000 
112.3 Saint Rita Couple and their newborn 3000 
115.3 Unité 9: Season 1 People in prison 3000 
116.3 The 3 Little Pigs Couple kissing 3000 
117.3 7 Days Father kissing daughter 3000 
118.3 5150 Elm’s Way Couple talking 3000 
119.4 Cadavres Couple lying in bed 3000 
120.3 Welcome to the Sticks Couple kissing 3000 
121.3 Father and Guns People in car 3000 
122.3 The five of us Women playing in water 3000 
123.3 Far Side of the Moon People singing and playing piano 3000 
124.3 Funkytown Couple dancing in disco 3000 
125.2 Les invincibles Man getting dressed 3000 
125.3 Les invincibles Couple kissing 3000 
126.3 Seraphin: Heart of Stone Couple in a forest 3000 
127.3 Everything is fine Girls playing thumb war 3000 
127.4 Everything is fine People in water 3000 
128.5 Bully Young boy smiling 3000 
129.3 Bakhita: From slave to saint Priest, woman and kids smiling 3000 
130.5 Lac Mystère Couple kissing 3000 
131.2 Le baiser du barbu Couple kissing 3000 
132.2 Les Lavigueur: La vraie histoire Two people smiling 3000 
134.3 Love’s Abiding Joy Couple dancing 3000 
139.1 Nez Rouge Girl kissing santa 3000 
1.2 The Red Tent Couple riding in the snow smiling 13210 
12.1 Harlee – 1st Birthday – Cake Smash! Baby eating some cake 8160 
14.1 Dolphins – Trailer [HD] Dolphin jumping out of water 4080 
14.2 Dolphins – Trailer [HD] Dolphins and dogs 6160 
14.3 Dolphins – Trailer [HD] Man swimming around a dolphin 4030 
18.1 Day One Puppy yawning 6160 
19.1 Meet the sloths Baby sloth 3020 
19.2 Meet the sloths Baby sloths 4130 
23.1 Orion Smiling Baby boy 5160 
24.2 Wedding Preview: Noah and Rachael Bride and groom kissing 4200 
25.1 August and Sara Married in San Luis Obispo Bride and groom kissing 4190 
27.1 Shea and Kendra Married on the Central Coast Bride and groom kissing 3130 
28.1 Bethany Smiling! Baby girl 5120 
33.1 Vincents Birth Story Mom holding newborn 6020 
35.1 Viktor Arvidsson & Kevin Fiala Shootout Skills Hockey shootout 5120 
36.1 GoPro: Endless Barrels – GoPro of the Winter 2013–14 powered by Surfline Surfing 11010 
36.2 GoPro: Endless Barrels – GoPro of the Winter 2013–14 powered by Surfline Surfing 10020 
36.3 GoPro: Endless Barrels – GoPro of the Winter 2013–14 powered by Surfline Surfing 6220 
36.5 GoPro: Endless Barrels – GoPro of the Winter 2013–14 powered by Surfline Surfing 10240 
50.1 fireworks Fireworks 7080 
52.2 First Kiss NYC Couple kissing 4140 
58.1 Chocolate caramel peanut bomb – How to cook that dessert Chocolate caramel ball 9060 
72.2 Deer attacks a dog and a cat Deer approaching fawn and cat 5180 
133.3 Love Comes Softly Woman laughing 3000 
140.1 Earth Polar bears 3000 
2.3 Квартирный вопрос (Kvartirnyĭ vopros) People talking in an office 9040 
2.5 Квартирный вопрос (Kvartirnyĭ vopros) People talking in an office 9090 
4.4 Tumba y Tumbao Man taking a taxi 11150 
4.5 Tumba y Tumbao Man running away outside 8240 
5.1 Footnote Social gathering 4120 
5.2 Footnote Man putting shoes on 3010 
5.3 Footnote Man putting on ear protectors 6190 
6.4 Craft Man with a phone 10160 
7.8 Northern Skirts Looking outside a train window 5030 
8.5 Making Of Man walking on a roof 6190 
9.6 Instructions Not Included People getting out of a truck 3030 
10.2 Halfaouine: Child of the Terraces Women drinking 5210 
10.6 Halfaouine: Child of the Terraces Boy walking outside 5100 
10.7 Halfaouine: Child of the Terraces Hands tossing salad 3240 
30.3 Food Pouring honey 3010 
46.1 More Low Light Smoking… Man lighting a cigarette 9070 
47.2 Cigarette Person smoking 5110 
56.5 Swing Girls Girls playing the trombone 9030 
60.6 Hera Pheri Men hovering at a window 5120 
60.7 Hera Pheri Group of men and police officers 8120 
64.6 Children of Men Man getting out of bed 9170 
65.3 11 Flowers People walking in school court yard 6020 
66.4 The Peacekeepers Soldier directing traffic 8190 
67.3 Days of Glory Soldiers walking 5130 
67.4 Days of Glory Soldier putting a photo in a shirt 5200 
68.4 Amour Couple entering a home 6090 
70.5 Life in the Swamp – La vita nella Palude HD [Fauna, Wildlife] Bird walking across a pond 5070 
91.2 Dothan, AL – Woman wanders into traffic Car driving 3020 
98.3 Martin Acupuncture Man jogging with his dog 4130 
99.3 Couleuvre tachetée – Capsule #3 (Milk Snake) Piece of land 5230 
102.2 Finland-Into the wild land Bird on a tree 3080 
102.5 Finland-Into the wild land Bird on a pole 5010 
105.1 the fridge Looking through a fridge 5020 
110.2 Sur le seuil Man walking out of a hospital 3000 
110.3 Sur le seuil Couple in a café 3000 
111.2 Looking for Alexander Two women eating dinner 3000 
112.2 Saint Rita Woman opening a window 3000 
113.3 Sans elle Woman walking along a boardwalk 3000 
114.3 Shake Hands with the Devil People dancing 3000 
115.2 Unité 9, Season 1 People sitting outside a building 3000 
115.4 Unité 9, Season 1 Woman walking at work 3000 
116.2 The 3 Little Pigs Family getting ready in the morning 3000 
117.2 7 Days People in a corner store 3000 
118.2 5150 Elm’s Way Students leaving school 3000 
119.2 Cadavres Two people eating 3000 
119.3 Cadavres Man approaching a house 3000 
120.2 Welcome to the Sticks People eating at a table 3000 
121.2 Father and Guns Two men shaving 3000 
122.2 The five of us People at a market 3000 
123.1 Far Side of the Moon Boss yelling at employee 3000 
123.2 Far Side of the Moon Person with a walker in a hospital 3000 
124.2 Funkytown Family in a kitchen 3000 
126.2 Seraphin: Heart of Stone Man receiving a letter 3000 
127.2 Everything is fine People talking at a gas station 3000 
128.3 Bully Parents in a school office 3000 
128.4 Bully Young boy smiling 3000 
129.2 Bakhita: From slave to saint Braiding hair 3000 
130.3 Lac Mystère Man doing a puzzle 3000 
130.4 Lac Mystère Man lying down and talking 3000 
131.1 Le baiser du barbu Woman using her phone and computer 3000 
133.2 Love Comes Softly Girl cleaning up hay 3000 
134.1 Love’s Abiding Joy Woman looking at hand 3000 
134.2 Love’s Abiding Joy Woman eating soup 3000 
135.2 La reine rouge Man sweeping a hallway 3000 
136.2 Heartbeats People sorting through clothes 3000 
136.3 Heartbeats Two people sitting down for tea 3000 
137.1 February 15, 1839 People walking in snow 3000 
2.4 Квартирный вопрос (Kvartirnyĭ vopros) Men in dark alleyway 8240 
6.5 Craft Woman crying 6020 
7.5 Northern Skirts Girl crying 3210 
7.6 Northern Skirts Man kicking a car window 5020 
7.7 Northern Skirts Woman in hospital bed 7010 
8.3 Making Of Street chase 8190 
8.4 Making Of Teenagers in police office 9040 
9.5 Instructions Not Included Girl crying 6190 
10.4 Halfaouine: Child of the Terraces Man hitting boy’s feet 4050 
10.5 Halfaouine: Child of the Terraces Cutting intestines 5110 
56.4 Swing Girls Boy vomits into a tuba 10030 
60.5 Hera Pheri Men fighting 3110 
64.1 Children of Men Man with a hostage 7020 
64.2 Children of Men Man being abducted 12020 
64.3 Children of Men People looking out of a bus window 6020 
64.7 Children of Men Person smoking 6170 
66.2 The Peacekeepers Emergency vehicles on a street 8040 
68.1 Amour Dead woman on a bed 8130 
70.1 Life in the Swamp – La vita nella Palude HD [Fauna, Wildlife] Lizard eating bug 5150 
71.3 Komodo Dragons by Camera Komodo dragons 3200 
73.1 Leopard & Hyena Leopard eating an animal carcass 10060 
76.1 Plastic Soup Garbage in water 6220 
76.2 Plastic Soup Garbage in water 3180 
76.3 Plastic Soup Garbage in water 3170 
78.1 Crocodile Attacks Elephant at Watering Hole Crocodile biting an elephant’s trunk 12140 
79.1 American Meth 2008 Methamphetamine full documentary ! Toddler on ice 11010 
80.1 Teenage Heroin Epidemic Injected drugs via needle 6070 
80.3 Teenage Heroin Epidemic Sick feet 4030 
80.4 Teenage Heroin Epidemic Sick feet 3020 
80.6 Teenage Heroin Epidemic Man injecting drugs 3020 
83.1 Gopro Rammed Into Wasp Nest! Wasp nest 4060 
84.2 Remus SharkCam: The hunter and the hunted Shark biting an underwater camera 10020 
89.1 Anchorage police dash camera captures light pole collision Car accident 6130 
91.1 Dothan, AL – Woman wanders into traffic Woman jaywalking 9220 
92.1 Tennessee bank robber shootout! Cop shooting at a car windshield 11170 
93.1 Random Stop Man yelling at someone 9170 
93.2 Random Stop Shooting 8080 
94.2 VCU Riot after 2011 Final Four Police walking through a riot 3180 
94.3 VCU Riot after 2011 Final Four Group of armed policemen 3170 
97.1 Raw Video: Black Forest Fire House on fire 9040 
98.1 Martin Acupuncture Acupuncture needle 5190 
99.1 Couleuvre tachetée – Capsule #3 (Milk Snake) Snake slithering across rocks 9100 
99.2 Couleuvre tachetée – Capsule #3 (Milk Snake) Snake hissing 3020 
100.1 NatureAlive By Penjo Baba Ants crawling on plastic 7030 
100.2 NatureAlive By Penjo Baba Centipede crawling on ground 9180 
104.2 SAMSARA food sequence Chicken slaughter house 4080 
108.1 Royal Python Hunt Snake hissing 4090 
109.1 Historic Flash Flood in Zion National Park (The Narrows) Rushing water 6000 
109.2 Historic Flash Flood in Zion National Park (The Narrows) Rushing water 13010 
111.1 Looking for Alexander Woman crying 3000 
112.1 Saint Rita Man on the ground 3000 
113.1 Sans elle Two women on a beach 3000 
113.2 Sans elle Woman with ropes 3000 
114.1 Shake Hands with the Devil Army station 3000 
115.1 Unité 9: Season 1 Woman locked in a room 3000 
116.1 The 3 Little Pigs Man with a police officer 3000 
117.1 7 Days Man lying on the ground 3000 
118.1 5150 Elm’s Way Person fell off bicycle 3000 
119.1 Cadavres Woman holding a gun 3000 
120.1 Welcome to the Sticks Man stumbling outside 3000 
121.1 Father and Guns Men shot in chest 3000 
124.1 Funkytown Man injecting a needle 3000 
125.1 Les invincibles Employee in a chicken slaughter house 3000 
126.1 Seraphin: Heart of Stone Man with a dead horse 3000 
127.1 Everything is fine Man crying 3000 
128.1 Bully Couple at a grave 3000 
128.2 Bully Girl crying 3000 
130.2 Lac Mystère Man reaching for a canoe 3000 
133.1 Love Comes Softly Girl crying 3000 
136.1 Heartbeats Woman getting attacked 3000 
137.2 February 15, 1839 Man being hung 3000 
138.1 La vie après l’amour Man at the dentist 3000 
4.2 Tumba y Tumbao Man attacking woman 5120 
7.4 Northern Skirts Couple physically fighting 4180 
11.2 A Citizen, a Detective & a Thief Man striking with a whip 11140 
64.5 Children of Men Armed forces intervention 7170 
66.1 The Peacekeepers People with an animal carcass 8160 
66.3 The Peacekeepers Men being held down by armed forces 12020 
67.2 Days of Glory Man at a cemetery 10070 
80.2 Teenage Heroin Epidemic Man injecting a needle into his arm 3140 
82.1 Mass Slaughter Of Pilot Whales In The Faroe Islands. Warning. Graphic. Injured whales 5110 
82.2 Mass Slaughter Of Pilot Whales In The Faroe Islands. Warning. Graphic. Whale hunting 4080 
90.1 1368–15 48 Pct Robbery Group beating up someone 9030 
93.3 Random Stop Shooting 11010 
95.1 Cận cảnh xe buýt bị nước lũ cuốn trôi School bus drifting in flood 9220 
96.1 Enjoying a 20 year old crystal pepsi (warning: vomit alert) Man throwing up 13220 
104.1 Samsara food sequence Chicken farm 9230 
104.3 Samsara food sequence Chicken farm 8010 
110.1 Sur le seuil Man with stomach bleeding 3000 
114.2 Shake Hands with the Devil Dead woman 3000 
122.1 The five of us Woman being pulled from a car 3000 
129.1 Bakhita: From slave to saint Woman getting attacked 3000 
130.1 Lac Mystère Two men fighting 3000 
132.1 Les Lavigueur: La vraie histoire Man vomiting 3000 
135.1 La reine rouge Man being stabbed 3000 
4.1 Tumba y Tumbao Couple dancing at a club 6140 
4.3 Tumba y Tumbao Man eating ice cream 12190 
7.3 Northern Skirts Couple drinking alcohol 10160 
11.3 A Citizen, a Detective & a Thief Man kicked out of an apartment 9020 
32.1 We Were Soldiers (5/9) Movie CLIP – Arriving in North Vietnam (2002) HD Flying helicopters 8030 
48.1 Avalanche Skier POV Helmet Cam Burial & Rescue in Haines, Alaska People skiing 9010 
49.1 Marina Bay Sands Skypark BASE Jump. Singapore 2012. Base jumping 3200 
49.2 Marina Bay Sands Skypark BASE Jump. Singapore 2012. Base jumping 3100 
51.1 Intimacy, by Angela Groen & Rob Bahou Couple kissing 4180 
52.1 First kiss nyc Couple kissing 3190 
54.2 EJT Massage for men Person caressing man 8070 
55.2 EJT Massage for women Couple hugging after massage 10010 
98.2 Martin Acupuncture Acupuncture needles on back 3210 
37.1 GoPro HD: Avalanche Cliff Jump – TV Commercial – You in HD Ski cliff jump 5070 
84.1 Remus SharkCam: The hunter and the hunted Shark biting an underwater camera 14030 
88.1 Base Jump Chute Failure, Miracle Save! Parachute accident 11090 

Results

Familiarity

The familiarity scores of one participant were excluded because that participant misunderstood the instructions, responding “yes” whenever a clip looked similar to one shown earlier. On average, participants had previously seen 4.93 of the 291 clips (SD = 9.66; range = 0–96). Most participants (n = 143) had seen fewer than 5% of the clips before the study. Of the remaining 10 participants, 7 had seen 5–9%, 2 had seen 10–12%, and 1 had seen 33% of the clips.

The familiarity scores were also averaged across all participants for each clip to determine whether certain clips were more familiar than others. On average, each clip had been seen by 2.59 participants (SD = 2.64; range = 0–15). Most clips (n = 273) had been seen by fewer than 5% of participants, demonstrating that the clips were generally unfamiliar. Of the remaining 18 clips: 10 had been seen by 5–6% of participants, 7 by 7–8% of participants, and 1 by 9.8% of participants. Note that in the attached spreadsheet containing these data, 0 = “unfamiliar” and 1 = “familiar.”
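
These familiarity summaries, per-participant counts and per-clip proportions, follow directly from the 0/1 spreadsheet coding. The sketch below uses a synthetic stand-in matrix (its shape and ~2% familiarity rate are illustrative assumptions, not the DEVO data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 0/1 familiarity matrix (rows = participants, columns = clips),
# coded as in the DEVO spreadsheet: 0 = "unfamiliar", 1 = "familiar".
# The 153 x 291 shape and ~2% familiarity rate are illustrative assumptions.
familiarity = (rng.random((153, 291)) < 0.02).astype(int)

# Per-participant counts: how many of the 291 clips each person had seen.
seen_per_participant = familiarity.sum(axis=1)

# Per-clip familiarity: the proportion of participants who had seen each clip.
seen_per_clip = familiarity.mean(axis=0)

# Flag clips seen by 5% or more of the sample.
too_familiar = np.flatnonzero(seen_per_clip >= 0.05)
```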

Valence, Arousal, and Impact Ratings

The mean valence, arousal, and impact ratings for each video clip are provided in the DEVO spreadsheet. The mean valence rating was 4.92 (SD = 1.61), with scores ranging from 1.73 to 8.29; the mean arousal rating was 4.87 (SD = 1.16), with scores ranging from 2.17 to 7.50; and the mean impact rating was 3.91 (SD = 1.28), with scores ranging from 1.59 to 7.09. The ranges for arousal and impact (5.33 and 5.50 points, respectively) were somewhat smaller than the range for valence (6.56 points). See the DEVO spreadsheet for the mean ratings for men and women and the standard deviations.

Relations between the ratings of valence, arousal, and impact can be seen in Figures 1 through 7.

k-means Clustering

The k-means analysis converged on a stable cluster solution after 8 iterations, yielding three clusters that differed in valence, arousal, and impact (Table 3). Based on the mean and range of valence scores, the first cluster contained more negative clips, the second more positive clips, and the third more neutral clips. Arousal and impact were highest for the ‘negative’ cluster, lower for the ‘positive’ cluster, and lowest for the ‘neutral’ cluster.
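
As a rough illustration of this step, a plain Lloyd's-algorithm k-means can be run on a 291 × 3 matrix of mean valence, arousal, and impact scores. The data below are synthetic stand-ins loosely centred on the Table 3 cluster means, and the implementation is a generic sketch, not the authors' analysis script:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for the 291 x 3 matrix of mean valence, arousal, and
# impact ratings; the three blobs are centred near the Table 3 cluster means.
ratings = np.vstack([
    rng.normal([6.9, 3.8, 5.1], 0.7, (91, 3)),   # negative-like clips
    rng.normal([3.2, 4.5, 4.3], 0.7, (82, 3)),   # positive-like clips
    rng.normal([4.6, 5.9, 2.7], 0.6, (118, 3)),  # neutral-like clips
])

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: assign points to the nearest centroid,
    then recompute centroids, until the assignments stabilize."""
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Keep an empty cluster's old centroid rather than averaging nothing.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

labels, centroids = kmeans(ratings, k=3)
```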

Table 3

Ratings for three clusters based on k-means analysis from Study 1.

Cluster  n    Valence Mean (SD)  Valence Range  Arousal Mean (SD)  Arousal Range  Impact Mean (SD)  Impact Range

1        91   6.86 (0.72)        4.48–8.29      3.84 (0.80)        2.17–5.50      5.12 (0.97)       3.59–7.09
2        82   3.21 (0.72)        1.73–5.02      4.50 (0.72)        2.87–6.33      4.31 (0.67)       3.09–6.20
3        118  4.60 (0.75)        3.44–6.29      5.94 (0.62)        4.50–7.50      2.69 (0.55)       1.59–4.00

Note: Valence was rated from 1 (happy) to 9 (unhappy); arousal from 1 (excited) to 9 (calm); impact from 1 (no impact) to 9 (intensive impact).

Ratings of valence and arousal followed the typical inverted-U-shaped relationship (Figure 1), and ratings of valence and impact showed the typical U-shaped pattern (Figure 2). The inversion for arousal was expected because that scale was reversed, with lower scores reflecting higher levels of arousal (Study 1 produced similar patterns when men’s and women’s data were examined separately). A strong linear relationship between arousal and impact scores can be seen in Figure 3. Although most clips followed the typical pattern, three clips from the ‘negative’ cluster stand out: they appear more neutral than the rest of that cluster, yet are more arousing and impactful than other neutral clips. This, together with the particularly variable ‘positive’ cluster, suggests that there may be smaller groupings of clips in the database. A hierarchical cluster analysis was therefore performed to uncover whether smaller, yet meaningful, clusters exist.
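
The shape of the valence–arousal relation can be checked with a simple quadratic fit: on the reversed arousal scale (1 = excited), an inverted U corresponds to a negative coefficient on the squared term. The data below are synthetic, generated only to mimic the described pattern:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic per-clip means mimicking the described pattern: valence spans the
# observed 1.73-8.29 range, and arousal (reversed scale, 1 = excited) drops
# as valence moves away from neutral (~5), giving an inverted U.
valence = rng.uniform(1.7, 8.3, 291)
arousal = 7.0 - 0.3 * (valence - 5.0) ** 2 + rng.normal(0.0, 0.4, 291)

# Fit arousal = a*valence^2 + b*valence + c; a < 0 indicates an inverted U.
a, b, c = np.polyfit(valence, arousal, 2)
```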

Figure 1

Mean valence (1 = happy; 9 = unhappy) and arousal (1 = excited; 9 = calm) ratings per video clip from Study 1. The three clusters from the k-means analysis represent negative (red), positive (blue), and neutral (beige) clusters.

Figure 2

Mean valence (1 = happy; 9 = unhappy) and impact (1 = no impact; 9 = intensive impact) ratings per video clip from Study 1. The three clusters from the k-means analysis represent negative (red), positive (blue), and neutral (beige) clusters.

Figure 3

Mean impact (1 = no impact; 9 = intensive impact) and arousal (1 = excited; 9 = calm) ratings per video clip from Study 1. The three clusters from the k-means analysis represent negative (red), positive (blue), and neutral (beige) clusters.

Men and women

The k-means clustering solutions for men and women were very similar, each forming three clusters of negative, positive, and neutral clips (Table 4). Once again, arousal and impact were highest for the negative cluster, lower for the positive cluster, and lowest for the neutral cluster. More clips fell into the neutral cluster for men than for women (138 vs. 113 clips, respectively), because some clips that women considered negative were more neutral for men. Compared to men, women rated the videos higher in arousal and impact (see Figure 4).

Table 4

Mean ratings (SD) for three clusters based on k-means analysis for men and women from Study 1.

Cluster   n Men  n Women  Valence Men  Valence Women  Arousal Men  Arousal Women  Impact Men  Impact Women

Neutral   138    113      4.77 (0.77)  4.68 (0.83)    6.09 (0.69)  5.70 (0.71)    2.73 (0.56)  2.71 (0.64)
Positive  83     83       3.24 (0.55)  3.05 (0.80)    4.80 (0.72)  4.22 (0.83)    4.16 (0.67)  4.43 (0.81)
Negative  70     95       6.69 (0.68)  7.17 (0.77)    3.99 (0.77)  3.48 (0.75)    4.95 (0.90)  5.45 (1.01)

Note: Valence was rated from 1 (happy) to 9 (unhappy); arousal from 1 (excited) to 9 (calm); impact from 1 (no impact) to 9 (intensive impact).

Figure 4

Three-cluster k-means solutions based on women’s (left) and men’s (right) mean valence (1 = happy; 9 = unhappy), arousal (1 = excited; 9 = calm), and impact (1 = no impact; 9 = intensive impact) ratings per video clip from Study 1.


Hierarchical Clustering

The clustering process was stopped after stage 284 (see the clustering script in Appendix A and the agglomeration schedule in Appendix B) because the agglomeration coefficient increased by 1.48 at the next stage, whereas the increases up to that point had ranged between 0.2 and 0.5. This large jump in the coefficient suggested that the clips merged beyond that point were more heterogeneous than previously grouped clips. Stopping there eliminated the last 6 stages of the process, resulting in seven clusters (because all clips are merged into a single cluster at the final stage, each eliminated stage splits one cluster off from a larger one).
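
This stopping rule, scanning the agglomeration schedule for the first unusually large jump in the merge coefficient, can be sketched as follows. The ratings here are synthetic, and Ward linkage is an assumption (the text does not name the linkage method):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
ratings = rng.normal(5.0, 1.5, (291, 3))  # synthetic stand-in for mean ratings

# The agglomeration schedule: row i of Z records the merge at stage i + 1,
# with Z[i, 2] holding the agglomeration coefficient (merge distance).
Z = linkage(ratings, method="ward")  # Ward linkage is an assumption

# Stop before the largest jump in the coefficients, by analogy with
# stopping after stage 284 of 290 in the paper.
jumps = np.diff(Z[:, 2])
stop_stage = int(np.argmax(jumps)) + 1  # 1-indexed stage before the jump
n_clusters = len(ratings) - stop_stage  # each later stage merges one pair

labels = fcluster(Z, t=n_clusters, criterion="maxclust")
```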

The self-report ratings for the seven clusters are summarized in Table 5. The same data points as above are displayed in Figures 5, 6, and 7, this time illustrating the division into seven clusters. The clusters contained between 3 and 88 clips each. The mean and range of scores were examined to provide a meaningful description of each cluster. Because each rating was given on a 9-point Likert scale: a) valence scores between 1–3.5 were considered positive, 3.5–6.5 neutral, and 6.5–9 negative; b) arousal scores between 1–3.5 high, 3.5–6.5 medium, and 6.5–9 low; and c) impact scores between 1–3.5 low, 3.5–6.5 medium, and 6.5–9 high. Based on the valence ratings, three clusters were identified as neutral, one as positive, one as negative, and two as having varying degrees of positive-to-neutral and negative-to-neutral scores (the greater variability in valence in the latter two clusters may reflect their large number of clips). The three neutral clusters differed in their mean ratings of arousal and impact: clusters 3 and 6 were both medium in arousal, but cluster 6 was more impactful than cluster 3, whereas cluster 7 was as impactful as cluster 6 but much higher in arousal. In fact, the neutral cluster 7 contained only three videos, the same three that were described as unrepresentative of the negative cluster in the k-means analysis. Comparing the two groups of positive videos, cluster 2 was more positive, arousing, and impactful than the positive-to-neutral cluster 1. Similarly, the negative cluster 5 was, on average, more negative, arousing, and impactful than the negative-to-neutral cluster 4.
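
The descriptive labels follow mechanically from the 9-point bands just listed; a small helper makes the mapping explicit (the band edges come from the text, but which side a boundary value falls on is our assumption):

```python
def describe_cluster(valence, arousal, impact):
    """Label a cluster from its mean ratings using the 9-point bands in the
    text. Valence: 1 = happy, 9 = unhappy. Arousal: 1 = excited, 9 = calm.
    Impact: 1 = no impact, 9 = intensive impact."""
    def band(score, below, middle, above):
        # Boundary handling (< 3.5, <= 6.5) is an assumption of this sketch.
        if score < 3.5:
            return below
        return middle if score <= 6.5 else above

    return ", ".join([
        band(valence, "positive", "neutral", "negative"),
        band(arousal, "high arousal", "medium arousal", "low arousal"),
        band(impact, "low impact", "medium impact", "high impact"),
    ])

# Cluster 5 from Table 5 (valence 7.79, arousal 2.96, impact 6.39):
label = describe_cluster(7.79, 2.96, 6.39)
```

Applied to cluster 5’s means, this yields “negative, high arousal, medium impact”, matching the Table 5 description.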

Table 5

Ratings for seven clusters based on hierarchical cluster analysis from Study 1.

Cluster  n    Valence Mean (SD)  Arousal Mean (SD)  Impact Mean (SD)  Description
              Range              Range              Range

1        88   3.60 (0.51)        5.18 (0.64)        3.48 (0.54)       Positive-to-neutral, medium-low arousal/impact
              2.40–4.81          3.87–7.50          2.50–4.87
2        25   2.49 (0.38)        4.16 (0.43)        4.95 (0.48)       Positive, medium arousal/impact
              1.73–3.23          3.33–5.00          4.24–6.20
3        67   4.89 (0.55)        6.33 (0.38)        2.37 (0.49)       Neutral, medium arousal, low impact
              3.63–6.02          5.48–7.29          1.59–4.00
4        72   6.54 (0.48)        4.31 (0.61)        4.48 (0.70)       Negative-to-neutral, medium arousal/impact
              5.54–7.40          3.00–5.50          3.04–6.20
5        23   7.79 (0.30)        2.96 (0.39)        6.39 (0.43)       Negative, high arousal, medium impact
              7.27–8.29          2.33–4.00          5.61–7.09
6        13   4.39 (0.48)        3.78 (0.57)        4.54 (0.58)       Neutral, medium arousal/impact
              3.71–5.06          2.87–4.50          3.67–5.37
7        3    5.42 (0.83)        2.34 (0.23)        6.31 (0.37)       Neutral, high arousal, medium impact
              4.48–6.04          2.17–2.60          5.91–6.63

Note: Valence was rated from 1 (happy) to 9 (unhappy); arousal from 1 (excited) to 9 (calm); impact from 1 (no impact) to 9 (intensive impact).

Figure 5

Mean valence (1 = happy; 9 = unhappy) and arousal (1 = excited; 9 = calm) ratings per video clip from Study 1, organized in seven clusters based on the hierarchical cluster analysis.

Figure 6

Mean valence (1 = happy; 9 = unhappy) and impact (1 = no impact; 9 = intensive impact) ratings per video clip from Study 1, organized in seven clusters based on the hierarchical cluster analysis.

Figure 7

Mean impact (1 = no impact; 9 = intensive impact) and arousal (1 = excited; 9 = calm) ratings per video clip from Study 1, organized in seven clusters based on the hierarchical cluster analysis.


The ratings of arousal and impact did not appear to influence the positive and negative cluster descriptions differently, suggesting that perhaps only one of the two dimensions needs to be considered. As a follow-up, the same cluster analysis was performed using only two dimensions at a time (valence and arousal, or valence and impact) to determine whether the same solution would appear when considering only arousal or only impact. This greatly changed the clustering of clips, entirely eliminating either the neutral, medium-arousal/impact cluster or the positive, medium-arousal/impact cluster; both two-dimensional solutions also enlarged the already variable positive-to-neutral cluster. Thus, although arousal and impact were highly correlated and appeared to influence the emotional clusters to the same extent, both were necessary for generating the seven meaningful clusters.
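
This follow-up, refitting the cluster analysis on two of the three dimensions and comparing the resulting partitions, can be sketched as below. The ratings are synthetic, and both the Ward linkage and the pairwise-agreement score are our assumptions rather than the authors’ procedure:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
# Columns: valence, arousal, impact (synthetic stand-in for the mean ratings).
ratings = rng.normal(5.0, 1.5, (291, 3))

def seven_clusters(X):
    """Cut a hierarchical (Ward, by assumption) dendrogram at seven clusters."""
    return fcluster(linkage(X, method="ward"), t=7, criterion="maxclust")

full = seven_clusters(ratings)                   # valence + arousal + impact
no_impact = seven_clusters(ratings[:, [0, 1]])   # valence + arousal only
no_arousal = seven_clusters(ratings[:, [0, 2]])  # valence + impact only

def pair_agreement(a, b):
    """Fraction of clip pairs that two solutions treat the same way
    (both together, or both apart); 1.0 means identical partitions."""
    same_a = a[:, None] == a[None, :]
    same_b = b[:, None] == b[None, :]
    mask = np.triu(np.ones((len(a), len(a)), dtype=bool), k=1)
    return float((same_a == same_b)[mask].mean())
```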

After we completed Study 1, reviewers made several suggestions, including that we collect more rating data, collect the three emotion ratings (i.e., valence, arousal, and impact) within subjects, reverse the arousal rating scale, and collect data online. We made these changes and collected a new set of ratings of the full set of video clips from undergraduate students at the same university as in Study 1. Our primary goal was to determine the reliability of the ratings collected in Study 1 (collapsing across men and women).

Methods

Participants

One hundred twenty-four adults (94 women, 30 men; mean age = 19.36 years; SD = 1.99 years) were each presented with all 291 video clips for rating (we removed six additional participants: five were duplicate cases, and one had rated valence as “5” for almost all of the stimuli). As in Study 1, participants were recruited from the University of Ottawa undergraduate research pool and given partial course credit for their participation. The study was approved by the University of Ottawa Research Ethics Board (#H08-14-25). All participants provided informed consent at the outset.

Procedures

We collected ratings of each video clip online, using Qualtrics (https://www.qualtrics.com); participants made their ratings outside the laboratory and were encouraged to take breaks if they wished. The 291 video clips were presented one at a time (in a separate random order for each participant), and ratings of impact, arousal, and valence (always in that order) were made for each video after it was shown. The rating scales and endpoints were the same as in Study 1, except that we reversed the arousal scale so that 1 = calm and 9 = excited (matching the direction of the impact scale). We kept the direction of the valence scale the same as in Study 1, for the sake of internal replication. Participants were then asked to rate familiarity by indicating whether they had seen that particular video before (answering “yes” or “no” for each).

To orient participants, we presented three example video clips (resembling the ones from the database) at the beginning of the study and had participants rate each example’s impact, arousal, valence, and familiarity. The responses to these practice questions were not scored, but served to help participants become comfortable with the process. In addition, in our debriefing at the end of the study we asked: “We understand that when completing online studies, there are sometimes unavoidable distractions or interruptions. Please tell us if you took any breaks or were distracted/interrupted during the study (for how long, during which part, source of distraction, etc.)”. This allowed participants to reflect on their participation and report any distractions that might have made their responses unreliable.

Results

At the outset, we make two notes. First, many participants reported at debriefing that they had taken one or more breaks during the ratings, sometimes to handle a distraction (e.g., an incoming text or phone call). A handful reported that they had rated the videos while also performing another task (e.g., watching television, waiting for clients at work, or attending a lecture[!]), but this is perhaps to be expected with online data collection.

Second, whereas in Study 1 all participants provided ratings of all of the videos they were shown (i.e., they did not “skip” any questions), in Study 2 no question received a 100% response rate. The response rates per video were: arousal (M = 50%; SD = 9%; range = 30–73%), impact (M = 52%; SD = 10%; range = 26–76%), valence (M = 70%; SD = 7%; range = 46–82%), and familiarity (M = 79%; SD = 2%; range = 74–84%). Several participants indicated in the online debriefing notes that they found the survey too long, and they may have skipped questions to finish faster. The lower response rates for arousal and impact relative to valence and familiarity may be meaningful, but we can only speculate as to why. Only five of the participants reported any technical problem with the video presentation (e.g., “The last videos that [I] skipped wouldn’t load”; “Some clips in the last quarter of the study wouldn’t play”; “There are some videos that didn’t work”), which could reflect problems on their particular devices.
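Because Study 2 allowed skipped questions, per-video response rates can be computed as the share of non-missing ratings in each column of a participants-by-videos matrix. A small sketch with made-up data (this matrix is illustrative, not our dataset):

```python
import numpy as np

# Hypothetical ratings: rows = participants, columns = videos,
# with NaN marking a skipped question.
ratings = np.array([
    [5.0, np.nan, 7.0],
    [4.0, 6.0, np.nan],
    [np.nan, np.nan, 3.0],
])

# Per-video response rate = proportion of non-missing entries per column.
response_rate = np.mean(~np.isnan(ratings), axis=0)  # [0.667, 0.333, 0.667]
```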

However, given the similarities between these data and those from Study 1 outlined below, we are confident that these data corroborate, and provide evidence for the validity of, the ratings that we collected in Study 1.

Familiarity

Turning to the video clips: on average, each clip had previously been seen by 2.5% of participants (range = 0–16.3%). A total of 199 clips were familiar to 5% or fewer of participants, 54 clips to between 5% and 10%, and 7 clips to more than 10% (2 of those 7 were familiar to more than 15%).
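Binning clips by the proportion of raters who recognized them is straightforward; a sketch with hypothetical familiarity proportions (not values from the DEVO spreadsheet):

```python
# Hypothetical per-clip familiarity (proportion of raters who had seen it).
familiarity = [0.0, 0.02, 0.051, 0.08, 0.12, 0.163]

low = sum(f <= 0.05 for f in familiarity)            # familiar to <= 5%
medium = sum(0.05 < f <= 0.10 for f in familiarity)  # between 5% and 10%
high = sum(f > 0.10 for f in familiarity)            # over 10%
# (low, medium, high) == (2, 2, 2)
```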

Valence, Arousal, and Impact Ratings

The mean valence, arousal, and impact ratings for each video clip are provided in the DEVO spreadsheet. The video clips had an average valence rating of 5.13 (SD = 1.15; range = 2.81–8.23), an average arousal rating of 4.19 (SD = 0.86; range = 2.30–6.40), and an average impact rating of 3.98 (SD = 0.94; range = 2.11–6.89). The average standard deviations (across clips) for arousal, impact, and valence were 2.01, 2.06, and 1.26, respectively. Relations between the three rating variables can be seen in Figures 8, 9, and 10.

Figure 8

Mean valence (1 = happy; 9 = unhappy) and arousal (1 = calm; 9 = excited) ratings per video clip from Study 2, organized in five clusters based on the hierarchical cluster analysis.

Figure 9

Mean valence (1 = happy; 9 = unhappy) and impact (1 = no impact; 9 = intensive impact) ratings per video clip from Study 2, organized in five clusters based on the hierarchical cluster analysis.

Figure 10

Mean impact (1 = no impact; 9 = intensive impact) and arousal (1 = calm; 9 = excited) ratings per video clip from Study 2, organized in five clusters based on the hierarchical cluster analysis.


Hierarchical Clustering

We chose to stop the clustering process after the 286th step, because the following step produced a large increase in the coefficient in the agglomeration schedule (see Appendix B). Prior to the 286th step, the coefficient increased in relatively small increments (from 0.01 to 0.33 per step). Figures 8, 9, and 10 show the five clusters from the hierarchical cluster analysis of the Study 2 ratings.
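The stopping rule above, a large jump in the agglomeration coefficient, can be located programmatically by scanning successive differences. A sketch using made-up coefficients (not the values in Appendix B):

```python
import numpy as np

# Hypothetical tail of an agglomeration schedule's coefficient column.
coefficients = np.array([5.28, 5.76, 6.99, 8.69, 9.02, 17.97])

# The largest increase between consecutive steps marks a natural stopping
# point: stop merging just before that jump.
increases = np.diff(coefficients)
stop_index = int(np.argmax(increases))  # step just before the big jump
```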

The five clusters varied in valence and in arousal/impact. The cluster sizes, means, standard deviations, ranges, and descriptions can be found in Table 6; cluster sizes ranged from 6 to 144 videos. The means and ranges were taken into account in order to describe the clusters in a meaningful way. As in Study 1, we considered a valence rating between 1 and 3.5 as positive, between 3.5 and 6.5 as neutral, and between 6.5 and 9 as negative. For arousal and impact, we considered a rating between 1 and 3.5 as low, between 3.5 and 6.5 as medium, and between 6.5 and 9 as high.
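These cutoffs can be expressed as simple labeling functions (our own helper names; the paper does not specify which band a boundary value such as exactly 3.5 belongs to, so here boundaries fall in the middle band):

```python
def classify_valence(mean_rating):
    """Label a mean valence rating (1 = happy, 9 = unhappy):
    1-3.5 positive, 3.5-6.5 neutral, 6.5-9 negative."""
    if mean_rating < 3.5:
        return "positive"
    if mean_rating <= 6.5:
        return "neutral"
    return "negative"

def classify_intensity(mean_rating):
    """Label a mean arousal or impact rating:
    1-3.5 low, 3.5-6.5 medium, 6.5-9 high."""
    if mean_rating < 3.5:
        return "low"
    if mean_rating <= 6.5:
        return "medium"
    return "high"
```

For example, a mean valence of 7.40 falls in the negative band, consistent with cluster 3’s description in Table 6.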

Table 6

Ratings for five clusters based on the hierarchical cluster analysis from Study 2.

Cluster | N | Valence Mean (SD) | Valence Range | Arousal Mean (SD) | Arousal Range | Impact Mean (SD) | Impact Range | Description
--- | --- | --- | --- | --- | --- | --- | --- | ---
1 | 144 | 4.56 (0.41) | 3.68–5.31 | 3.63 (0.61) | 2.3–4.99 | 4.28 (0.49) | 2.11–4.23 | Positive-to-neutral; low arousal/impact
2 | 81 | 6.09 (0.52) | 5–7.17 | 4.29 (0.50) | 3.21–5.2 | 4.28 (0.55) | 3.12–5.26 | Negative-to-neutral; medium-low arousal/impact
3 | 25 | 7.40 (0.42) | 6.4–8.23 | 5.49 (0.43) | 4.27–6.27 | 5.88 (0.51) | 4.94–6.89 | Negative; medium arousal/impact
4 | 35 | 3.60 (0.37) | 2.81–4.11 | 5.01 (0.46) | 4.17–5.65 | 4.49 (0.38) | 3.76–5.47 | Positive; medium-low arousal/impact
5 | 6 | 5.42 (0.41) | 4.82–6.06 | 5.92 (0.37) | 5.51–6.4 | 5.33 (0.48) | 4.69–6.09 | Neutral; medium-high arousal; medium impact

Note: Valence was rated from 1 (happy) to 9 (unhappy); arousal from 1 (calm) to 9 (excited); impact from 1 (no impact) to 9 (intensive impact).

We found five meaningful clusters, which varied in levels of valence: positive, positive-to-neutral, neutral, negative-to-neutral, and negative. No cluster had high levels of arousal or impact. Arousal and impact did not seem to differentially influence the clusters formed, perhaps because arousal and impact were highly correlated (as shown in Figure 10). The largest cluster had 144 videos with positive-to-neutral valence, low impact, and low arousal: These clips included videos of human activities (e.g., socializing, eating, dancing, kissing) and videos of animals. The smallest cluster had 6 videos with neutral valence, medium-high arousal and medium impact: These clips included ski cliff jumping, a shark biting an underwater camera, and parachute jumping.

The cluster with negative valence also had medium arousal and medium impact, including clips of violence between animals, hunting, human death, and pollution. The cluster with positive valence had medium-low arousal and medium-low impact, including clips of marriages, baby animals, hockey shootouts, desserts and family relationships. Lastly, the negative-to-neutral valence cluster had medium-low arousal and medium-low impact, including clips of animals (e.g., wasps, snakes), people crying, and rushing water.

Spearman correlation comparing in-person and web-based ratings

Reassuringly, the ratings from the Study 2 (i.e., web-based) participants were similar in rank order to those from Study 1 (i.e., in-person): Spearman’s ρ = –0.90 for arousal (negative because the direction of the arousal scale was reversed in Study 2), ρ = 0.93 for impact, and ρ = 0.98 for valence. Figures 11, 12, and 13 show these relations across the two sets of raters (i.e., Studies 1 and 2).
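For data without ties, Spearman’s ρ is simply Pearson’s r computed on ranks, and a scale reversal flips its sign, which is why strong cross-study agreement for arousal appears as a strongly negative ρ. A numpy-only sketch with made-up per-clip means:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (no-ties case): Pearson r on the ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical per-clip mean ratings from two rater samples.
study1 = [2.1, 3.4, 5.0, 6.2, 7.8]
study2 = [2.0, 3.6, 4.9, 6.5, 7.7]          # same rank order -> rho = 1.0
study2_reversed = [10 - r for r in study2]  # reversed scale -> rho = -1.0
```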

Figure 11

Mean valence ratings per video clip from Study 1 and Study 2.

Figure 12

Mean arousal ratings per video clip from Study 1 and Study 2.

Figure 13

Mean impact ratings per video clip from Study 1 and Study 2.


Discussion

The purpose of this study was to create a set of 291 brief and unfamiliar video clips to complement existing libraries. We are making these clips available to colleagues, and suggest local exploration and validation before using them. Nonetheless, as a starting point we have provided basic subjective ratings from young Canadians. Participants reported whether they had seen each video prior to the experiment, which indicated that the videos were generally unfamiliar. They also rated their feelings of emotional valence, arousal, or impact after each clip. As expected, there was a wide range of positive, negative, and neutral clips with varying degrees of arousal and impact. In general, positive and negative clips were more arousing and impactful than neutral clips. A closer examination of the underlying structure of the clips was provided by the k-means and hierarchical cluster analyses.

Arousal versus Impact

Researchers are beginning to explore the distinct effects of arousal and impact on emotion processing. Until recently, arousal garnered much of the research focus, especially in relation to emotional memory (e.g., McGaugh, 2000, 2015; McGaugh et al., 1993). Yet some work suggests that when arousal is controlled, impact influences visual attention allocation (Murphy, Hill, Ramponi, Calder, & Barnard, 2010) and amygdala activation when viewing negative images (Ewbank, Barnard, Croucher, Ramponi, & Calder, 2009). In fact, impact may even be a better predictor of recognition memory than arousal, despite the two dimensions being significantly correlated (Croucher et al., 2011). In the present set of ratings, arousal and impact were highly correlated and did not appear to differentially influence the positive, negative, and neutral clusters generated by the k-means analysis in Study 1. The patterns of arousal and impact also seemed similar to one another in the hierarchical cluster analysis from Study 2. However, a number of subtle differences between arousal and impact can be seen, for example when closely examining the seven clusters from the hierarchical cluster analysis in Study 1. Despite their substantial overlap, arousal and impact each contributed independently to the underlying structure of the clusters, because removing either rating reduced the number and homogeneity of clusters. The structure of the clips in the database therefore depended on both arousal and impact, in addition to valence. Moreover, impact differentiated between two neutral, medium-arousal clusters, whereas arousal differentiated between two neutral, medium-impact clusters. A final difference was observed in the extremity of scores: on average, impact scores were less extreme than arousal scores. Only 9 video clips had an impact score greater than 6.5 (high impact), whereas 38 clips had an arousal score below 3.5 (high arousal, because Study 1’s arousal scale ran from 1 = excited to 9 = calm). Indeed, the highest average impact for a single clip was 7.09, whereas the lowest (i.e., most arousing) average arousal value was 2.17. This may explain why no clusters were high in impact despite some being high in arousal (we note, however, that the average impact of clusters did increase in parallel with arousal; it simply did not surpass our absolute threshold for high impact). The difference in extremity may also reflect subtle differences between the scales. Whereas the impact scale is clearly linear from no impact (1) to intensive impact (9), arousal was measured from excited (1) to calm (9), where the midpoint (5) represents a state that is neither calm nor excited (as per the IAPS instructions; Lang et al., 2008). More linear measurements of arousal (e.g., the affective slider from Betella & Verschure, 2016) could be useful for future work directly contrasting the roles of arousal and impact in emotion processing. In addition, a reviewer who explored the Study 1 ratings with a simultaneous regression predicting valence noted that impact, but not arousal, had a significant effect.

Cluster Analyses

We performed a three-cluster k-means analysis on the ratings from Study 1 because many researchers select stimuli based on one of three levels of emotional valence (positive, negative, neutral). This revealed a cluster of negative clips high in arousal/impact, a cluster of positive clips slightly lower in arousal/impact, and a cluster of neutral clips lower still. Despite the utility of separating clips into these three groups, smaller subgroups of clips may be more meaningful. The range of valence scores for each cluster in the k-means solution was very wide, encompassing both emotional and neutral clips (negative cluster range = 4.48–8.29; positive cluster range = 1.73–5.02). Furthermore, three neutral clips were sorted into the negative cluster because of their elevated impact/arousal, yet they remained very different in valence, which is problematic for researchers selecting stimuli primarily by valence. Because we did not know the underlying number of clusters in the database, we also performed a hierarchical cluster analysis, which identifies subgroups of clips that are similar to one another and maximally dissimilar from clips in other subgroups. This generated seven clusters of clips: three neutral, two negative or negative-to-neutral, and two positive or positive-to-neutral. Unique to this database, there were three types of neutral videos, varying from low to medium impact and from low to high arousal. These videos will be particularly useful for researchers who aim to study valence independently of arousal. Arousal has been exceedingly challenging to control when using neutral pictures (e.g., of a blue mug or a buffalo) or word stimuli (e.g., door, table, shoe) that do not readily capture participants’ attention and are likely to quickly induce habituation. Videos, on the other hand, readily capture attention (Rauschenberger, 2003; Theeuwes, 2010) and boost physiological arousal more easily than pictures (Detenber & Simons, 1998). This will allow for a greater distinction between the contributions of valence and arousal to cognitive processing in future research.
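Our k-means analysis was run in SPSS (the syntax is in Appendix A); purely to illustrate the technique, here is a minimal numpy-only Lloyd’s-algorithm sketch on made-up (valence, arousal, impact) means, not our actual data or pipeline:

```python
import numpy as np

def kmeans(points, k, iters=10):
    """Minimal Lloyd's k-means; deterministic init from evenly spaced points."""
    centroids = points[:: len(points) // k][:k].copy()
    for _ in range(iters):
        # Assign each point to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# Hypothetical per-clip mean (valence, arousal, impact) triples.
clips = np.array([
    [7.5, 6.0, 6.2], [7.2, 5.8, 6.0],  # negative-ish, higher arousal/impact
    [2.0, 5.0, 5.1], [2.3, 4.8, 5.3],  # positive-ish, medium arousal/impact
    [5.0, 2.5, 2.4], [5.1, 2.7, 2.2],  # neutral-ish, lower arousal/impact
])
centroids, labels = kmeans(clips, k=3)  # labels: [0, 0, 1, 1, 2, 2]
```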

In addition, the hierarchical cluster analysis in Study 1 revealed a group of 25 positive videos that evoked medium levels of arousal and impact, which will provide a useful comparison for medium arousing/impactful negative videos. Moreover, both positive clusters and the negative-to-neutral cluster were similar in arousal and impact scores. However, the hierarchical analysis also revealed a unique 23-item cluster of negative videos that were more arousing and impactful than nearly all other clips. This cluster included scenes of violence toward humans (gun shooting, group fighting) and animals (whale hunting, an animal carcass), a school bus drifting in a flood, animal farming, a man throwing up, and a man injecting drugs. This speaks to the variety of themes that can evoke intense negative feelings. Yet it remains challenging to reliably evoke intense positive feelings, even with videos. Here we included a variety of positive themes (animals, surfing, hockey shootouts, babies), but these were not as arousing or impactful as the negative videos. We also included scenes of intimate kissing with limited nudity, and although these evoked high levels of arousal, the ratings of valence were variable. As a result, this video database will be most useful for researchers comparing the effects of low-to-medium levels of arousal and impact between positive, negative, and neutral stimuli.

Familiarity

The novelty of experimental stimuli is vital to studies of emotion because participants’ reactions may be influenced by prior exposure to a stimulus (Bransford & Johnson, 1972; Craik & Lockhart, 1972; Gabert-Quillen et al., 2015; Hannula, 2010; Hutchinson & Turk-Browne, 2012; Kuhl & Chun, 2014; Robertson & Köhler, 2007; Westmacott et al., 2004). In the present study, fewer than 2% of participants on average had seen each video clip prior to the experiment, suggesting that their emotional reactions to the clips were largely uninfluenced by prior experience. This is important because it can be challenging for researchers to objectively measure and control participants’ past exposure to stimuli. Although the majority of the videos were unfamiliar, seven clips in Study 1 had been previously seen by 7–8% of participants and one had been seen by nearly 10% of participants. The most familiar clip was of a hockey shootout, which may be less familiar to non-Canadian participants. The remaining somewhat familiar clips were of humans, animals, food, and natural disasters, collected from online sources or movies/documentaries. Interestingly, other clips selected from these same sources were not highly familiar to participants, suggesting that participants either did not remember the source videos themselves, or had seen other videos highly similar to these experimental clips and mistook them for the same. We should point out a possible influence that might have inflated the familiarity ratings for some clips: even though each participant was shown each clip only once (and was told this would be the case), some participants may have erroneously thought that they had seen some of the clips before. This came up in the debriefing notes from Study 2 and may stem from the fact that some clips were very similar to one another (for example, the set contains two separate clips of Nashville Predators hockey players scoring on the goalie in a shootout practice, and both had high familiarity ratings). This is a potential hazard of presenting any participant with more than one clip from a set. Thus, if anything, the overall familiarity of the clips in this set is likely slightly lower than indicated. To disentangle these responses in future work, participants could also rate how often they view scenes similar to the ones presented, on a continuous scale from never to very often (Libkuman, Otani, Kern, Viger, & Novak, 2007).

Applications of the Database

These video clips were selected to increase the affective realism (Camerer & Mobbs, 2017; Zaki & Ochsner, 2012) and ecological validity of emotion-eliciting stimuli, without, for example, having to present live spiders (such as tarantulas) during an experiment. They should provide a useful alternative to static pictures, which may be less arousing for participants today than they were two decades ago (Libkuman et al., 2007), perhaps because of increased exposure to visual material on the internet. Including a large number of video clips that are controlled on various factors (e.g., duration, source, levels of arousal/impact) will reduce confounds when comparing emotional and neutral stimuli.

There may be various other factors that researchers think are important when selecting stimuli from this set. A reviewer noted that the relation between means and standard deviations for each video may vary across the dimensions of valence, arousal, and impact. For instance, for valence many raters seemed to converge on neutral ratings for many videos. That is, if a video seemed to be neutral in valence, many of the raters agreed on this. In contrast, for the rating of impact it seemed that most raters agreed on what the “low impact” videos were, leading to a small standard deviation value for the low mean ratings. However, as mean ratings of a video increased, so did the diversity of ratings (leading to a larger standard deviation value for the higher mean ratings).

This novel database of video clips can join existing film libraries. To date, close to 300 film clips have been used across dozens of studies (for a list of existing film clips, see Gilman et al., 2017). The current study (with 291 videos) doubles the number of clips available to researchers. Note that the present clips differ from the existing sets in several ways. First, the present clips are shorter than most of those in the existing literature. We selected relatively short clips because our primary aim was to use them as targets in future memory research, analogously to how the IAPS pictures have been used (e.g., Bradley, 2014). Second, unlike most existing clips, the present ones are low in familiarity (as mentioned in the previous paragraph). Again, this will be a major advantage for studies of attention and memory, because familiarity can influence participants’ emotional responses (Gabert-Quillen et al., 2015), eye movements (Hannula, 2010), attention (Hutchinson & Turk-Browne, 2012; Kuhl & Chun, 2014), and declarative memory (Bransford & Johnson, 1972; Craik & Lockhart, 1972; Robertson & Köhler, 2007; Westmacott, Black, Freedman, & Moscovitch, 2004). In fact, together with a set of video clips similar to ours from Frederiks et al. (2019), researchers will have two compatible sets from which to choose individual stimuli.

These clips will be particularly useful for studies on emotional memory. First, videos capture attention (Rauschenberger, 2003; Theeuwes, 2010) and increase physiological arousal (Detenber & Simons, 1998) more easily than static images, which may intensify participants’ emotional reactions. Second, videos are often better remembered than static images, referred to as the dynamic superiority effect (Buratto, Matthews, & Lamberts, 2009), because of increased attention at encoding or even their increased complexity and conceptual richness (Candan et al., 2016). Using videos as study material will greatly reduce floor effects in memory studies and will also allow researchers to employ longer study-test intervals. These behavioural effects can be studied in parallel to event-related potentials that depend on a large number of well-controlled stimuli.

In the present paper, we provide the average ratings of valence, arousal, and impact of the video clips and also discuss a meaningful way to organize the clips based on cluster analysis. When selecting stimuli for an experiment, most researchers use specific cutoff values to divide each dimension into different categories (e.g., positive, negative, neutral valence, or high, medium, low arousal). This leads to two problems. First, researchers do not employ the same cutoff values, rendering comparisons between studies more limited (compare the different thresholds used when selecting stimuli from IAPS in Mikels et al., 2005 and Xing & Isaacowitz, 2006). Second, applying specific cutoff values means that researchers must rely on only one or two dimensions at a time while ignoring all other, possibly relevant, dimensions (Constantinescu et al., 2016). This practice ignores whether applying absolute thresholds for one or two dimensions approximates the internal structure of the stimuli in the database as there may exist subgroups of stimuli that go beyond the classifications of the selected dimensions (Constantinescu et al., 2016). In the present paper, we describe the internal structure of the video clip database that takes into account all three dimensions simultaneously. This will ensure that researchers compare groups of video clips that are maximally different on all three dimensions. This will be essential for developing a more accurate understanding of the relative influences of both impact and arousal on emotion processing.

User’s Manual

Summary information for the 291 video clips is provided in Table 2. In the DEVO spreadsheet, we provide the following information: 1) clip number (referring to the source and the specific clip from that source); 2) duration in milliseconds; 3) source title; 4) original source format (including a URL when available); 5) source release date; 6) brief description; 7) presence of people; 8) presence of animals or insects; 9) presence of food, drink, or drugs; 10) number and percent of participants that had previously seen the video; 11) cluster assignment based on the hierarchical cluster analysis; and 12) the means and standard deviations of valence, arousal, and impact for men, women, and both men and women together.

The video clips can be obtained directly from the corresponding author. There are two versions of each clip: one with the original sound and one with the sound removed. The current set of ratings was collected for the clips without sound; we removed sound to make the clips uniform, because some of the source clips had no sound, some had dialogue in English or another language, some had sound effects, and some had background music. We also provide the original versions, which often contained dialogue (in English, French, or other languages), sounds of nature, or background music. Emotional reactions to the clips with sound should be similar to, if not more extreme than, those reported here for the silent clips, but researchers should collect their own ratings prior to using them. More generally, we encourage researchers to check the ratings locally before using any of the video clips, to ensure that the ratings are consistent across cultures.

Finally, it is important to ensure that the appropriate codecs are installed on the computers that will be used during an experiment (the H.264 codec for the .avi files, and the Windows Media Video and Audio Professional codecs for the .wmv files). If using E-Prime software to play the silent videos, you may select an option within the codec configuration called “skip audio” that allows the silent videos to play without an audio load error. For further help with codec configuration in E-Prime, visit: http://www.pstnet.com/support/kb.asp?TopicID=3162.

The Database of Emotional Videos from Ottawa provides researchers with a large set of ecologically valid and naturalistic videos suitable for emotion research. We provide standardized ratings of valence, arousal, and impact and recommend selecting videos based on all three dimensions, using the subgroups identified through hierarchical cluster analysis as a guide. Due to the wide range in arousal and impact ratings for both emotional and neutral videos, researchers will be able to better control for the differences between emotional and neutral stimuli when examining the effects of emotion on cognition. With its wide range of themes, future work could also examine the discrete emotions specifically elicited by each video.

For access to the stimuli, email moviesstudyuo@gmail.com or patrick.davidson@uottawa.ca.

Appendix A

Clustering Scripts for k-means and hierarchical clustering for Studies 1 and 2.

Note: These scripts can be opened in SPSS or alternatively in the open-access PSPP program (https://www.gnu.org/software/pspp/).

Study 1

*k-Means*

QUICK CLUSTER valenceMean arousalMean impactMean

 /MISSING=LISTWISE

 /CRITERIA=CLUSTER(3) MXITER(10) CONVERGE(0)

 /METHOD=KMEANS(NOUPDATE)

 /SAVE CLUSTER

 /PRINT ID(ClipName) INITIAL.

*Hierarchical Cluster Analysis*

CLUSTER  valenceMean arousalMean impactMean

 /METHOD BAVERAGE

 /MEASURE=SEUCLID

 /ID=ClipName

 /PRINT SCHEDULE CLUSTER(7)

 /PLOT DENDROGRAM VICICLE

 /SAVE CLUSTER(7).

Study 2

*Hierarchical Cluster Analysis*

CLUSTER  arousal impact valence

 /METHOD BAVERAGE

 /MEASURE=SEUCLID

 /PRINT SCHEDULE

 /PLOT DENDROGRAM VICICLE

 /SAVE CLUSTER(5).
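For researchers without SPSS or PSPP, the Study 2 script above corresponds roughly to scipy’s hierarchical clustering: BAVERAGE is between-groups average linkage and SEUCLID is the squared Euclidean distance. A sketch with random stand-in ratings (the real input would be the 291 rows of per-clip means from the DEVO spreadsheet):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Random stand-in for the per-clip mean (arousal, impact, valence) columns.
rng = np.random.default_rng(1)
ratings = rng.uniform(1, 9, size=(30, 3))

# Average linkage on squared Euclidean distances (SPSS: BAVERAGE + SEUCLID).
Z = linkage(ratings, method="average", metric="sqeuclidean")

# Cut the dendrogram into (at most) five clusters, as in /SAVE CLUSTER(5).
labels = fcluster(Z, t=5, criterion="maxclust")
```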

Appendix B

Appendix B Part 1

Hierarchical Cluster Analysis from Study 1: Agglomeration Schedule.

Stage | Cluster Combined (Cluster 1, Cluster 2) | Coefficient | Stage Cluster First Appears (Cluster 1, Cluster 2) | Next Stage
264 28 1.001 255 205 277 
265 1.002 217 256 276 
266 24 31 1.046 240 207 268 
267 15 19 1.050 248 270 
268 24 1.156 251 266 280 
269 10 22 1.248 250 261 274 
270 15 1.300 263 267 279 
271 16 132 1.322 258 214 287 
272 90 1.378 243 227 279 
273 43 68 1.393 252 262 280 
274 10 142 1.513 269 238 283 
275 29 66 1.548 260 277 
276 106 1.622 265 236 283 
277 29 1.867 264 275 285 
278 73 173 2.210 259 287 
279 2.455 270 272 289 
280 43 2.474 268 273 284 
281 26 91 2.955 241 254 285 
282 14 36 3.149 230 253 284 
283 10 3.330 276 274 288 
284 14 3.796 280 282 286 
285 26 5.276 277 281 286 
286 5.756 284 285 289 
287 16 73 6.991 271 278 288 
288 16 8.691 283 287 290 
289 9.019 286 279 290 
290 17.972 289 288 

Appendix B Part 2

Hierarchical Cluster Analysis from Study 2: Agglomeration Schedule.

Stage | Cluster Combined (Cluster 1, Cluster 2) | Coefficient | Stage Cluster First Appears (Cluster 1, Cluster 2) | Next Stage
264 51 59 .492 232 213 280 
265 103 166 .493 219 281 
266 13 .514 247 251 272 
267 .545 260 257 282 
268 12 25 .547 252 240 274 
269 82 .604 254 236 278 
270 43 .650 224 245 271 
271 31 .697 270 248 284 
272 35 .719 266 262 278 
273 90 167 .720 250 281 
274 10 12 .807 204 268 276 
275 33 34 .819 253 256 280 
276 10 21 .997 274 259 286 
277 151 1.054 263 242 279 
278 1.193 272 269 285 
279 154 1.248 277 261 283 
280 33 51 1.454 275 264 289 
281 90 103 1.791 273 265 288 
282 192 1.939 267 285 
283 24 2.145 279 258 288 
284 137 2.231 271 286 
285 2.539 278 282 287 
286 10 2.624 284 276 287 
287 5.346 285 286 289 
288 90 5.572 283 281 290 
289 33 6.312 287 280 290 
290 14.210 289 288 
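An agglomeration schedule like the ones above (one merge per stage, ending at stage 290 for 291 clips) can be computed from a clip-by-rating matrix with SciPy's hierarchical clustering. The sketch below uses random stand-in data rather than the actual DEVO ratings, and Ward linkage is an assumption, since the linkage method is not restated here:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Stand-in data: 291 clips x 3 mean ratings (valence, arousal, impact).
# These are random numbers, NOT the DEVO ratings.
rng = np.random.default_rng(0)
ratings = rng.random((291, 3))

# Each row of the linkage matrix is one agglomeration stage:
# (cluster i, cluster j, coefficient, size of the merged cluster).
# Ward linkage is assumed here; the paper's exact method may differ.
schedule = linkage(ratings, method="ward")

# Coefficients are non-decreasing across stages; a large jump between
# consecutive stages is the usual cue for choosing the number of clusters.
coefficients = schedule[:, 2]

# Cut the tree into seven clusters, as suggested by the Study 1 ratings.
labels = fcluster(schedule, t=7, criterion="maxclust")
```

With 291 clips the schedule has exactly 290 rows, matching the final stage number in the tables above.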

Appendix C

Rating Instructions

Valence

After viewing each clip, use the following scale to indicate how you felt in reaction to it on a happy vs. unhappy scale. At one extreme of the scale (1) you felt happy, pleased, satisfied, contented, hopeful while viewing the video; at the other end of the scale (9) you felt completely unhappy, annoyed, unsatisfied, melancholic, despaired, bored. If you felt completely neutral, neither happy nor unhappy, you can respond 5. Please provide the most accurate rating of how you felt in response to the video, using any number between 1 and 9.

Arousal

After viewing each clip, use the following scale to indicate how you felt in reaction to it on an excited vs. calm scale. At one extreme of the scale (1) you felt stimulated, excited, frenzied, jittery, wide-awake, aroused; at the other end of the scale (9) you felt completely relaxed, calm, sluggish, dull, sleepy, unaroused. If you felt neither excited nor calm, you can respond 5. Please provide the most accurate rating of how you felt in response to the video, using any number between 1 and 9.

Impact

After viewing each video, we would like you to judge whether you feel the content of the video created an instant sense of impact on you personally. Try not to think in detail about the video or its contents in terms of particular properties, such as the positive or negative feelings it might invoke in you (e.g., fear, anger, joy), how distinctive the video is, or how many thoughts and ideas it leads to. We just want an estimate of its overall immediate impact, irrespective of what might underlie that impact on you personally (i.e., whether it is positive, negative, or neither). Remember, it is your own personal reaction we are interested in, not how you think people in general should feel. Just glance at the video and make an “instant” judgment using the following scale.

Note: In Study 2, we reversed the scale for arousal.
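The reversal noted above matters when comparing Study 2 arousal ratings with Study 1. The note does not spell out the mapping, but the standard reverse-scoring transform on a 1–9 scale sends a rating x to 10 − x. A minimal sketch with an illustrative helper name (not from the paper):

```python
def reverse_score(rating: float, low: int = 1, high: int = 9) -> float:
    """Map a rating on a reversed low..high scale back to the original
    orientation: x -> low + high - x (e.g., 3 -> 7 on a 1..9 scale)."""
    return low + high - rating
```

With low = 1 and high = 9 this reduces to 10 − x, so the endpoints swap (1 and 9) while the midpoint 5 is unchanged.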

Different software packages were used to trim the videos: the initial set of 90 clips was edited for a 2014 pilot study (results not reported here), after which the additional 201 clips were edited using Filmora software, which allowed precise frame-by-frame trimming.

Age was removed for one participant who entered 0.

The purpose of these analyses was to provide an initial examination of the underlying structure of the database, although other clustering or data reduction techniques could be used in future work (cf. Constantinescu, Wolters, Moore, & MacPherson, 2016).
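One such data-reduction alternative is principal component analysis of the per-clip rating means. A minimal sketch in plain NumPy, again with random stand-in data rather than the actual DEVO ratings:

```python
import numpy as np

# Stand-in data: 291 clips x 3 mean ratings (valence, arousal, impact);
# random values, NOT the DEVO ratings.
rng = np.random.default_rng(1)
ratings = rng.random((291, 3))

# PCA via singular value decomposition of the mean-centered matrix.
centered = ratings - ratings.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)

# Proportion of variance captured by each component (sums to 1,
# sorted from largest to smallest).
explained = s**2 / np.sum(s**2)

# Clip coordinates in component space, e.g., for plotting or clustering.
scores = centered @ vt.T
```

The component scores could then feed into any clustering procedure in place of the raw valence, arousal, and impact means.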

We are grateful to Charles Collin, Stéphane Rainville, and Raynald Tanguay for advice. We also thank Jessica O’Dwyer for video suggestions and help with data collection and Zaki Khouani for help with data entry. We are grateful to Stéphane Rainville for providing the analyses of Hue, Saturation, and Value in the appended spreadsheet. We also thank another researcher for invaluable contributions to this study.

This research was supported by a Discovery Grant and graduate scholarship from the Natural Sciences and Engineering Research Council of Canada, and a graduate scholarship from the Ontario Graduate Scholarship program.

The authors have no competing interests to declare.

KTAB, CB, and PSRD were involved in the initial conceptualization of the study. All authors helped process the videos and collect data. KTAB conducted the analyses and wrote the manuscript with PSRD.

References

1. Baggett, P. (1979). Structurally equivalent stories in movie and text and the effect of the medium on recall. Journal of Verbal Learning and Verbal Behavior, 18(3), 333–356.
2. Betella, A., & Verschure, P. F. M. J. (2016). The Affective Slider: A digital self-assessment scale for the measurement of human emotions. PLOS ONE, 11(2), e0148037.
3. Bos, M. G. N., Jentgens, P., Beckers, T., & Kindt, M. (2013). Psychophysiological response patterns to affective film stimuli. PLOS ONE, 8(4), e62661.
4. Bradley, M. M. (2014). Emotional memory: A dimensional analysis. In Emotions (pp. 111–148). Psychology Press.
5. Bransford, J. D., & Johnson, M. K. (1972). Contextual prerequisites for understanding: Some investigations of comprehension and recall. Journal of Verbal Learning and Verbal Behavior, 11(6), 717–726.
6. Buratto, L. G., Matthews, W. J., & Lamberts, K. (2009). When are moving images remembered better? Study–test congruence and the dynamic superiority effect. The Quarterly Journal of Experimental Psychology, 62(10), 1896–1903.
7. Camerer, C., & Mobbs, D. (2017). Differences in behavior and brain activity during hypothetical and real choices. Trends in Cognitive Sciences, 21(1), 46–56.
8. Candan, A., Cutting, J. E., & DeLong, J. E. (2016). RSVP at the movies: Dynamic images are remembered better than static images when resources are limited. Visual Cognition, 23(9–10), 1205–1216.
9. Carvalho, S., Leite, J., Galdo-Álvarez, S., & Gonçalves, O. F. (2012). The Emotional Movie Database (EMDB): A self-report and psychophysiological study. Applied Psychophysiology and Biofeedback, 37(4), 279–294.
10. Chen, T., Han, M., Hua, W., Gong, Y., & Huang, T. S. (2003). A new tracking technique: Object tracking and identification from motion. In International Conference on Computer Analysis of Images and Patterns (pp. 157–164). Springer. Retrieved from http://link.springer.com/chapter/10.1007/978-3-540-45179-2_20
11. Constantinescu, A. C., Wolters, M., Moore, A., & MacPherson, S. E. (2016). A cluster-based approach to selecting representative stimuli from the International Affective Picture System (IAPS) database. Behavior Research Methods.
12. Courtney, C. G., Dawson, M. E., Schell, A. M., Iyer, A., & Parsons, T. D. (2010). Better than the real thing: Eliciting fear with moving and static computer-generated stimuli. International Journal of Psychophysiology, 78(2), 107–114.
13. Craik, F. I., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11, 671–684.
14. Croucher, C. J., Calder, A. J., Ramponi, C., Barnard, P. J., & Murphy, F. C. (2011). Disgust enhances the recollection of negative emotional images. PLoS ONE, 6(11), e26571.
15. Detenber, B. H., & Simons, R. F. (1998). Roll ’em!: The effects of picture motion on emotional responses. Journal of Broadcasting & Electronic Media, 42(1), 113.
16. Ewbank, M. P., Barnard, P. J., Croucher, C. J., Ramponi, C., & Calder, A. J. (2009). The amygdala response to images with impact. Social Cognitive and Affective Neuroscience, 4(2), 127–133.
17. Ferguson, R. (2014). Visual recognition for dynamic scenes (Ph.D. dissertation). Arizona State University. Retrieved from https://repository.asu.edu/attachments/135015/content/Ferguson_asu_0010E_13954.pdf
18. Ferguson, R., Homa, D., & Ellis, D. (2016). Memory for temporally dynamic scenes. The Quarterly Journal of Experimental Psychology, 1–14.
19. Frederiks, K., Kark, S. M., & Kensinger, E. A. (2019). The Positive and Negative Affective Movie Set (PANAMS). Poster presented at the annual meeting of the Society for Affective Science 2019, Boston, MA.
20. Gabert-Quillen, C. A., Bartolini, E. E., Abravanel, B. T., & Sanislow, C. A. (2015). Ratings for emotion film clips. Behavior Research Methods, 47(3), 773–787.
21. Gilman, T. L., Shaheen, R., Nylocks, K. M., Halachoff, D., Chapman, J., Flynn, J. J., Coifman, K. G., et al. (2017). A film set for the elicitation of emotion in research: A comprehensive catalog derived from four decades of investigation. Behavior Research Methods, 1–22.
22. Gross, J. J., & Levenson, R. W. (1995). Emotion elicitation using films. Cognition & Emotion, 9(1), 87–108.
23. Hannula, D. E. (2010). Worth a glance: Using eye movements to investigate the cognitive neuroscience of memory. Frontiers in Human Neuroscience, 4(166).
24. Hirose, Y. (2010). Perception and memory across viewpoint changes in moving images. Journal of Vision, 10(4), 1–19.
25. Hutchinson, J. B., & Turk-Browne, N. B. (2012). Memory-guided attention: Control from multiple memory systems. Trends in Cognitive Sciences, 16(12), 576–579.
26. Johansson, G. (1973). Visual perception of biological motion and a model for its analysis. Perception & Psychophysics, 14(2), 201–211.
27. Kuhl, B. A., & Chun, M. (2014). Memory and attention. In A. C. Nobre & S. Kastner (Eds.), The Oxford Handbook of Attention. Oxford University Press.
28. Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2008). International affective picture system (IAPS): Affective ratings of pictures and instruction manual (Technical Report A-8). Gainesville, FL: University of Florida.
29. Libkuman, T. M., Otani, H., Kern, R., Viger, S. G., & Novak, N. (2007). Multidimensional normative ratings for the International Affective Picture System. Behavior Research Methods, 39(2), 326–334.
30. Matthews, W. J., Benjamin, C., & Osborne, C. (2007). Memory for moving and static images. Psychonomic Bulletin & Review, 14(5), 989–993.
31. Matthews, W. J., Buratto, L. G., & Lamberts, K. (2010). Exploring the memory advantage for moving scenes. Visual Cognition, 18(10), 1393–1419.
32. McGaugh, J. L. (2000). Memory--a century of consolidation. Science, 287(5451), 248–251.
33. McGaugh, J. L. (2015). Consolidating memories. Annual Review of Psychology, 66(1), 1–24.
34. McGaugh, J. L., Introini-Collison, I. B., Cahill, L. F., Castellano, C., Dalmaz, C., Parent, M. B., & Williams, C. L. (1993). Neuromodulatory systems and memory storage: Role of the amygdala. Behavioural Brain Research, 58(1–2), 81–90.
35. Mikels, J. A., Fredrickson, B. L., Larkin, G. R., Lindberg, C. M., Maglio, S. J., & Reuter-Lorenz, P. A. (2005). Emotional category data on images from the International Affective Picture System. Behavior Research Methods, 37(4), 626–630.
36. Morissette, L., & Chartier, S. (2013). The k-means clustering technique: General considerations and implementation in Mathematica. Tutorials in Quantitative Methods for Psychology, 9(1), 15–24.
37. Murphy, F. C., Hill, E. L., Ramponi, C., Calder, A. J., & Barnard, P. J. (2010). Paying attention to emotional images with impact. Emotion, 10(5), 605–614.
38. Rauschenberger, R. (2003). Attentional capture by auto- and allo-cues. Psychonomic Bulletin & Review, 10(4), 814–842.
39. Regan, D. (1986). Visual processing of four kinds of relative motion. Vision Research, 26(1), 127–145.
40. Robertson, E. K., & Köhler, S. (2007). Insights from child development on the relationship between episodic and semantic memory. Neuropsychologia, 45(14), 3178–3189.
41. Saal, F. E., Downey, R. G., & Lahey, M. A. (1980). Rating the ratings: Assessing the psychometric quality of rating data. Psychological Bulletin, 88(2), 413–428.
42. Samson, A. C., Kreibig, S. D., Soderstrom, B., Wade, A. A., & Gross, J. J. (2016). Eliciting positive, negative and mixed emotional states: A film library for affective scientists. Cognition and Emotion, 30(5), 827–856.
43. Schaefer, A., Nils, F., Sanchez, X., & Philippot, P. (2010). Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers. Cognition & Emotion, 24(7), 1153–1172.
44. Shimamura, A. P., Cohn-Sheehy, B. I., Pogue, B. L., & Shimamura, T. A. (2015). How attention is driven by film edits: A multimodal experience. Psychology of Aesthetics, Creativity, and the Arts, 9(4), 417–422.
45. Shimamura, A. P., Cohn-Sheehy, B. I., & Shimamura, T. A. (2014). Perceiving movement across film edits: A psychocinematic analysis. Psychology of Aesthetics, Creativity, and the Arts, 8(1), 77–80.
46. Smith, T. J., & Henderson, J. M. (2008). Edit blindness: The relationship between attention and global change blindness in dynamic scenes. Journal of Eye Movement Research, 2(2), 1–17.
47. Theeuwes, J. (2010). Top–down and bottom–up control of visual selection. Acta Psychologica, 135(2), 77–99.
48. Ullman, S. (1979). The interpretation of visual motion. Cambridge, MA: MIT Press.
49. Westmacott, R., Black, S. E., Freedman, M., & Moscovitch, M. (2004). The contribution of autobiographical significance to semantic memory: Evidence from Alzheimer’s disease, semantic dementia, and amnesia. Neuropsychologia, 42(1), 25–48.
50. Xing, C., & Isaacowitz, D. M. (2006). Aiming at happiness: How motivation affects attention to and memory for emotional images. Motivation and Emotion, 30(3), 243–250.
51. Yim, O., & Ramdeen, K. T. (2015). Hierarchical cluster analysis: Comparison of three linkage measures and application to psychological data. The Quantitative Methods for Psychology, 11, 8–24.
52. Zaki, J., & Ochsner, K. (2012). The neuroscience of empathy: Progress, pitfalls and promise. Nature Neuroscience, 15(5), 675–680.

The author(s) of this paper chose the Open Review option, and the peer review comments can be downloaded at: http://doi.org/10.1525/collabra.180.pr

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See http://creativecommons.org/licenses/by/4.0/.