Keyword search: emotion (1-20 of 43 results)
Journal Articles
Music Perception (2020) 37 (5): 392–402.
Published: 10 June 2020
... “flawless” (Rocklage et al., 2017). Higher emotionality ratings are associated with more “feeling” words (e.g., “I feel,” “emotional”), while lower emotionality ratings are associated with more “cognitive” words (e.g., “I believe,” “I think”). For each of the three coding schemes (AI, LIWC, EL), we...
Abstract
The study of music-evoked autobiographical memories (MEAMs) has grown substantially in recent years. Prior work has used various methods to compare MEAMs to memories evoked by other cues (e.g., images, words). Here, we sought to identify which methods could distinguish between MEAMs and picture-evoked memories. Participants (N = 18) listened to popular music and viewed pictures of famous persons, and described any autobiographical memories evoked by the stimuli. Memories were scored using the Autobiographical Interview (AI; Levine, Svoboda, Hay, Winocur, & Moscovitch, 2002), Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2015), and Evaluative Lexicon (EL; Rocklage & Fazio, 2018). We trained three logistic regression models (one for each scoring method) to differentiate between memories evoked by music and faces. Models trained on LIWC and AI data exhibited significantly above-chance accuracy when classifying whether a memory was evoked by a face or a song. The EL, which focuses on the affective nature of a text, failed to predict whether memories were evoked by music or faces. This demonstrates that various memory scoring techniques provide complementary information about cued autobiographical memories, and suggests that MEAMs differ from memories evoked by pictures in some aspects (e.g., perceptual and episodic content) but not others (e.g., emotional content).
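The classification analysis this abstract reports is straightforward to prototype. Below is a minimal, hypothetical sketch: a cross-validated logistic regression predicting the evoking cue (face vs. music) from text-derived memory scores. The feature matrix, labels, and sizes are simulated stand-ins, not the authors' data or code.

```python
# Hypothetical sketch of cue-type classification from memory scoring features.
# All data here are simulated stand-ins, not the study's materials.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

X = rng.normal(size=(120, 10))    # 120 memory descriptions x 10 scheme scores
y = rng.integers(0, 2, size=120)  # 0 = face-evoked, 1 = music-evoked

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean cross-validated accuracy: {scores.mean():.2f} (chance = 0.50)")
```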
Journal Articles
Music Perception (2020) 37 (3): 240–258.
Published: 01 February 2020
...Lindsay A. Warrenburg When designing a new study regarding how music can portray and elicit emotion, one of the most crucial design decisions involves choosing the best stimuli. Every researcher must find musical samples that are able to capture an emotional state, are of appropriate length, and have...
Abstract
When designing a new study regarding how music can portray and elicit emotion, one of the most crucial design decisions involves choosing the best stimuli. Every researcher must find musical samples that are able to capture an emotional state, are of appropriate length, and have minimal potential for biasing participants. Researchers have often utilized musical excerpts that have previously been used by other scholars, but the appropriate musical choices depend on the specific goals of the study in question and will likely change across various research designs. The intention of this paper is to examine how musical stimuli have been selected in a sample of 306 research articles dating from 1928 through 2018. Analyses are presented regarding the designated emotions, how the stimuli were selected, the durations of the stimuli, whether the stimuli are excerpts from a longer work, and whether the passages have been used in studies about perceived or induced emotion. The results suggest that the literature relies on nine emotional terms, focuses more on perceived emotion than on induced emotion, and contains mostly short musical stimuli. I suggest that some of the inconclusive results from previous reviews may be due to the inconsistent use of emotion terms throughout the music community.
Journal Articles
Music Perception (2019) 37 (2): 95–110.
Published: 01 December 2019
...Rosalie Ollivier; Louise Goupil; Marco Liuni; Jean-Julien Aucouturier Traditional neurobiological theories of musical emotions explain well why extreme music such as punk, hardcore, or metal—whose vocal and instrumental characteristics share much similarity with acoustic threat signals—should...
Abstract
Traditional neurobiological theories of musical emotions explain well why extreme music such as punk, hardcore, or metal—whose vocal and instrumental characteristics share much similarity with acoustic threat signals—should evoke unpleasant feelings for a large proportion of listeners. Why it does not for metal fans, however, is controversial: metal fans may differ from non-fans in how they process threat signals at the sub-cortical level, showing deactivated responses that differ from controls. Alternatively, appreciation for metal may depend on the inhibition by cortical circuits of a normal low-order response to auditory threat. In a series of three experiments, we show here that, at a sensory level, metal fans react just as negatively and just as fast as non-fans, and even more accurately, to cues of auditory threat in vocal and instrumental contexts. Conversely, we tested the hypothesis that cognitive load reduces fans' appreciation of metal to the level experienced by non-fans, but found only limited support for this. Taken together, these results are not compatible with the idea that appreciation of extreme music rests on an atypical sensory response to threat, and they highlight a potential contribution of controlled cognitive processes to fans' aesthetic experience.
Journal Articles
Music Perception (2019) 37 (1): 66–91.
Published: 01 September 2019
... dimensional model of emotion can offer more reliable measurement with emotionally ambiguous stimuli (Eerola & Vuoskoski, 2010). For example, Russell's (1980) popular circumplex model organizes emotional responses into two dimensions: valence and arousal. In this framework, valence represents the...
Abstract
Composers convey emotion through music by co-varying structural cues. Although this complex interplay provides a rich listening experience, it creates challenges for understanding the contributions of individual cues. Here we investigate how three specific cues (attack rate, mode, and pitch height) work together to convey emotion in Bach's Well-Tempered Clavier (WTC). In three experiments, we explore responses to (1) eight-measure excerpts and (2) musically “resolved” excerpts, and (3) investigate the role of different standard dimensional scales of emotion. In each experiment, thirty nonmusician participants rated perceived emotion along scales of valence and intensity (Experiments 1 & 2) or valence and arousal (Experiment 3) for 48 pieces in the WTC. Responses indicate listeners used attack rate, mode, and pitch height to make judgments of valence, but only attack rate for intensity/arousal. Commonality analyses revealed that mode predicted the most variance in valence ratings, followed by attack rate, with pitch height contributing minimally. In Experiment 2, mode increased in predictive power compared to Experiment 1. In Experiment 3, using “arousal” instead of “intensity” yielded results similar to Experiment 1. We discuss how these results complement and extend previous findings from studies with tightly controlled stimuli, providing additional perspective on complex issues of interpersonal communication.
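As a rough illustration of the commonality-analysis logic mentioned above, the sketch below computes each cue's unique contribution to explained variance in valence ratings as the drop in R² when that cue is removed from the full model. The data and effect sizes are simulated, and this is a simplified stand-in: a full commonality analysis would also partition the shared (common) variance among cue subsets.

```python
# Simplified commonality-style sketch: unique R^2 per cue, computed as
# R^2(full model) minus R^2(model without that cue). Simulated data only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 48  # one aggregate rating per piece, echoing the 48 WTC pieces
cues = {
    "attack_rate": rng.normal(size=n),
    "mode": rng.integers(0, 2, size=n).astype(float),  # 0 = minor, 1 = major
    "pitch_height": rng.normal(size=n),
}
valence = (0.5 * cues["mode"] + 0.3 * cues["attack_rate"]
           + 0.1 * cues["pitch_height"] + rng.normal(scale=0.5, size=n))

def r_squared(names):
    X = np.column_stack([cues[c] for c in names])
    return LinearRegression().fit(X, valence).score(X, valence)

full = r_squared(list(cues))
for cue in cues:
    rest = [c for c in cues if c != cue]
    print(f"unique R^2 of {cue}: {full - r_squared(rest):.3f}")
```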
Journal Articles
Music Perception (2019) 36 (5): 448–456.
Published: 01 June 2019
... Keywords: music, emotion, source dilemma, dissonance, auditory scene analysis. According to the processing fluency theory of aesthetic pleasure (Reber, Schwarz, & Winkielman, 2004), when an object is more quickly and/or accurately processed...
Abstract
In a recent article, Bonin, Trainor, Belyk, and Andrews (2016) proposed a novel way in which basic processes of auditory perception may influence affective responses to music. According to their source dilemma hypothesis (SDH), the relative fluency of a particular aspect of musical processing—the parsing of the music into distinct audio streams—is hedonically marked: efficient stream segregation elicits pleasant affective experience whereas inefficient segregation results in unpleasant affective experience, thereby contributing to (dis)preference for a musical stimulus. Bonin et al. (2016) conducted two experiments, the results of which were ostensibly consistent with the SDH. However, their research designs introduced major confounds that undermined the ability of these initial studies to offer unequivocal evidence for their hypothesis. To address this, we conducted a large-scale (N = 311) constructive replication of Bonin et al. (2016; Experiment 2), significantly modifying the design to rectify these methodological shortfalls and thereby better assess the validity of the SDH. Results successfully replicated those of Bonin et al. (2016), although they indicated that source dilemma effects on music preference may be more modest than the original findings would suggest. Unresolved issues and directions for future investigation of the SDH are discussed.
Journal Articles
Music Perception (2018) 36 (2): 243–249.
Published: 01 December 2018
... and tension. Notably, modes with a major 3rd from the tonic (Ionian, Lydian, Mixolydian) were perceived as happier and less tense than modes with a minor 3rd (Dorian, Aeolian, Phrygian, Locrian). The results confirm that the perception of notes in a melody, and their consequent emotional connotation...
Abstract
In most Western music, notes in a melody relate not only to each other, but also to a “key”—a tonal center combined with an associated scale. Music is often classified as in a major or minor key, but within a scale that defines a major key, emphasizing different notes as the tonic yields different “modes.” Thus, within a set of notes, changing the tonal center changes the putative role of any given note. In this experiment, we eliminated all structural cues to the tonic within a melody by presenting notes randomly selected from the C major scale. A “mode” was established by a continuous drone note lower than the melody. Subjects rated the mood (happy versus sad) and tension of each pseudo-melody. Consistent with Temperley and Tan (2013)—in which multiple structural cues were present—different modes produced reliable differences in judged mood and tension. Notably, modes with a major 3rd from the tonic (Ionian, Lydian, Mixolydian) were perceived as happier and less tense than modes with a minor 3rd (Dorian, Aeolian, Phrygian, Locrian). The results confirm that the perception of notes in a melody, and their consequent emotional connotation, depends at least in part on their relationship to a tonal center.
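The major-3rd/minor-3rd grouping reported above follows directly from rotating the major-scale step pattern. The short sketch below derives each mode's third above the tonic; this is standard music theory offered for illustration, not the study's materials.

```python
# Derive each diatonic mode's third above the tonic by rotating the
# major-scale step pattern (standard music theory, not the study's code).
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # semitone steps of the major scale
MODES = ["Ionian", "Dorian", "Phrygian", "Lydian",
         "Mixolydian", "Aeolian", "Locrian"]

for i, name in enumerate(MODES):
    steps = MAJOR_STEPS[i:] + MAJOR_STEPS[:i]  # rotate to start on degree i
    third = steps[0] + steps[1]                # semitones from tonic to 3rd
    quality = "major 3rd (happier)" if third == 4 else "minor 3rd (sadder)"
    print(f"{name:10s} -> {quality}")
```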
Journal Articles
Music Perception (2018) 36 (1): 53–59.
Published: 01 September 2018
...Ronald S. Friedman According to Juslin (2001), final ritardandi constitute part of the acoustic code employed by musicians to communicate two distinct emotional states: sadness and tenderness. To test this proposition, in two experiments, participants were exposed to a set of hymns that were...
Abstract
According to Juslin (2001), final ritardandi constitute part of the acoustic code employed by musicians to communicate two distinct emotional states: sadness and tenderness. To test this proposition, in two experiments, participants were exposed to a set of hymns that were modified in mode and/or tempo to primarily express either happiness, sadness, or tenderness. In addition, the inclusion of final ritardandi was experimentally manipulated such that these timing variations were either present or absent in the hymn stimuli. Participants were then asked to rate the emotions expressed by each variant of the hymns. In line with Juslin (2001), results revealed that when final ritardandi were included, expressively sad music was perceived as conveying more sadness, whereas expressively tender music was perceived as conveying more tenderness. Inclusion of ritardandi did not heighten the expression of happiness in music that was in a major key, nor promote the expression of tenderness in music that was in a minor key. This suggests that final ritardandi do not generally heighten emotional expressivity, but only amplify an emotional message already established by other cues, particularly those based on mode and overall tempo.
Includes: Multimedia, Supplementary data
Journal Articles
Music Perception (2018) 35 (5): 527–539.
Published: 01 June 2018
... acoustic conditions. Keywords: auditory perception, emotion, expertise...
Abstract
Death Metal music with violent themes is characterized by vocalizations with unnaturally low fundamental frequencies and high levels of distortion and roughness. These attributes decrease the signal-to-noise ratio, rendering linguistic content difficult to understand and leaving the impression of growling, screaming, or other non-linguistic vocalizations associated with aggression and fear. Here, we compared the ability of fans and non-fans of Death Metal to accurately perceive sung words extracted from Death Metal music. We also examined whether music training confers an additional benefit to intelligibility. In a 2 × 2 between-subjects factorial design (fans/non-fans, musicians/nonmusicians), four groups of participants (n = 16 per group) were presented with 24 sung words (one per trial), extracted from the popular American Death Metal band Cannibal Corpse. On each trial, participants completed a four-alternative forced-choice word recognition task. Intelligibility (word recognition accuracy) was above chance for all groups and was significantly enhanced for fans (65.88%) relative to non-fans (51.04%). In the fan group, intelligibility between musicians and nonmusicians was statistically similar. In the non-fan group, intelligibility was significantly greater for musicians relative to nonmusicians. Results are discussed in the context of perceptual learning and the benefits of expertise for decoding linguistic information in sub-optimum acoustic conditions.
Includes: Supplementary data
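For context on the above-chance claim in the abstract above: in a four-alternative forced-choice task, chance is 25%, and a one-sided binomial test is a common way to check whether a group's pooled accuracy exceeds it. The sketch below uses made-up trial counts patterned on the design (24 words, 16 participants per group); it is an illustration, not the authors' analysis.

```python
# Illustrative above-chance test for a 4AFC task (chance = 0.25).
# Trial counts are hypothetical, patterned on 24 words x 16 participants.
from scipy.stats import binomtest

n_trials = 24 * 16                    # pooled trials for one group
n_correct = round(0.5104 * n_trials)  # e.g., non-fans' reported 51.04%
result = binomtest(n_correct, n_trials, p=0.25, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2%}, one-sided p = {result.pvalue:.2g}")
```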
Journal Articles
Music Perception (2018) 35 (5): 540–560.
Published: 01 June 2018
... Keywords: emotion, music analysis, instrumental music, cognitive musicology...
Abstract
Given the extensive instrumental resources afforded by an orchestra, why would a composer elect to feature a single solo instrument? In this study we explore one possible use of solos—that of conveying or enhancing a sad affect. Orchestral passages were identified from an existing collection and categorized as solos or non-solos. Independently, the passages were characterized on seven other features previously linked to sad affect, including mode, tempo, dynamics, articulation, rhythmic smoothness, relative pitch height, and pitch range. Using the first four factors, passages were classified into nine previously defined expressive categories. Passages containing acoustic features associated with the “sad/relaxed” expressive category were twice as likely to employ solo texture. Moreover, a regression model incorporating all factors significantly predicted solo status. However, only two factors (legato articulation, quiet dynamics) were significant individual predictors. Finally, with the notable exception of string instruments, we found a strong correlation (ρ = .88) between the likelihood that a solo is assigned to a given instrument and an independent scale of the capacity of that instrument for expressing sadness. Although solo instrumentation undoubtedly serves many other functions, these results are consistent with a significant though moderate association between sadness-related acoustic features and solo textures.
Journal Articles
Music Perception (2018) 35 (5): 561–572.
Published: 01 June 2018
... memories had high emotional and vividness characteristics whereas Everyday memories elicited emotion and other heightened responses only in the “vivid” instruction condition. However, when we added two other specific AB categories (Dining and Holidays) in phase two, the Music memories were no longer unique...
Abstract
We compared young adults’ autobiographical (AB) memories involving Music to memories concerning other specific categories and to Everyday AB memories with no specific cue. In all cases, participants reported both their most vivid memory and another AB memory from approximately the same time. We analyzed responses via quantitative rating scales on aspects such as vividness and importance, as well as via qualitative thematic coding. In the initial phase, comparison of Music-related to Everyday memories suggested all Music memories had high emotional and vividness characteristics whereas Everyday memories elicited emotion and other heightened responses only in the “vivid” instruction condition. However, when we added two other specific AB categories (Dining and Holidays) in phase two, the Music memories were no longer unique. We offer these results as a cautionary tale: before concluding that music is special in its relationship to cognition, perception, or emotion, studies should include appropriate control conditions.
Journal Articles
Music Perception (2018) 35 (4): 518–523.
Published: 01 April 2018
... height (F0) of a pitch gamut independently impacts the perceived emotional expression of melodies derived from the gamut. Study participants rated the perceived happiness/sadness of a set of isochronous and semi-random tone sequences derived from the Bohlen-Pierce scale, an unconventional scale based on...
Abstract
Prior research has amply documented that happy music tends to be faster, louder, higher in average pitch, more variable in pitch, and more staccato in articulation, whereas sad music tends to be slower, lower, less variable, and more legato in articulation. However, the bulk of existing studies are either correlational or allow these expressive cues to covary freely, thereby making it difficult to confirm the causal influence of a given cue. To help address this gap, we experimentally assessed whether the average height (F0) of a pitch gamut independently impacts the perceived emotional expression of melodies derived from the gamut. Study participants rated the perceived happiness/sadness of a set of isochronous and semi-random tone sequences derived from the Bohlen-Pierce scale, an unconventional scale based on pitch intervals that do not appear in common-practice music. Results were consistent with the notion that higher average pitch height communicates happiness and/or that lower pitch height communicates sadness. Moreover, they suggested that the effect is (1) sufficiently robust to be detected using rudimentary melodies based on an unconventional musical scale, and (2) independent of interval size.
Journal Articles
Music Perception (2017) 35 (1): 38–59.
Published: 01 September 2017
...Mattson Ogg; David R. W. Sears; Manuela M. Marin; Stephen McAdams A number of psychophysiological measures indexing autonomic and somatovisceral activation to music have been proposed in line with the wider emotion literature. However, attempts to replicate experimental findings and provide...
Abstract
A number of psychophysiological measures indexing autonomic and somatovisceral activation to music have been proposed in line with the wider emotion literature. However, attempts to replicate experimental findings and provide converging evidence for music-evoked emotions through physiological changes, overt expression, and subjective measures have had mixed success. This may be due to issues in stimulus and participant selection. Therefore, the aim of Experiment 1 was to select musical stimuli that were controlled for instrumentation, musical form, style, and familiarity. We collected a wide range of subjective responses from 30 highly trained musicians to music varying along the affective dimensions of arousal and valence. Experiment 2 examined a set of psychophysiological correlates of emotion in 20 different musicians by measuring heart rate, skin conductance, and facial electromyography during listening without requiring behavioral reports. Excerpts rated higher in arousal in Experiment 1 elicited larger cardiovascular and electrodermal responses. Excerpts rated positively in valence produced higher zygomaticus major activity, whereas excerpts rated negatively in valence produced higher corrugator supercilii activity. These findings provide converging evidence of emotion induction during music listening in musicians via subjective self-reports and psychophysiological measures, and further, that such responses are similar to emotions observed outside the musical domain.
Journal Articles
Personal Music Listening: A Model of Emotional Outcomes Developed Through Mobile Experience Sampling
Music Perception (2017) 34 (5): 501–514.
Published: 01 June 2017
...William M. Randall; Nikki S. Rickard Personal music listening on mobile phones is rapidly growing as a popular means of everyday engagement with music. This portable and flexible style of listening allows for the immediate selection of music to fulfil emotional needs, presenting it as a powerful...
Abstract
Personal music listening on mobile phones is rapidly growing as a popular means of everyday engagement with music. This portable and flexible style of listening allows for the immediate selection of music to fulfil emotional needs, presenting it as a powerful resource for emotion regulation. The experience sampling method (ESM) is ideal for observing music listening behavior, as it assesses current subjective experience during natural everyday music episodes. The current study aimed to develop a comprehensive model of personal music listening, and to determine the interaction of variables that produce various emotional outcomes. Data were collected from 195 participants using the MuPsych app: a mobile ESM designed for the real-time and ecologically valid measurement of personal music listening. Multilevel structural equation modelling was utilized to determine predictors of emotional outcomes on both experience and listener levels. Results revealed that music generally returns affect to a neutral state, but this is counteracted through the selection of mood-congruent music. Emotional reasons for listening, along with critical ranges of initial mood, were found to put listeners at risk of potentially undesirable outcomes. Finally, it was revealed that emotional outcomes are determined almost entirely within situations, which emphasizes the importance of accounting for contextual variables in all music and emotion research. This model has provided valuable insight into personal music listening, and the variables that are influential in producing desired emotional outcomes.
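The multilevel modelling this abstract describes distinguishes experience-level (within-listener) from listener-level (between-listener) effects. As a hedged, simplified stand-in for the paper's multilevel structural equation model, the sketch below fits a random-intercept linear mixed model in statsmodels; the variable names and simulated data are invented for illustration.

```python
# Simplified stand-in for multilevel analysis of listening episodes nested
# within listeners: a random-intercept linear mixed model. Simulated data;
# the actual study used multilevel structural equation modelling.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
listener = np.repeat(np.arange(50), 10)       # 50 listeners x 10 episodes
initial_mood = rng.normal(size=500)
emotional_reason = rng.integers(0, 2, size=500)
intercepts = rng.normal(scale=0.4, size=50)[listener]  # listener-level effect
mood_change = (-0.5 * initial_mood + 0.3 * emotional_reason
               + intercepts + rng.normal(scale=0.6, size=500))

df = pd.DataFrame({"listener": listener, "initial_mood": initial_mood,
                   "emotional_reason": emotional_reason,
                   "mood_change": mood_change})
fit = smf.mixedlm("mood_change ~ initial_mood + emotional_reason",
                  df, groups=df["listener"]).fit()
print(fit.summary())
```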
Journal Articles
Music Perception (2017) 34 (5): 605–623.
Published: 01 June 2017
... loudness significantly altered viewers’ perceptions of many elements that are fundamental to the storyline, including inferences about the relationship, intentions, and emotions of the film characters, their romantic interest toward each other, and the overall perceived tension of the scene. Surprisingly...
Abstract
Previous studies have shown that pairing a film excerpt with different musical soundtracks can change the audience’s interpretation of the scene. This study examined the effects of mixing the same piece of music at different levels of loudness in a film soundtrack to suggest diegetic music (“source music,” presented as if arising from within the fictional world of the film characters) or to suggest nondiegetic music (a “dramatic score” accompanying the scene but not originating from within the fictional world). Adjusting the level of loudness significantly altered viewers’ perceptions of many elements that are fundamental to the storyline, including inferences about the relationship, intentions, and emotions of the film characters, their romantic interest toward each other, and the overall perceived tension of the scene. Surprisingly, varying the loudness (and resulting timbre) of the same piece of music produced greater differences in viewers’ interpretations of the film scene and characters than switching to a different music track. This finding is of theoretical and practical interest as changes in loudness and timbre are among the primary post-production modifications sound editors make to differentiate “source music” from “dramatic score” in motion pictures, and the effects on viewers have rarely been empirically investigated.
Journal Articles
Music Perception (2017) 34 (4): 371–386.
Published: 01 April 2017
...Eduardo Coutinho; Klaus R. Scherer The systematic study of music-induced emotions requires standardized measurement instruments to reliably assess the nature of affective reactions to music, which tend to go beyond garden-variety basic emotions. We describe the development and conceptual validation...
Abstract
The systematic study of music-induced emotions requires standardized measurement instruments to reliably assess the nature of affective reactions to music, which tend to go beyond garden-variety basic emotions. We describe the development and conceptual validation of a checklist for rapid assessment of music-induced affect, designed to extend and complement the Geneva Emotional Music Scale. The checklist contains a selection of affect and emotion categories that are frequently used in the literature to refer to emotional reactions to music. The development of the checklist focused on an empirical investigation of the semantic structure of the relevant terms, combined with fuzzy classes based on a series of hierarchical cluster analyses. Two versions of the checklist for assessing the intensity and frequency of affective responses to music are proposed.
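The hierarchical clustering step mentioned above can be sketched as follows. This hypothetical example groups a handful of affect terms from a pairwise dissimilarity matrix; the terms and distances are invented, whereas the study derived its structure from empirical semantic data.

```python
# Hypothetical sketch of clustering affect terms from pairwise
# dissimilarities. Terms and distances are invented for illustration.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

terms = ["joy", "wonder", "tenderness", "nostalgia", "sadness", "tension"]
rng = np.random.default_rng(2)

d = rng.uniform(0.2, 1.0, size=(6, 6))  # stand-in dissimilarities
d = (d + d.T) / 2                       # symmetrize
np.fill_diagonal(d, 0.0)

Z = linkage(squareform(d), method="average")   # agglomerative clustering
labels = fcluster(Z, t=3, criterion="maxclust")
for term, lab in sorted(zip(terms, labels), key=lambda pair: pair[1]):
    print(lab, term)
```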
Journal Articles
Music Perception (2017) 34 (3): 352–365.
Published: 01 February 2017
... Keywords: familiarity, diatonic modes, scales, emotion, popular music. Perception and Familiarity of Diatonic Modes. Daphne Tan, Indiana University Jacobs School of Music; David Temperley, Eastman School of Music of the University...
Abstract
In a prior study (Temperley & Tan, 2013), participants rated the “happiness” of melodies in different diatonic modes. A strong pattern was found, with happiness decreasing as scale steps were lowered. We wondered: Does this pattern reflect the familiarity of diatonic modes? The current study examines familiarity directly. In the experiments reported here, college students without formal music training heard a series of melodies, each with a three-measure beginning (“context”) in a diatonic mode and a one-measure ending that was either in the context mode or in a mode that differed from the context by one scale degree. Melodies were constructed using four pairs of modes with the same tonic: Lydian/Ionian, Ionian/Mixolydian, Dorian/Aeolian, and Aeolian/Phrygian. Participants rated how well the ending “fit” the context. Two questions were of interest: (1) Do listeners give higher ratings to some modes (as endings) overall? (2) Do listeners give a higher rating to the ending if its mode matches that of the context? The results show a strong main effect of ending, with Ionian (major) and Aeolian (natural minor) as the most familiar (highly rated) modes. This aligns well with corpus data representing the frequency of different modes in popular music. There was also a significant interaction between ending and context, whereby listeners rated an ending higher if its mode matched the context. Our findings suggest that (1) our earlier “happiness” results cannot be attributed to familiarity alone, and (2) listeners without formal knowledge of diatonic modes are able to internalize diatonic modal frameworks.
Journal Articles
Music Perception (2015) 32 (5): 484–492.
Published: 01 June 2015
... responsiveness experienced more intense chills from music. Moreover, the results showed that the experience of chills induced highly pleasurable emotions and psychophysiological arousal. The present study suggests that general reward sensitivity is a predictor of music-evoked chills. Although music is just a...
Abstract
Chills (goose bumps or shivers) evoked by listening to one’s favorite music are an indicator of a rewarding experience. The current study examined the relationship between individual differences in general reward sensitivity and music-evoked chills. To assess this relationship, we measured the three subscales of the behavioral activation system (BAS) and the frequency and intensity of music-evoked chills in a large-sample survey (Study 1) and a psychophysiological experiment (Study 2). One result observed in both studies was that people with high BAS reward responsiveness experienced more intense chills from music. Moreover, the results showed that the experience of chills induced highly pleasurable emotions and psychophysiological arousal. The present study suggests that general reward sensitivity is a predictor of music-evoked chills. Although music is just a sequence of tones and not clearly related to survival value, it could create a rewarding experience partially similar to other rewarding actions or events.
Journal Articles
Music Perception (2014) 32 (2): 170–185.
Published: 01 December 2014
...Carolina Labbé; Didier Grandjean In our study, two groups of participants (n = 61 and n = 58) listened to nine pieces for solo violin and rated how they felt along an affect dimension and along the nine Geneva Emotional Music Scale dimensions. After each piece, they completed a 12-item...
Abstract
In our study, two groups of participants (n = 61 and n = 58) listened to nine pieces for solo violin and rated how they felt along an affect dimension and along the nine Geneva Emotional Music Scale dimensions. After each piece, they completed a 12-item questionnaire corresponding to subjective entrainment reports. A factorial analysis of this Musical Entrainment Questionnaire revealed a two-factor solution, with Visceral Entrainment (VE) corresponding to sensations of internal bodily entrainment and Motor Entrainment (ME) reflecting participants’ inclination to move to the beat. These findings represent, to the best of our knowledge, the first empirical evidence for two components underlying entrainment that are capable of predicting specific emotional responses to music. Indeed, although both factors predicted the Affect, Joyful activation, Transcendence, Wonder, Power, and Tenderness dimensions, only VE predicted Nostalgia and Sadness. Moreover, Peacefulness was mostly predicted by ME, whereas Tension was mostly predicted by VE.
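To make the two-factor result concrete, here is a small, hypothetical factor-analysis sketch on simulated 12-item questionnaire responses (n = 119, matching the two groups combined). It uses scikit-learn's FactorAnalysis with varimax rotation as a stand-in for whatever factoring method the authors applied; the loadings will not reproduce the published solution.

```python
# Illustrative two-factor extraction from simulated 12-item responses,
# echoing the Visceral/Motor Entrainment split. Not the authors' analysis.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n_resp, n_items = 119, 12
latent = rng.normal(size=(n_resp, 2))        # two latent entrainment factors
loadings = np.zeros((2, n_items))
loadings[0, :6] = rng.uniform(0.6, 0.9, 6)   # items 1-6 load on factor 1
loadings[1, 6:] = rng.uniform(0.6, 0.9, 6)   # items 7-12 load on factor 2
X = latent @ loadings + rng.normal(scale=0.5, size=(n_resp, n_items))

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
print(np.round(fa.components_.T, 2))         # estimated item loadings (12 x 2)
```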
Journal Articles
Music Perception (2014) 31 (5): 470–484.
Published: 01 June 2014
... repulsion) and emotionally arousing. Emotions are more distinct and have more specified cognitive appraisal (e.g., related to basic emotions such as happiness, sadness, anger and fear)...
Abstract
Three behavioral experiments were conducted to investigate the hypothesis that perceived emotion activates expectations for upcoming musical events. Happy, sad, and neutral pictures were used as emotional primes. In Experiments 1 and 2, expectations for the continuation of neutral melodic openings were tested using an implicit task that required participants to judge the tuning of the first note of the melodic continuation. This first note was either high or low in pitch (Experiment 1) or followed either a narrow or wide melodic interval (Experiment 2). Experiment 3 assessed expectations using an explicit task and required participants to rate the quality of melodic continuations, which varied in register and interval size. Experiments 1 and 3 confirmed that emotion indeed modulates expectations for melodic continuations in a high or low register. The effect of emotion on expectations for melodic intervals was significant only in Experiment 3, although there was a trend for happiness to increase expectations for wide intervals in Experiment 2.