William Forde Thompson
1-14 of 14
Journal Articles
Journal:
Music Perception
Music Perception (2019) 37 (1): 57–65.
Published: 01 September 2019
Abstract
Note-to-note changes in brightness can influence the perception of interval size. Changes that are congruent with pitch tend to expand perceived interval size, whereas incongruent changes tend to contract it. In the case of singing, the brightness of notes can vary as a function of vowel content. In the present study, we investigated whether note-to-note changes in brightness arising from vowel content influence the perception of relative pitch. In Experiment 1, three-note sequences were synthesized so that they varied in vowel brightness from note to note. As expected, brightness influenced judgments of interval size: changes in brightness that were congruent with changes in pitch led to an expansion of perceived interval size. A follow-up experiment confirmed that the results of Experiment 1 were not due to pitch distortions. In Experiment 2, the final note of three-note sequences was removed, and participants were asked to make speeded judgments of the pitch contour. An analysis of response times revealed that the brightness of vowels influenced contour judgments: changes in brightness that were congruent with changes in pitch led to faster response times than incongruent changes. These findings show that the brightness of vowels exerts an extra-pitch influence on the perception of relative pitch in song.
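In this literature, brightness is commonly operationalized as the spectral centroid: the amplitude-weighted mean frequency of a sound's magnitude spectrum. As a rough illustration of that measure (a minimal NumPy sketch, not necessarily the computation used in the study):

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Amplitude-weighted mean frequency of the magnitude spectrum.
    Higher centroids correspond to perceptually 'brighter' timbres."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# Illustrative comparison: a tone with a strong upper partial (bright,
# /i/-like) vs. the same fundamental alone (duller, /u/-like)
sr = 44100
t = np.arange(sr) / sr
dull = np.sin(2 * np.pi * 220 * t)
bright = dull + 0.8 * np.sin(2 * np.pi * 2640 * t)
print(spectral_centroid(dull, sr), spectral_centroid(bright, sr))
```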
Journal Articles
Journal:
Music Perception
Music Perception (2018) 35 (5): 527–539.
Published: 01 June 2018
Abstract
Death Metal music with violent themes is characterized by vocalizations with unnaturally low fundamental frequencies and high levels of distortion and roughness. These attributes decrease the signal-to-noise ratio, rendering linguistic content difficult to understand and leaving the impression of growling, screaming, or other non-linguistic vocalizations associated with aggression and fear. Here, we compared the ability of fans and non-fans of Death Metal to accurately perceive sung words extracted from Death Metal music. We also examined whether music training confers an additional benefit to intelligibility. In a 2 × 2 between-subjects factorial design (fans/non-fans, musicians/nonmusicians), four groups of participants (n = 16 per group) were presented with 24 sung words (one per trial), extracted from the popular American Death Metal band Cannibal Corpse. On each trial, participants completed a four-alternative forced-choice word recognition task. Intelligibility (word recognition accuracy) was above chance for all groups and was significantly enhanced for fans (65.88%) relative to non-fans (51.04%). In the fan group, intelligibility between musicians and nonmusicians was statistically similar. In the non-fan group, intelligibility was significantly greater for musicians relative to nonmusicians. Results are discussed in the context of perceptual learning and the benefits of expertise for decoding linguistic information in sub-optimum acoustic conditions.
Includes: Supplementary data
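A note on "above chance": with four response alternatives, chance accuracy is 25%, so above-chance performance can be tested against a binomial null. A minimal sketch of that comparison using SciPy (the trial counts below are invented for illustration, not the study's data):

```python
from scipy.stats import binomtest

# Hypothetical listener: 14 of 24 four-alternative trials correct.
# Under the null hypothesis, accuracy is 1/4; test for above-chance performance.
result = binomtest(k=14, n=24, p=0.25, alternative="greater")
print(f"accuracy = {14 / 24:.2%}, p = {result.pvalue:.4f}")
```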
Journal Articles
Journal:
Music Perception
Music Perception (2016) 33 (4): 401–413.
Published: 01 April 2016
Abstract
We examined explicit processing of musical syntax and tonality in a group of Han Chinese Mandarin speakers with congenital amusia, and the extent to which pitch discrimination impairments were associated with syntax and tonality processing. In Experiment 1, we assessed whether congenital amusia is associated with impaired explicit processing of musical syntax. Congruity ratings were examined for syntactically regular or irregular endings in harmonic and melodic contexts. Unlike controls, amusic participants failed to explicitly distinguish regular from irregular endings in both contexts. Surprisingly, however, a concurrent manipulation of pitch distance did not affect the processing of musical syntax for amusics, and their impaired music-syntactic processing was uncorrelated with their pitch discrimination thresholds. In Experiment 2, we assessed tonality perception using a probe-tone paradigm. Recovery of the tonal hierarchy was less evident for the amusic group than for the control group, and this reduced sensitivity to tonality in amusia was also unrelated to poor pitch discrimination. These findings support the view that music structure is processed by cognitive and neural resources that operate independently of pitch discrimination, and that these resources are impaired in explicit judgments for individuals with congenital amusia.
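The probe-tone paradigm presents a key-defining context followed by each of the 12 chromatic tones; goodness-of-fit ratings are then compared with an established tonal hierarchy, typically by correlating them with the Krumhansl-Kessler key profiles. A minimal sketch of that comparison (the listener ratings below are invented for illustration):

```python
import numpy as np

# Krumhansl-Kessler major-key profile for pitch classes C, C#, ..., B
KK_MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                     2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

# Hypothetical probe-tone ratings from one listener (1-7 scale)
ratings = np.array([6.4, 2.5, 3.2, 2.8, 4.1, 4.3,
                    2.9, 5.0, 2.6, 3.5, 2.7, 3.0])

# Pearson correlation: higher r indicates stronger recovery of the
# tonal hierarchy from the listener's ratings
r = np.corrcoef(ratings, KK_MAJOR)[0, 1]
print(f"r = {r:.2f}")
```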
Journal Articles
Journal:
Music Perception
Music Perception (2015) 33 (1): 96–109.
Published: 01 September 2015
Abstract
Four experiments assessed the influence of emergent-level structure on melodic processing difficulty. Emergent-level structure was manipulated across experiments and defined with reference to the Implication-Realization model of melodic expectancy (Narmour, 1990, 1992, 2000). Two measures of melodic processing difficulty were used to assess the influence of emergent-level structure: serial-reconstruction and cohesion ratings. In the serial-reconstruction experiment (Experiment 1), reconstruction was more efficient for melodies with simple emergent-level structure. In the cohesion experiments (Experiments 2-4), ratings were higher for melodies with simple emergent-level structure, and the advantage was generally greater in the presence of simple surface-level structure. Results indicate that emergent-level structure as defined by the model can influence melodic processing difficulty.
Journal Articles
Journal:
Music Perception
Music Perception (2011) 28 (3): 247–264.
Published: 01 February 2011
Abstract
In two experiments, we assessed the experiential and cognitive consequences of seven minutes' exposure to music (Experiment 1) and speech (Experiment 2). In Experiment 1, participants listened to music for seven minutes and reported their emotional experiences based on ratings of valence (pleasant-unpleasant) and two types of arousal: energy (energetic-boring) and tension (tense-calm). They were then assessed on two cognitive skills: speed of processing and creativity. Music varied in pitch height (high- or low-pitched), rate (fast or slow), and intensity (loud or soft). Experiment 2 replicated Experiment 1 using male and female speech. Experiential and cognitive consequences of stimulus manipulations overlapped in the two experiments, suggesting that music and speech draw on a common emotional code. There were also divergent effects, however, implicating domain-specific influences on emotion induction. We discuss the results in view of a psychological framework for understanding auditory signals of emotion.
Journal Articles
Journal:
Music Perception
Music Perception (2009) 26 (5): 475–488.
Published: 01 June 2009
Abstract
Facial expressions are used in music performance to communicate structural and emotional intentions. Exposure to emotional facial expressions also may lead to subtle facial movements that mirror those expressions. Seven participants were recorded with motion capture as they watched and imitated phrases of emotional singing. Four different participants were recorded using facial electromyography (EMG) while performing the same task. Participants saw and heard recordings of musical phrases sung with happy, sad, and neutral emotional connotations. They then imitated the target stimulus, paying close attention to the emotion expressed. Facial expressions were monitored during four epochs: (a) during the target; (b) prior to their imitation; (c) during their imitation; and (d) after their imitation. Expressive activity was observed in all epochs, implicating a role of facial expressions in the perception, planning, production, and post-production of emotional singing.
Journal Articles
Journal:
Music Perception
Music Perception (2006) 24 (1): 89–94.
Published: 01 September 2006
Abstract
The rigors of establishing innateness and domain specificity pose challenges to adaptationist models of music evolution. In articulating a series of constraints, the authors of the target articles provide strategies for investigating the potential origins of music. We propose additional approaches for exploring theories based on exaptation. We discuss a view of music as a multimodal system of engaging with affect, enabled by capacities of symbolism and a theory of mind.
Journal Articles
Journal:
Music Perception
Music Perception (2006) 23 (4): 319–330.
Published: 01 April 2006
Abstract
Using a three-dimensional model of affect, we compared the affective consequences of manipulating intensity, rate, and pitch height in music and speech. Participants rated 64 music and 64 speech excerpts on valence (pleasant-unpleasant), energy arousal (awake-tired), and tension arousal (tense-relaxed). For music and speech, loud excerpts were judged as more pleasant, energetic, and tense than soft excerpts. Manipulations of rate had overlapping effects on music and speech. Fast music and speech were judged as having greater energy than slow music and speech. However, whereas fast speech was judged as less pleasant than slow speech, fast music was judged as having greater tension than slow music. Pitch height had opposite consequences for music and speech, with high-pitched speech but low-pitched music associated with higher ratings of valence (more pleasant). Interactive effects on judgments were also observed. We discuss similarities and differences between vocal and musical communication of affect, and the need to distinguish between two types of arousal: energy and tension.
Journal Articles
Journal:
Music Perception
Music Perception (2002) 20 (2): 151–171.
Published: 01 December 2002
Abstract
We examined effects of tempo and mode on spatial ability, arousal, and mood. A Mozart sonata was performed by a skilled pianist and recorded as a MIDI file. The file was edited to produce four versions that varied in tempo (fast or slow) and mode (major or minor). Participants listened to a single version and completed measures of spatial ability, arousal, and mood. Performance on the spatial task was superior after listening to music at a fast rather than a slow tempo, and when the music was presented in major rather than minor mode. Tempo manipulations affected arousal but not mood, whereas mode manipulations affected mood but not arousal. Changes in arousal and mood paralleled variation on the spatial task. The findings are consistent with the view that the "Mozart effect" is a consequence of changes in arousal and mood.
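Tempo in a MIDI file lives in set_tempo meta-messages, so a performance's rate can be changed without touching the notes. A sketch of that manipulation using the mido library (file names and the target tempo are placeholders; the major/minor mode edit, which requires altering scale degrees, is not shown):

```python
import mido

def retime(path_in, path_out, bpm):
    """Rewrite every set_tempo meta-message to a new tempo,
    leaving note events untouched."""
    mid = mido.MidiFile(path_in)
    for track in mid.tracks:
        for msg in track:
            if msg.type == "set_tempo":
                msg.tempo = mido.bpm2tempo(bpm)
    mid.save(path_out)

retime("sonata.mid", "sonata_fast.mid", bpm=165)  # placeholder values
```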
Journal Articles
Journal:
Music Perception
Music Perception (1999) 17 (1): 43–64.
Published: 01 October 1999
Abstract
Studies of the link between music and emotion have primarily focused on listeners' sensitivity to emotion in the music of their own culture. This sensitivity may reflect listeners' enculturation to the conventions of their culture's tonal system. However, it may also reflect responses to psychophysical dimensions of sound that are independent of musical experience. A model of listeners' perception of emotion in music is proposed in which emotion in music is communicated through a combination of universal and cultural cues. Listeners may rely on either of these cues, or both, to arrive at an understanding of musically expressed emotion. The current study addressed the hypotheses derived from this model using a cross-cultural approach. The following questions were investigated: Can people identify the intended emotion in music from an unfamiliar tonal system? If they can, is their sensitivity to intended emotions associated with perceived changes in psychophysical dimensions of music? Thirty Western listeners rated the degree of joy, sadness, anger, and peace in 12 Hindustani raga excerpts (field recordings obtained in North India). In accordance with the raga-rasa system, each excerpt was intended to convey one of the four moods or "rasas" that corresponded to the four emotions rated by listeners. Listeners also provided ratings of four psychophysical variables: tempo, rhythmic complexity, melodic complexity, and pitch range. Listeners were sensitive to the intended emotion in ragas when that emotion was joy, sadness, or anger. Judgments of emotion were significantly related to judgments of psychophysical dimensions, and, in some cases, to instrument timbre. The findings suggest that listeners are sensitive to musically expressed emotion in an unfamiliar tonal system, and that this sensitivity is facilitated by psychophysical cues.
Journal Articles
Journal:
Music Perception
Music Perception (1998) 15 (3): 231–252.
Published: 01 April 1998
Abstract
Principles of melodic implication were evaluated in an analysis of folk song melodies. Implicative and closural intervals were identified in 513 Bohemian melodies, and the note following each interval (the continuation note) was analyzed. Multinomial log-linear analysis was conducted to assess the extent to which Narmour's (1990) implicative principles could predict continuation notes. Support was found for five principles, with slightly greater support for strongly implicative intervals than for closural intervals, and for large intervals than for small intervals. Alternative models of melodic implication are discussed.
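Narmour's principles can be operationalized as predicates over an implicative interval and its continuation; e.g., registral direction holds that small intervals imply continued direction while large ones imply a reversal, and proximity holds that the continuation interval should be small. A toy encoding of two such predicates (a deliberately simplified reading, not the paper's full coding scheme):

```python
def registral_direction(implicative, continuation):
    """Small implicative intervals (<= 6 semitones) imply the same
    registral direction; large ones imply a change of direction."""
    same_dir = (implicative > 0) == (continuation > 0)
    return same_dir if abs(implicative) <= 6 else not same_dir

def proximity(continuation):
    """The continuation interval should be small (here, <= 5 semitones)."""
    return abs(continuation) <= 5

# An ascending major sixth (+9) followed by a descending step (-2)
# satisfies both predicates, as the model predicts for large intervals
print(registral_direction(9, -2), proximity(-2))  # True True
```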
Journal Articles
Journal:
Music Perception
Music Perception (1997) 14 (3): 263–280.
Published: 01 April 1997
Abstract
In two experiments, goodness-of-fit ratings of pairs of musical elements (triads, dyads, and octave-complex tones) were examined in view of a psychoacoustic model. The model, referred to as the pitch commonality model, evaluates the sharing of fundamental frequencies, overtones, and subharmonic tone sensations between sequential elements and also considers the effects of auditory masking within each element. Two other models were also assessed: a reduced model that considers the sharing of fundamental frequencies alone and the cycle-of-fifths model of key and chord relatedness. In Experiment 1, listeners rated the goodness of fit of 12 octave-complex tones following a major triad, major-third dyad, and perfect-fifth dyad. Multiple regression revealed that pitch commonality provided predictive power beyond that of the reduced model. A regression model based on pitch commonality and the cycle of fifths had a multiple R of .92. In Experiment 2, listeners rated how well a triad or dyad followed another triad or dyad. All pairings of the major triad, major-third dyad, and perfect-fifth dyad (pair types) were presented at various transpositions with respect to one another. Multiple regression revealed that pitch commonality again provided predictive power beyond that of the reduced model. A regression model based on pitch commonality, the cycle of fifths, and a preference for trials ending with a triad had a multiple R of .84. We discuss the role of psychoacoustic factors and knowledge of chord and key relationships in shaping the perception of harmonic material.
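The reduced model mentioned here (sharing of fundamental frequencies alone) is easy to illustrate: represent each sonority by its pitch classes and score a pair by their overlap. A rough sketch (the full pitch commonality model, which also weighs overtones, subharmonic tone sensations, and masking, is considerably more involved):

```python
def fundamental_overlap(chord_a, chord_b):
    """Jaccard overlap of the pitch classes of two sonorities
    (fundamentals only; the reduced model, not full pitch commonality)."""
    a = {pitch % 12 for pitch in chord_a}
    b = {pitch % 12 for pitch in chord_b}
    return len(a & b) / len(a | b)

# C major triad (C-E-G) vs. a perfect-fifth dyad on C (C-G):
# the pair shares two of three pitch classes
print(fundamental_overlap([60, 64, 67], [60, 67]))  # ~0.67
```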
Journal Articles
Journal:
Music Perception
Music Perception (1989) 7 (1): 15–42.
Published: 01 October 1989
Abstract
This report examines Clynes's theory of "pulse" for performances of music by Mozart and Beethoven (e.g., Clynes, 1983, 1987). In three experiments that used a total of seven different compositions, an analysis-by-synthesis approach was used to examine the repetitive patterns of timing and loudness thought to be associated with performances of Mozart and Beethoven. Across performances, judgments by trained musicians provided support for some of the basic claims made by Clynes. However, judgments of individual performances were not always consistent with predictions. In Experiment 1, melodies were judged to be more musical if they were played with the pulse than if they were played with an altered version of the pulse or if they were played without expression. In Experiment 2, listeners were asked to judge whether performances of Mozart were "Mozartian" and whether performances of Beethoven were "Beethovenian." Ratings were highest if the pulse of the composer was implemented, and significantly lower if the pulse of another composer was implemented (e.g., the Mozart pulse in the Beethoven piece) in all or part of each piece. In Experiment 3, a Beethoven piece was played with each of three pulses: Beethoven, Haydn, and Schubert. Listeners judged the version with the Beethoven pulse as most Beethovenian, but the version with the Haydn pulse as most "musical." Although the overall results were encouraging, it is suggested that there are significant difficulties in evaluating Clynes's theory and that much more research is needed before his ideas can be assessed adequately. The need for clarification of some theoretical issues surrounding the concept of pulse is emphasized.