Frank A. Russo
Journal Articles
Music Perception (2020) 37 (4): 359–362.
Published: 11 March 2020
Abstract
In his article “The Territory Between Speech and Song: A Joint Speech Perspective,” Cummins (2020) argues that research has failed to adequately recognize an important category of vocal activity that falls outside of the domains of language and music, at least as they are typically defined. This category, referred to by Cummins as joint speech, spans a range of vocal activity so broad that it is not possible to define it using musical or phonetic terms. Instead, the feature that draws the varied examples together is vocal activity that is coordinated across participants and embedded in a physical and social context. In this invited commentary, I argue that although joint speech adds an important thread to the discourse on the relations between speech and song by putting an emphasis on the collective, it is ultimately related to a wider class of joint action phenomena found in the animal kingdom.
Journal Articles
Music Perception (2019) 37 (1): 57–65.
Published: 01 September 2019
Abstract
Note-to-note changes in brightness can influence the perception of interval size. Changes that are congruent with pitch tend to expand perceived interval size, whereas incongruent changes tend to contract it. In the case of singing, the brightness of notes can vary as a function of vowel content. In the present study, we investigated whether note-to-note changes in brightness arising from vowel content influence perception of relative pitch. In Experiment 1, three-note sequences were synthesized so that they varied with regard to the brightness of vowels from note to note. As expected, brightness influenced judgments of interval size. Changes in brightness that were congruent with changes in pitch led to an expansion of perceived interval size. A follow-up experiment confirmed that the results of Experiment 1 were not due to pitch distortions. In Experiment 2, the final note of three-note sequences was removed, and participants were asked to make speeded judgments of the pitch contour. An analysis of response times revealed that the brightness of vowels influenced contour judgments. Changes in brightness that were congruent with changes in pitch led to faster response times than did incongruent changes. These findings show that the brightness of vowels exerts an extra-pitch influence on the perception of relative pitch in song.
Journal Articles
Music Perception (2015) 33 (1): 96–109.
Published: 01 September 2015
Abstract
Four experiments assessed the influence of emergent-level structure on melodic processing difficulty. Emergent-level structure was manipulated across experiments and defined with reference to the Implication-Realization model of melodic expectancy (Narmour, 1990, 1992, 2000). Two measures of melodic processing difficulty were used to assess the influence of emergent-level structure: serial-reconstruction and cohesion ratings. In the serial-reconstruction experiment (Experiment 1), reconstruction was more efficient for melodies with simple emergent-level structure. In the cohesion experiments (Experiments 2–4), ratings were higher for melodies with simple emergent-level structure, and the advantage was generally greater in the presence of simple surface-level structure. Results indicate that emergent-level structure as defined by the model can influence melodic processing difficulty.
Journal Articles
Music Perception (2015) 32 (4): 355–363.
Published: 01 April 2015
Abstract
Skips are relatively infrequent in diatonic melodies and are compositionally treated in systematic ways. This treatment has been attributed to deliberate compositional strategies that are also subject to certain constraints. Study 1 showed that ease of vocal production may be accommodated compositionally. Number of skips and their distribution within a melody’s pitch range were compared between diverse statistical samples of vocal and instrumental melodies. Skips occurred less frequently in vocal melodies. Skips occurred more frequently in melodies’ lower and upper ranges, but there were more low skips than high (“low-skip bias”), especially in vocal melodies. Study 2 replicated these findings in the vocal and instrumental melodies of a single composition (Bach’s Mass in B minor). Study 3 showed that among the instrumental melodies of classical composers, low-skip bias was correlated with the proportion of vocal music within composers’ total output. We propose that, to varying degrees, composers apply a vocal template to instrumental melodies.
Journal Articles
Music Perception (2013) 30 (4): 361–367.
Published: 01 April 2013
Abstract
We examined facial responses to audio-visual presentations of emotional singing. Although many studies have now found evidence for facial responses to emotional stimuli, most have involved static facial expressions and none have involved singing. Singing represents a dynamic, ecologically valid emotional stimulus with unique demands on orofacial motion that are independent of emotion and related instead to pitch and linguistic production. Observers’ facial muscles were recorded with electromyography while they saw and heard recordings of a vocalist’s performance sung with different emotional intentions (happy, neutral, and sad). Audio-visual presentations successfully elicited facial mimicry in observers that was congruent with the performer’s intended emotions. Happy singing performances elicited increased activity in the zygomaticus major muscle region of observers, while sad performances evoked increased activity in the corrugator supercilii muscle region. These spontaneous facial muscle responses occurred within the first three seconds following the onset of video presentation, indicating that the emotional nuances of singing performances can elicit dynamic facial responses from observers.
Journal Articles
Music Perception (2009) 26 (5): 475–488.
Published: 01 June 2009
Abstract
Facial expressions are used in music performance to communicate structural and emotional intentions. Exposure to emotional facial expressions may also lead to subtle facial movements that mirror those expressions. Seven participants were recorded with motion capture as they watched and imitated phrases of emotional singing. Four different participants were recorded using facial electromyography (EMG) while performing the same task. Participants saw and heard recordings of musical phrases sung with happy, sad, and neutral emotional connotations. They then imitated the target stimulus, paying close attention to the emotion expressed. Facial expressions were monitored during four epochs: (a) during the target; (b) prior to their imitation; (c) during their imitation; and (d) after their imitation. Expressive activity was observed in all epochs, implicating a role for facial expressions in the perception, planning, production, and post-production of emotional singing.
Journal Articles
Music Perception (2003) 21 (1): 119–127.
Published: 01 September 2003
Abstract
Children (3–6 years old) and adults were trained for 6 weeks to identify a single tone, C5. Test sessions, held at the end of each week, had participants identify C5 within a set of seven alternative tones. By the third week of training, the identification accuracy of children 5–6 years old surpassed that of children 3–4 years old and adults. Combined with an analysis of perceptual strategies, the data provide strong support for a critical period for absolute pitch acquisition. Received July 12, 2003; accepted August 1, 2003.