Search: Elizabeth West Marvin (1–4 of 4 results)
Journal Articles
Music Perception (2012) 29 (3): 325.
Published: 01 February 2012
Journal Articles
Music Perception (2008) 25 (3): 193–212.
Published: 01 February 2008
Abstract
This study examines the distributional view of key-finding, which holds that listeners identify key by monitoring the distribution of pitch-classes in a piece and comparing it to an ideal distribution for each key. In our experiment, participants judged the key of melodies generated randomly from pitch-class distributions characteristic of tonal music. Slightly more than half of listeners' judgments matched the generating keys, in both the untimed and the timed conditions. While this performance is much better than chance, it also indicates that the distributional view is far from a complete explanation of human key identification. No difference was found between participants with regard to absolute pitch ability, in either the speed or the accuracy of their key judgments. Several key-finding models were tested on the melodies to see which yielded the best match to participants' responses.
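The distributional view summarized in this abstract maps naturally onto a profile-correlation model of key-finding. The sketch below is illustrative only: it assumes a Krumhansl-Schmuckler-style approach using the commonly cited Krumhansl-Kessler probe-tone profiles, and is not a reproduction of the specific models tested in the study.

```python
# Minimal sketch of a distributional key-finding model: correlate a melody's
# pitch-class distribution with each of the 24 rotated major/minor key
# profiles and return the best match. Profile values are the commonly cited
# Krumhansl-Kessler probe-tone ratings (an assumption; the study's models
# may have used different profiles or weighting).
import numpy as np

# Profiles indexed from the tonic (pitch class 0) upward in semitones.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NAMES = ['C', 'C#', 'D', 'Eb', 'E', 'F', 'F#', 'G', 'Ab', 'A', 'Bb', 'B']

def estimate_key(pitch_classes, durations=None):
    """Estimate the key of a melody given as a sequence of pitch classes (0-11).

    durations, if given, weights each note's contribution to the distribution.
    Returns (key name, correlation with the winning profile).
    """
    if durations is None:
        durations = [1.0] * len(pitch_classes)
    counts = np.zeros(12)
    for pc, dur in zip(pitch_classes, durations):
        counts[pc % 12] += dur

    best_key, best_r = None, -np.inf
    for tonic in range(12):
        for profile, mode in ((MAJOR, 'major'), (MINOR, 'minor')):
            # Rotate the profile so that index `tonic` carries the tonic weight.
            rotated = np.roll(profile, tonic)
            r = np.corrcoef(counts, rotated)[0, 1]
            if r > best_r:
                best_key, best_r = f"{NAMES[tonic]} {mode}", r
    return best_key, best_r

# Example: a C-major-flavored melody (C D E F G E C) should correlate
# most strongly with the C major profile.
print(estimate_key([0, 2, 4, 5, 7, 4, 0]))
```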
Journal Articles
Music Perception (2000) 18 (2): 111–137.
Published: 01 December 2000
Abstract
Previous research has shown that listeners with absolute pitch identify white-key pitches (as on the piano) more quickly and accurately than they identify black-key pitches. Related research has shown that the timbre of tones also affects pitch identification. Our experiments extend the investigation of color and timbre effects on pitch recognition from isolated pitches to more complex musical textures, using both musicians with absolute pitch and musicians without absolute pitch as participants. In Experiment 1, listeners named isolated pitches in synthesized violin or piano timbres; in Experiment 2, they named the tonal center of classical string quartets or piano solos; in Experiment 3, they identified the tonal center of the same musical excerpts by humming. We replicated the color effect for both groups of participants in the response times for all three experiments, but found a color effect on accuracy rates only in Experiment 2. Timbre effects were found only in Experiment 1, where response times were quicker for piano tones than string tones. Participants' instrumental training affected response times: string players identified isolated tones most quickly; keyboard players identified the keys of compositions most quickly.
Journal Articles
Music Perception (1999) 16 (4): 389–407.
Published: 01 July 1999
Abstract
This study arises in response to previous research that calls into question the ability of musically trained listeners to perceive tonal closure in the original tonic key. In our Experiment 1, 36 experienced musicians heard 12 randomly ordered excerpts from piano and orchestral works in three categories: nonmodulating, modulating to the dominant, and modulating to a key other than the dominant. After hearing each excerpt, participants answered six questions, one of which asked whether the concluding key was the same as the initial one. Participants correctly answered this question at above-chance levels, with music academics (theorists and musicologists) more accurate than other musicians. In Experiment 2, 33 experienced musicians heard MIDI performances of six Handel keyboard compositions. On each trial, participants heard either the original composition or one of two variants with phrase units rearranged. Trials were quasi-randomly ordered so that an original and its variant were not heard in succession. Three types of tonal motion resulted from our formal manipulation: the stimulus began and ended in the tonic key, began and ended in the dominant key, or began and ended in different keys. After hearing each work, participants answered seven questions; data were analyzed for three of these: whether the beginning and ending keys were the same, whether the harmonic structure conformed to stylistic expectations, and whether the final key was the tonic. Participants' accuracy on the beginning/ending key question was no better than chance would predict; however, listeners were able to discriminate between works that ended in the tonic key and those that did not. Unlike Experiment 1, we found no significant differences in accuracy between music academics and other musicians. Listeners generally found both the original and the manipulated compositions to conform to stylistic expectations, possibly because they attended to local harmonic relationships rather than global ones.