Previous research indicates that people gain musical information from short (250 ms) segments of music. This study extended those findings by presenting shorter (100 ms, 150 ms, and 200 ms) segments of Western popular music in rapid temporal arrays, similar to scanning through music listening options. The question remains: is there a critical feature, such as the song’s vocalist, that listeners use when processing the complex timbral arrangements of Western popular music? Participants were presented with familiar and unfamiliar music segments, four segments in succession. Each trial contained a female vocalist, a male vocalist, or purely instrumental music. Participants were asked whether they heard a vocalist (Experiment 1) or a female vocalist (Experiment 2) in one of the four music segments. Vocalist detection in Experiment 1 was well above chance even for the shortest (100 ms) stimuli, and performance was better in the familiar trials than in the unfamiliar trials. When instructed in Experiment 2 to detect a female vocalist, however, participants performed better with the unfamiliar trials than with the familiar trials. Together, these findings suggest that the vocalist and vocalist gender may be stored as separate features and that their utility differs with one’s familiarity with the musical stimulus.