Keywords: popular music
1–6 of 6 results
Journal Articles
Music Perception (2019) 36 (4): 353–370.
Published: 01 April 2019
Keywords: rhythm and timing; popular music; metre; corpus; music and language. Corresponding author: Ivan Tan, Eastman School of Music, 26 Gibbs St., Rochester, NY 14604 (itan@u.rochester.edu).
Abstract
While syncopation generally refers to any conflict between surface accents and underlying meter, in rock and other recent popular styles it takes a more specific form in which accented notes occur just before strong beats. Such “anticipatory” syncopations suggest that there is an underlying cognitive representation in which the accented notes and strong beats align. Syllabic stress is crucial to the identification of such syncopations; to facilitate this, we present a corpus of rock melodies annotated with lyrics and syllabic stress values. We propose a new measure of syncopation that incorporates syllabic stress; we also propose a measure of anticipatory syncopation, and show that it reveals a strong presence of this type of syncopation in rock music. We then use these measures to explore other aspects of syncopation in rock, including its occurrence in different parts of the 4/4 measure, its dependence on tempo, its historical evolution, and its aesthetic functions.
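To make the idea concrete, here is a minimal sketch of an anticipatory-syncopation count. The scoring rule is an illustrative stand-in for the authors' published measure: it assumes a melody encoded as (sixteenth-note position, syllabic stress) pairs within one 4/4 measure, and it flags a stressed syllable whose immediately following, metrically stronger position carries no onset.

```python
# Illustrative sketch only, not the authors' published measure.
# Metric strength of each sixteenth-note position in one 4/4 measure:
# 4 = downbeat, 3 = beat 3, 2 = beats 2 and 4, 1 = offbeat eighths, 0 = sixteenths.
STRENGTH = [4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0]

def anticipatory_syncopations(notes):
    """notes: list of (position, stress) pairs, with position in 0..15 and
    stress in {0, 1} (1 = stressed syllable). Returns the event count."""
    onsets = {pos for pos, _ in notes}
    count = 0
    for pos, stress in notes:
        nxt = (pos + 1) % 16  # the immediately following sixteenth position
        if stress and STRENGTH[nxt] > STRENGTH[pos] and nxt not in onsets:
            count += 1  # stressed note "anticipates" an empty stronger beat
    return count

# A stressed syllable one sixteenth before beat 3, with beat 3 itself empty:
print(anticipatory_syncopations([(4, 0), (6, 0), (7, 1)]))  # -> 1
```

A fuller measure would also handle anticipations at the eighth-note level and weight events by the strength of the anticipated beat; this sketch shows only the basic alignment logic.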
Journal Articles
Music Perception (2018) 35 (5): 607–621.
Published: 01 June 2018
Authors: Shannon L. Layman; W. Jay Dowling.
Abstract
Previous research indicates that people gain musical information from short (250 ms) segments of music. This study extended previous findings by presenting shorter (100 ms, 150 ms, and 200 ms) segments of Western popular music in rapid temporal arrays, similar to scanning through music listening options. The question remains: is there a critical feature, such as the song’s vocalist, that listeners use when processing the complex timbral arrangements of Western popular music? Participants were presented with familiar and unfamiliar music segments, four segments in succession. Each trial contained a female or a male vocalist, or was purely instrumental. Participants were asked whether they heard a vocalist (Experiment 1) or a female vocalist (Experiment 2) in one of the four music segments. Vocalist detection in Experiment 1 was well above chance for the shortest stimuli (100 ms), and performance was better in the familiar trials than in the unfamiliar trials. When instructed in Experiment 2 to detect a female vocalist, however, participants performed better with the unfamiliar trials than with the familiar trials. Together, these findings suggest that the vocalist and vocalist gender may be stored as separate features and that their utility differs based on one’s familiarity with the musical stimulus.
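As a concrete illustration of the above-chance claim, the sketch below runs a one-sided binomial test for a yes/no detection task, where guessing yields 50% correct. The trial counts are hypothetical, not data from the study.

```python
# Hypothetical above-chance check for yes/no vocalist detection (chance = 0.5).
from scipy.stats import binomtest

hits, trials = 70, 96  # made-up numbers for illustration
result = binomtest(hits, trials, p=0.5, alternative="greater")
print(f"{hits}/{trials} correct, p = {result.pvalue:.4f}")
```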
Journal Articles
Music Perception (2017) 34 (3): 352–365.
Published: 01 February 2017
Abstract
In a prior study (Temperley & Tan, 2013), participants rated the “happiness” of melodies in different diatonic modes. A strong pattern was found, with happiness decreasing as scale steps were lowered. We wondered: Does this pattern reflect the familiarity of diatonic modes? The current study examines familiarity directly. In the experiments reported here, college students without formal music training heard a series of melodies, each with a three-measure beginning (“context”) in a diatonic mode and a one-measure ending that was either in the context mode or in a mode that differed from the context by one scale degree. Melodies were constructed using four pairs of modes with the same tonic: Lydian/Ionian, Ionian/Mixolydian, Dorian/Aeolian, and Aeolian/Phrygian. Participants rated how well the ending “fit” the context. Two questions were of interest: (1) Do listeners give higher ratings to some modes (as endings) overall? (2) Do listeners give a higher rating to the ending if its mode matches that of the context? The results show a strong main effect of ending, with Ionian (major) and Aeolian (natural minor) as the most familiar (highly rated) modes. This aligns well with corpus data representing the frequency of different modes in popular music. There was also a significant interaction between ending and context, whereby listeners rated an ending higher if its mode matched the context. Our findings suggest that (1) our earlier “happiness” results cannot be attributed to familiarity alone, and (2) listeners without formal knowledge of diatonic modes are able to internalize diatonic modal frameworks.
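The four mode pairs can be verified directly: each pair shares a tonic and differs in exactly one scale degree. The check below is my own illustration (scales given as semitone offsets above the tonic), not code from the paper.

```python
# Diatonic modes as semitone offsets above a common tonic.
MODES = {
    "Lydian":     (0, 2, 4, 6, 7, 9, 11),
    "Ionian":     (0, 2, 4, 5, 7, 9, 11),
    "Mixolydian": (0, 2, 4, 5, 7, 9, 10),
    "Dorian":     (0, 2, 3, 5, 7, 9, 10),
    "Aeolian":    (0, 2, 3, 5, 7, 8, 10),
    "Phrygian":   (0, 1, 3, 5, 7, 8, 10),
}

PAIRS = [("Lydian", "Ionian"), ("Ionian", "Mixolydian"),
         ("Dorian", "Aeolian"), ("Aeolian", "Phrygian")]

for a, b in PAIRS:
    diffs = [i + 1 for i, (x, y) in enumerate(zip(MODES[a], MODES[b])) if x != y]
    print(f"{a} / {b}: differ at scale degree(s) {diffs}")
# Each pair differs at exactly one degree: 4, 7, 6, and 2 respectively.
```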
Journal Articles
Music Perception (2013) 30 (3): 237–257.
Published: 01 February 2013
Abstract
In this experiment, participants (nonmusicians) heard pairs of melodies and had to judge which of the two melodies was happier. Each pair consisted of a single melody presented in two different diatonic modes (Lydian, Ionian, Mixolydian, Dorian, Aeolian, or Phrygian) with a constant tonic of C; all pairs of modes were used. The results suggest that modes imply increasing happiness as scale-degrees are raised, with the exception of Lydian, which is less happy than Ionian. Overall, the results are best explained by familiarity: Ionian (major mode), the most common mode in both classical and popular music, is the happiest, and happiness declines with increasing distance from Ionian. However, familiarity does not entirely explain our results. Familiarity predicts that Mixolydian would be happier than Lydian (since they are equally similar to Ionian, and Mixolydian is much more common in popular music); but for almost half of our participants, the reverse was true. This suggests that the “sharpness” of a mode also affects its perceived happiness, either due to pitch height or to the position of the scale relative to the tonic on the “line of fifths”; we favor the latter explanation.
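The “sharpness” notion can be made precise: a mode’s position on the line of fifths corresponds to how many sharps (positive) or flats (negative) its signature carries relative to the major (Ionian) scale on the same tonic. The small computation below, my own illustration, shows the ordering that the happiness judgments roughly track, with Lydian the notable exception.

```python
# Mode "sharpness" as signed distance from Ionian on the line of fifths.
MODES = {
    "Lydian":     (0, 2, 4, 6, 7, 9, 11),
    "Ionian":     (0, 2, 4, 5, 7, 9, 11),
    "Mixolydian": (0, 2, 4, 5, 7, 9, 10),
    "Dorian":     (0, 2, 3, 5, 7, 9, 10),
    "Aeolian":    (0, 2, 3, 5, 7, 8, 10),
    "Phrygian":   (0, 1, 3, 5, 7, 8, 10),
}

def sharpness(mode):
    """Each lowered degree vs. Ionian adds one flat (one step flatward on
    the line of fifths); each raised degree adds one sharp."""
    return sum(m - i for m, i in zip(MODES[mode], MODES["Ionian"]))

for name in MODES:
    print(f"{name:10s} {sharpness(name):+d}")
# Lydian +1, Ionian +0, Mixolydian -1, Dorian -2, Aeolian -3, Phrygian -4
```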
Journal Articles
Music Perception (2012) 30 (2): 129–146.
Published: 01 December 2012
Keywords: informal singing; popular music; music in everyday life; neo-tribes; random forest; statistical model.
Abstract
The study investigates contextual and musical factors that incite audiences in Western music entertainment venues to sing along to pop songs. Thirty nights of field research were carried out in five entertainment venues across northern England. The percentage of people singing along was recorded for each of the 1,054 “song events,” serving as the dependent variable. In addition, musical analysis was carried out on the songs of a subset of 332 song events. Nine contextual factors as well as 32 musical features of the songs were considered as different categories of explanatory variables. Regression trees and a random forest analysis were employed to model the empirical data statistically. Results indicate that contextual factors can account for 40% of the variability in sing-along behavior, while adding musical factors to the model (in particular, those relating to vocal performance) explained about another 25% of the variance. Results are discussed with respect to theoretical approaches to neo-tribal behavior.
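The modeling step can be sketched as follows. This is a minimal illustration with synthetic data and placeholder feature sets, not the authors' pipeline: it fits a random forest on contextual features alone and then on contextual plus musical features, comparing out-of-bag variance explained.

```python
# Sketch of the two-stage random-forest comparison; data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 332                               # song events with musical analysis
X_context = rng.normal(size=(n, 9))   # 9 contextual factors (placeholder values)
X_musical = rng.normal(size=(n, 32))  # 32 musical features (placeholder values)
y = rng.uniform(0, 100, size=n)       # % of the audience singing along

for label, X in [("context only", X_context),
                 ("context + musical", np.hstack([X_context, X_musical]))]:
    rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
    rf.fit(X, y)
    print(f"{label}: OOB R^2 = {rf.oob_score_:.2f}")
# With the real data, context explains ~40% of the variance and musical
# features (especially vocal performance) add roughly another 25%.
```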
Journal Articles
Music Perception (2010) 27 (5): 337–354.
Published: 01 June 2010
Keywords: music memory; meta-memory; popular music; emotion; style.
Abstract
Short clips (300 and 400 ms), taken from popular songs from the 1960s through the 2000s, were presented to participants in two experiments to study the detail and contents of musical memory. For 400 ms clips, participants identified both artist and title on more than 25% of the trials. Very accurate confidence ratings showed that this knowledge was recalled consciously. Performance was somewhat higher if the clip contained a word or part-word from the title. Even when a clip was not identified, it conveyed information about emotional content, style, and, to some extent, decade of release. Performance on song identification was markedly lower for 300 ms clips, although participants still gave consistent emotion and style judgments, and fairly accurate judgments of decade of release. The decade of release had no effect on identification, emotion consistency, or style consistency. However, older songs were preferred, suggesting that the availability of recorded music alters the pattern of preferences previously assumed to be established during adolescence and early adulthood. Taken together, the results point to extraordinary abilities to identify music based on highly reduced information.
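For reference, cutting stimuli of these durations from a decoded signal is a simple slicing operation. The sketch below is my own illustration; the sample rate and the random-offset policy are assumptions, not details from the study.

```python
# Extracting 300 ms and 400 ms clips from a decoded audio signal.
import numpy as np

sr = 44100                        # assumed CD-quality sample rate
song = np.random.randn(sr * 30)   # stand-in for 30 s of decoded audio
rng = np.random.default_rng(0)

def random_clip(signal, duration_s):
    """Cut one clip of the given duration from a random offset."""
    n = int(duration_s * sr)
    start = rng.integers(0, len(signal) - n)
    return signal[start:start + n]

print(len(random_clip(song, 0.400)))  # 400 ms -> 17640 samples
print(len(random_clip(song, 0.300)))  # 300 ms -> 13230 samples
```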