1–20 of 1390 Search Results for *
Journal Article: Music Perception (2021) 38 (3): 335–336. Published: 01 February 2021
Abstract
In the article “Consonance preferences within an unconventional tuning system,” Friedman and colleagues (2021) examine consonance ratings of a large range of dyads and triads from the Bohlen-Pierce chromatic just (BPCJ) scale. The study is designed as a replication of a recent paper by Bowling, Purves, and Gill (2018), which proposes that perception of consonance in dyads, triads, and tetrads can be predicted by their harmonic similarity to human vocalisations. In this commentary, we would like to correct some interpretations in Friedman et al.’s (2021) discussion of our paper (Smit, Milne, Dean, & Weidemann, 2019), as well as express some concerns regarding the statistical methods used. We also propose a stronger emphasis on the use of what Friedman et al. term composite models, as a range of recent evidence strongly suggests that no single acoustic measure can fully predict the complex experience of consonance.
Journal Article: Music Perception (2021) 38 (3): 337–339. Published: 01 February 2021
Abstract
I discuss three fundamental questions underpinning the study of consonance: 1) What features cause a particular chord to be perceived as consonant? 2) How did humans evolve the ability to perceive these features? 3) Why did humans evolve to attribute particular aesthetic valences to these features (if they did at all)? The first question has been addressed by several recent articles, including Friedman, Kowalewski, Vuvan, and Neill (2021), with the common conclusion that consonance in Western listeners is driven by multiple features such as harmonicity, interference between partials, and familiarity. On this basis, it seems relatively straightforward to answer the second question: each of these consonance features seems to be grounded in fundamental aspects of human auditory perception, such as auditory scene analysis and auditory long-term memory. However, the third question is harder to resolve. I describe several potential answers, and argue that the present evidence is insufficient to distinguish between them, despite what has been claimed in the literature. I conclude by discussing what kinds of future studies might be able to shed light on this problem.
Journal Article: Music Perception (2021) 38 (3): 331–334. Published: 01 February 2021
Abstract
Evidence supporting a link between harmonicity and the attractiveness of simultaneous tone combinations has emerged from an experiment designed to mitigate effects of musical enculturation. I examine the analysis undertaken to produce this evidence and clarify its relation to an account of tonal aesthetics based on the biology of auditory-vocal communication.
Journal Article: Music Perception (2021) 38 (3): 245–266. Published: 01 February 2021
Abstract
Effective audience engagement with musical performance involves social, cognitive and affective elements. We investigate the influence of observers’ musical expertise and instrumental motor expertise on their affective and cognitive responses to complex and unfamiliar classical piano performances of works by Scriabin and Hanson presented in audio and audio-visual formats. Observers gave their felt affect (arousal and valence) and their action understanding responses continuously while observing the performances. Liking and familiarity were rated after each excerpt. As hypothesized: visual information enhanced observers’ action understanding and liking ratings; observers with music training rated their action understanding, liking and familiarity higher than did nonmusicians; observers’ felt affect did not vary according to their musical or motor expertise. Contrary to our hypotheses: visual information had only a slight effect on observers’ arousal felt affect responses and none on valence; musicians’ specific instrumental motor expertise did not influence action understanding responses. We also observed a significant negative relationship between action understanding and felt affect responses. Ideas of empathy in musical interactions motivated the research; the empathy framework in relation to musical performance is discussed. Nonmusician audiences might be sensitized to challenging musical performances through multimodal strategies to build the performer-observer connection and increase understanding of performance.
Journal Article: Music Perception (2021) 38 (3): 267–281. Published: 01 February 2021
Abstract
Passive music listening has shown its capacity to soothe pain in several clinical and experimental studies. This phenomenon, known as music-induced analgesia, could partly be explained by the modulation of pain signals in response to the stimulation of brain and brainstem centers. We hypothesized that music-induced analgesia may involve inhibitory descending pain systems. We assessed pain-related responses to endogenous pain control mechanisms known to depend on descending pain modulation: peak of first pain (PP), temporal summation (TS), and diffuse noxious inhibitory control (DNIC). Twenty-seven healthy participants (14 men, 13 women) were exposed to a conditioned pain modulation paradigm during a 20-minute relaxing music session and a silence condition. Pain was continuously measured with a visual analogue scale. Pain ratings were significantly lower with music listening (p < .02). Repeated measures ANOVA indicated significant differences between conditions within PP and TS (p < .05) but not in DNIC. These findings suggest that music listening could strengthen components of the inhibitory descending pain pathways operating at the dorsal spinal cord level.
Journal Article: Music Perception (2021) 38 (3): 282–292. Published: 01 February 2021
Abstract
In music, vibrato consists of cyclic variations in pitch, loudness, or spectral envelope (hereafter, “timbre vibrato,” TV), or combinations of these. Here, stimuli with TV were compared with those having loudness vibrato (LV). In Experiment 1, participants chose from tones with different vibrato depth to match a reference vibrato tone. When matching to tones with the same vibrato type, 70% of the variance was explained by linear matching of depth. Less variance (40%) was explained when matching dissimilar vibrato types. Fluctuations in loudness were perceived as approximately the same depth as fluctuations in spectral envelope (i.e., about 1.3 times deeper than fluctuations in spectral centroid). In Experiment 2, participants matched a reference with test stimuli of varying depths and types. When the depths of the test and reference tones were similar, the same type was usually selected, over the range of vibrato depths. For very disparate depths, matches were made by type only about 50% of the time. The study revealed good, fairly linear sensitivity to vibrato depth regardless of vibrato type, but also some poorly understood relations between the physical signal and the perception of TV, suggesting that more research is needed in TV perception.
Journal Article: Music Perception (2021) 38 (3): 293–312. Published: 01 February 2021
Abstract
Considerable evidence converges on the plasticity of attention and the possibility that it can be modulated through regular training. Music training, for instance, has been correlated with modulations of early perceptual and attentional processes. However, the extent to which music training can modulate mechanisms involved in processing information (i.e., perception and attention) is still widely unknown, particularly between sensory modalities. If training in one sensory modality can lead to concomitant enhancements in different sensory modalities, then this could be taken as evidence of a supramodal attentional system. Additionally, if trained musicians exhibit improved perceptual skills outside of the domain of music, this could be taken as evidence for the notion of far-transfer, where training in one domain can lead to improvements in another. To investigate this, we evaluated the effects of music training on simultaneity perception and temporal acuity in auditory, visual, and audio-visual conditions. Trained musicians showed significant enhancements for simultaneity perception in the visual modality, as well as generally improved temporal acuity, although not in all conditions. Visual cues directing attention influenced musicians’ simultaneity perception in visual discrimination and their temporal accuracy in auditory discrimination, suggesting that musicians have selective enhancements in temporal discrimination, arguably due to increased attentional efficiency when compared to nonmusicians. Implications for theory and future training studies are discussed.
Journal Article: Music Perception (2021) 38 (3): 313–330. Published: 01 February 2021
Abstract
Recently, Bowling, Purves, and Gill (2018a) found that individuals perceive chords with spectra resembling a harmonic series as more consonant. This is consistent with their vocal similarity hypothesis (VSH), the notion that the experience of consonance is based on an evolved preference for sounds that resemble human vocalizations. To rule out confounding between harmonicity and familiarity, we extended Bowling et al.’s (2018a) procedure to chords from the unconventional Bohlen-Pierce chromatic just (BPCJ) scale. We also assessed whether the association between harmonicity and consonance was moderated by timbre by presenting chords generated from either piano or clarinet samples. Results failed to straightforwardly replicate this association; however, evidence of a positive correlation between harmonicity and consonance did emerge across timbres following post hoc exclusion of chords containing intervals that were particularly similar to conventional equal-tempered dyads. Supplementary regression analyses using a more comprehensive measure of harmonicity confirmed its positive association with consonance ratings of BPCJ chords, yet also showed that spectral interference independently contributed to these ratings. In sum, our results are consistent with the VSH; however, they also suggest that a composite model, incorporating both harmonicity and spectral interference as predictors, would best account for variance in consonance judgments.
Includes: Supplementary data
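For intuition about what a harmonicity measure of this kind computes, the sketch below scores a chord's partials by how well they fit integer multiples of a candidate fundamental. This is a toy harmonic-template index of my own construction, assumed for illustration only; it is not the metric used by Bowling et al. (2018a) or in the supplementary analyses.

```python
# Toy harmonicity index via harmonic-template fitting. Illustrative sketch only,
# not the measure used in the study or its supplementary analyses.
import numpy as np

def harmonicity(partials_hz, f0_grid=np.arange(100.0, 250.0, 1.0), tol=0.01):
    """Best fraction of partials within tol (relative) of integer multiples
    of some candidate fundamental f0 drawn from f0_grid."""
    p = np.asarray(partials_hz, dtype=float)
    best = 0.0
    for f0 in f0_grid:
        r = p / f0
        dev = np.abs(r - np.round(r)) / np.maximum(np.round(r), 1.0)
        best = max(best, float(np.mean(dev < tol)))
    return best

# First three partials of each tone in a perfect fifth (220 + 330 Hz)...
fifth = [220, 440, 660, 330, 660, 990]
# ...and in a tritone-like dyad (220 + 311 Hz)
tritone = [220, 440, 660, 311, 622, 933]
print(harmonicity(fifth), harmonicity(tritone))  # fifth scores higher (1.0 vs 0.5)
```

The f0 search range is deliberately restricted to near the lowest tones; with very low candidate fundamentals, almost any set of partials fits some harmonic series.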
Journal Article: Music Perception (2020) 38 (2): 214–242. Published: 25 November 2020
Abstract
We combine perceptual research and acoustic analysis to probe the messy, pluralistic world of musical semantics, focusing on sound mass music. Composers and scholars describe sound mass with many semantic associations. We designed an experiment to evaluate to what extent these associations are experienced by other listeners. Thirty-eight participants heard 40 excerpts of sound mass music and related contemporary genres and rated them along batteries of semantic scales. Participants also described their rating strategies for some categories. A combination of qualitative stimulus analyses, Cronbach’s alpha tests, and principal component analyses suggest that cross-domain mappings between semantic categories and musical properties are statistically coherent between participants, implying non-arbitrary relations. Some aspects of participants’ descriptions of their rating strategies appear to be reflected in their numerical ratings. We sought quantitative bases for these associations in the acoustic signals. After attempts to correlate semantic ratings with classical audio descriptors failed, we pursued a neuromimetic representation called spectrotemporal modulations (STMs), which explains much more of the variance in semantic ratings. This result suggests that semantic interpretations of music may involve qualities or attributes that are objectively present in the music, since computer simulation can use sound signals to partially reconstruct human semantic ratings.
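As a rough illustration of the kind of representation involved, a generic modulation power spectrum can be computed as the 2D Fourier transform of a log-magnitude spectrogram. The neuromimetic STM model used in the study is more elaborate; the sketch below is only meant to convey the idea of jointly capturing spectral and temporal modulations.

```python
# Minimal sketch of an STM-style representation: the 2D Fourier transform of a
# log spectrogram (a modulation power spectrum). The study's neuromimetic STM
# model is more elaborate than this.
import numpy as np
from scipy.signal import stft

def modulation_spectrum(audio, sr, win_s=0.025, hop_s=0.010):
    """Return |2D FFT| of the log spectrogram: spectral x temporal modulations."""
    nperseg = int(win_s * sr)
    noverlap = nperseg - int(hop_s * sr)
    _, _, z = stft(audio, fs=sr, nperseg=nperseg, noverlap=noverlap)
    log_spec = np.log1p(np.abs(z))   # compressive amplitude scaling
    log_spec -= log_spec.mean()      # remove DC so it does not dominate the FFT
    return np.abs(np.fft.fftshift(np.fft.fft2(log_spec)))

# Example: one second of a 440 Hz tone with slow (4 Hz) amplitude modulation
sr = 16000
t = np.arange(sr) / sr
x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 440 * t)
mps = modulation_spectrum(x, sr)
print(mps.shape)  # (frequency bins, time frames); energy concentrates near 4 Hz
```

Semantic ratings could then be regressed on such features, which is the spirit of the analysis reported in the article.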
Journal Article: Music Perception (2020) 38 (2): 195–213. Published: 25 November 2020
Abstract
Duo musicians exhibit a broad variety of bodily gestures, but it is unclear how soloists’ and accompanists’ movements differ and to what extent they attract observers’ visual attention. In Experiment 1, seven musical duos’ body movements were tracked while they performed two pieces in two different conditions. In a congruent condition, soloist and accompanist behaved according to their expected musical roles; in an incongruent condition, the soloist behaved as accompanist and vice versa. Results revealed that behaving as soloist, regardless of the condition, led to more, smoother, and faster head and shoulder movements over a larger area than behaving as accompanist. Moreover, accompanists in the incongruent condition moved more than soloists in the congruent condition. In Experiment 2, observers watched videos of the duo performances with and without audio, while eye movements were tracked. Observers looked longer at musicians behaving as soloists compared to musicians behaving as accompanists, independent of their respective musical role. This suggests that visual attention was allocated to the most salient visuo-kinematic cues (i.e., expressive bodily gestures) rather than the most salient musical cues (i.e., the solo part). Findings are discussed regarding auditory-motor couplings and theories of motor control as well as auditory-visual integration and attention.
Journal Article: Music Perception (2020) 38 (2): 106–135. Published: 25 November 2020
Abstract
What factors influence listeners’ perception of meter in a musical piece or a musical style? Many cues are available in the musical “surface,” i.e., the pattern of sounds physically present during listening. Models of meter processing focus on the musical surface. However, percepts of meter and other musical features may also be shaped by reactivation of previously heard music, consistent with exemplar accounts of memory. The current study explores a phenomenon that is here termed metrical restoration: listeners who hear melodies with ambiguous meters report meter preferences that match previous listening experiences in the lab, suggesting reactivation of those experiences. Previous studies suggested that timbre and brief rhythmic patterns may influence metrical restoration. However, variations in the magnitude of effects in different experiments suggest that other factors are at work. Experiments reported here explore variation in metrical restoration as a function of melodic diversity in timbre and tempo, associations of rhythmic patterns with particular melodies and meters, and associations of meter with overall melodic form. Rhythmic patterns and overall melodic form, but not timbre, had strong influences. Results are discussed with respect to style-specific or culture-specific musical processing and everyday listening experiences. Implications for models of musical memory are also addressed.
Journal Article: Music Perception (2020) 38 (2): 136–194. Published: 25 November 2020
Abstract
Interpersonal musical entrainment, the temporal synchronization and coordination between individuals in musical contexts, is a ubiquitous phenomenon related to music’s social functions of promoting group bonding and cohesion. Mechanisms other than sensorimotor synchronization are rarely discussed, while little is known about cultural variability or about how and why entrainment has social effects. In order to close these gaps, we propose a new model that distinguishes between different components of interpersonal entrainment: sensorimotor synchronization, a largely automatic process manifested especially with rhythms based on periodicities in the 100–2000 ms timescale, and coordination, extending over longer timescales and more accessible to conscious control. We review the state of the art in measuring these processes, mostly from the perspective of action production, and in so doing present the first cross-cultural comparisons between interpersonal entrainment in natural musical performances, with an exploratory analysis that identifies factors that may influence interpersonal synchronization in music. Building on this analysis, we advance hypotheses regarding the relationship of these features to neurophysiological, social, and cultural processes. We propose a model encompassing both synchronization and coordination processes, the relationship between them, the role of culturally shared knowledge, and the connections between entrainment and social processes.
Includes: Supplementary data
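To make the synchronization component concrete, here is a minimal sketch, assuming one common way of quantifying synchronization between two performers: signed asynchronies between each onset of one performer and the nearest onset of the other. This is my illustration, not the measurement pipeline used in the article; the onset times are hypothetical.

```python
# Minimal sketch of one common synchronization measure: signed asynchronies
# between each onset of performer A and the nearest onset of performer B.
# Illustrative only; not the article's measurement pipeline.
import numpy as np

def pairwise_asynchronies(onsets_a, onsets_b):
    """Signed offset (seconds) from each of A's onsets to B's nearest onset."""
    a = np.asarray(onsets_a, dtype=float)
    b = np.asarray(onsets_b, dtype=float)
    nearest = np.abs(a[:, None] - b[None, :]).argmin(axis=1)  # index of B's nearest onset
    return a - b[nearest]

# Hypothetical onset times (seconds) for two performers playing at ~120 BPM
a = [0.000, 0.510, 1.020, 1.490]
b = [0.020, 0.500, 1.000, 1.520]
asyn = pairwise_asynchronies(a, b)
print("mean %.0f ms, SD %.0f ms" % (asyn.mean() * 1e3, asyn.std() * 1e3))
```

On this view, the mean asynchrony captures who tends to lead, while its spread captures the tightness of synchronization at the short (100–2000 ms) timescale; coordination over longer spans would require different measures.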
Journal Article: Music Perception (2020) 38 (1): 27–45. Published: 09 September 2020
Abstract
The parsing of undifferentiated tone sequences into groups of qualitatively distinct elements is one of the earliest rhythmic phenomena to have been investigated experimentally (Bolton, 1894). The present study aimed to replicate and extend these findings through online experimentation using a spontaneous grouping paradigm with forced-choice response (from 1 to 12 tones per group). Two types of isochronous sequences were used: equitone sequences, which varied only with respect to signal rate (200, 550, or 950 ms interonset intervals), and accented sequences, in which accents were added every two or three tones to test the effect of induced grouping (duple vs. triple) and accent type (intensity, duration, or pitch). In equitone sequences, participants’ grouping percepts (N = 4,194) were asymmetrical and tempo-dependent, with “no grouping” and groups of four being most frequently reported. In accented sequences, slower rate, induced triple grouping, and intensity accents correlated with increases in group length. Furthermore, the probability of observing a mixed metric type, that is, grouping percepts divisible by both two and three (6 and 12), was found to be highest in faster sequences with induced triple grouping. These findings suggest that lower-level triple grouping gives rise to binary grouping percepts at higher metrical levels.
Journal Article: Music Perception (2020) 38 (1): 46–65. Published: 09 September 2020
Abstract
Music often triggers a pleasurable urge in listeners to move their bodies in response to the rhythm. In music psychology, this experience is commonly referred to as groove. This study presents the Experience of Groove Questionnaire, a newly developed self-report questionnaire that enables respondents to subjectively assess how strongly they feel an urge to move and pleasure while listening to music. The development of the questionnaire was carried out in several stages: candidate questionnaire items were generated on the basis of the groove literature, and their suitability was judged by fifteen groove and rhythm research experts. Two listening experiments were carried out in order to reduce the number of items, to validate the instrument, and to estimate its reliability. The final questionnaire consists of two scales with three items each that reliably measure respondents’ urge to move (Cronbach’s α = .92) and their experience of pleasure (α = .97) while listening to music. The two scales are highly correlated (r = .80), which indicates a strong association between motor and emotional responses to music. The scales of the Experience of Groove Questionnaire can be applied independently in groove research and in a variety of other research contexts in which listeners’ subjective experience of music-induced movement and enjoyment needs to be addressed: for example, the study of the interaction between music and motivation in sports, and research on therapeutic applications of music in people with neurological movement disorders.
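For readers unfamiliar with the reliability statistic reported here, the sketch below shows how Cronbach's α is computed for a three-item scale. The data are simulated and the item structure is only schematic; the actual questionnaire items are not reproduced.

```python
# How Cronbach's alpha is computed for a k-item scale:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# Data below are simulated placeholders, not the questionnaire's real items.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(4.0, 1.2, size=(200, 1))          # respondents' true "urge to move"
items = latent + rng.normal(0.0, 0.6, size=(200, 3))  # three noisy 7-point items
items = np.clip(np.round(items), 1, 7)
print("alpha = %.2f" % cronbach_alpha(items))          # high internal consistency
```

Because all three items track the same latent tendency, their shared variance dominates and α approaches the high values reported for the questionnaire's scales.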
Journal Article: Music Perception (2020) 38 (1): 66–77. Published: 09 September 2020
Abstract
In studies of perceptual and neural processing differences between musicians and nonmusicians, participants are typically dichotomized on the basis of personal report of musical experience. The present study relates self-reported musical experience and objectively measured musical aptitude to a skill that is important in music perception: temporal resolution (or acuity). The Advanced Measures of Music Audiation (AMMA) test was used to objectively assess participant musical aptitude, and adaptive psychophysical measurements were obtained to assess temporal resolution on two tasks: within-channel gap detection and across-channel gap detection. Results suggest that musical aptitude measured with the AMMA and self-reported musical experience (duration of music instruction) are both related to temporal resolution ability in musicians. The relationship of musical aptitude and duration of music training to temporal resolution is important to music educators advocating for the benefits of music programs, as well as in behavioral and neurophysiological research.
Journal Article: Music Perception (2020) 38 (1): 1–26. Published: 09 September 2020
Abstract
This study reports on an experiment that tested whether drummers systematically manipulated not only onset but also duration and/or intensity of strokes in order to achieve different timing styles. Twenty-two professional drummers performed two patterns (a simple “back-beat” and a complex variation) on a drum kit (hi-hat, snare, kick) in three different timing styles (laid-back, pushed, on-beat), in tandem with two timing references (metronome and instrumental backing track). As expected, onset location corresponded to the instructed timing styles for all instruments. The instrumental reference led to more pronounced timing profiles than the metronome (pushed strokes earlier, laid-back strokes later). Also, overall the metronome reference led to earlier mean onsets than the instrumental reference, possibly related to the “negative mean asynchrony” phenomenon. Regarding sound, results revealed systematic differences across participants in the duration (snare) and intensity (snare and hi-hat) of strokes played using the different timing styles. Pattern also had an impact: drummers generally played the rhythmically more complex pattern 2 louder than the simpler pattern 1 (snare and kick). Overall, our results lend further evidence to the hypothesis that both temporal and sound-related features contribute to the indication of the timing of a rhythmic event in groove-based performance.
Journal Article: Music Perception (2020) 38 (1): 78–98. Published: 09 September 2020
Abstract
Consonance and dissonance are basic phenomena in the perception of chords that can be discriminated very early in sensory processing. Musical expertise has been shown to facilitate neural processing of various musical stimuli, but it is unclear whether this applies to detecting consonance and dissonance. Our study aimed to determine whether sensitivity to increasing levels of dissonance differs between musicians and nonmusicians, using a combination of neural (electroencephalographic mismatch negativity, MMN) and behavioral measurements (conscious discrimination). Furthermore, we wanted to see if focusing attention on the sounds modulated the neural processing. We used chords comprised of either highly consonant or highly dissonant intervals and further manipulated the degree of dissonance to create two levels of dissonant chords. Both groups discriminated dissonant chords from consonant ones neurally and behaviorally. The magnitude of the MMN differed only marginally between the more dissonant and the less dissonant chords. The musicians outperformed the nonmusicians in the behavioral task. As the dissonant chords elicited MMN responses in both groups, sensory dissonance seems to be discriminated at an early sensory level, irrespective of musical expertise, and the facilitating effects of musicianship for this discrimination may arise in later stages of auditory processing, appearing only in the behavioral auditory task.
Journal Article: Music Perception (2020) 37 (5): 392–402. Published: 10 June 2020
Abstract
The study of music-evoked autobiographical memories (MEAMs) has grown substantially in recent years. Prior work has used various methods to compare MEAMs to memories evoked by other cues (e.g., images, words). Here, we sought to identify which methods could distinguish between MEAMs and picture-evoked memories. Participants (N = 18) listened to popular music and viewed pictures of famous persons, and described any autobiographical memories evoked by the stimuli. Memories were scored using the Autobiographical Interview (AI; Levine, Svoboda, Hay, Winocur, & Moscovitch, 2002), Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2015), and Evaluative Lexicon (EL; Rocklage & Fazio, 2018). We trained three logistic regression models (one for each scoring method) to differentiate between memories evoked by music and faces. Models trained on LIWC and AI data exhibited significantly above-chance accuracy when classifying whether a memory was evoked by a face or a song. The EL, which focuses on the affective nature of a text, failed to predict whether memories were evoked by music or faces. This demonstrates that various memory scoring techniques provide complementary information about cued autobiographical memories, and suggests that MEAMs differ from memories evoked by pictures in some aspects (e.g., perceptual and episodic content) but not others (e.g., emotional content).
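As a schematic of the classification analysis described, the sketch below fits a logistic regression predicting cue type from per-memory text features, evaluated with cross-validation. The data, feature names, and effect sizes are simulated placeholders of my own, not the study's AI/LIWC variables or results.

```python
# Schematic of the classification analysis: logistic regression predicting cue
# type (music vs. face) from per-memory text features, with cross-validation.
# All data and feature structure here are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
y = rng.integers(0, 2, size=n)   # 0 = face-cued, 1 = music-cued (simulated labels)

# Three hypothetical per-memory features (e.g., episodic-detail, perceptual,
# and affect scores); the first two are given a small association with cue
# type, the third none, mimicking informative vs. uninformative scorings.
X = rng.normal(size=(n, 3))
X[:, 0] += 0.6 * y
X[:, 1] += 0.4 * y

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print("CV accuracy: %.2f (chance = .50)" % acc.mean())
```

Above-chance cross-validated accuracy is the criterion at issue in the abstract: features that carry information about the cue type (as the AI and LIWC scorings did) classify above .50, while uninformative features (as with the EL) do not.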