We explore the effects of trained musical movements on sensorimotor interactions in order to clarify the interpretation of previously observed expertise differences. Pianists and non-pianists listened to an auditory sequence and identified whether the final event occurred in time with the sequence. In half the trials participants listened without moving, and in half they synchronized keystrokes while listening. Both pianists and non-pianists identified the timing of the final tone more accurately after synchronizing keystrokes than after listening only. Curiously, this effect of movement did not differ between pianists and non-pianists, despite substantial differences in their training of finger movements. We also found few group differences in the ability to align keystrokes with events in the auditory sequence; however, movements were less variable (lower coefficient of variation) in pianists than in non-pianists. These results are consistent with the idea that the benefits of synchronization for rhythm perception are constrained by motor effector kinematics, and they help clarify previous findings in this paradigm. We discuss these outcomes in light of training and the kinematics of pianist keystrokes compared to the synchronization movements made by musicians in other studies. We also outline how these differences in motor effector kinematics and training must be accounted for in models of perception and action.
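For reference, a minimal formulation of the variability measure above, under the assumed (not stated here) standard convention that it is computed over inter-keystroke intervals (ITIs):

    CV = \sigma_{ITI} / \mu_{ITI}

Dividing the standard deviation of the intervals by their mean normalizes timing variability by tempo, so a lower CV reflects steadier keystrokes rather than merely a slower keystroke rate.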
Composers convey emotion through music by co-varying structural cues. Although this complex interplay provides a rich listening experience, it creates challenges for understanding the contributions of individual cues. Here we investigate how three specific cues (attack rate, mode, and pitch height) work together to convey emotion in Bach's Well-Tempered Clavier (WTC). In three experiments, we explore responses to (1) eight-measure excerpts and (2) musically “resolved” excerpts, and (3) investigate the role of different standard dimensional scales of emotion. In each experiment, thirty nonmusician participants rated perceived emotion along scales of valence and intensity (Experiments 1 & 2) or valence and arousal (Experiment 3) for 48 pieces in the WTC. Responses indicate listeners used attack rate, mode, and pitch height to make judgments of valence, but only attack rate for intensity/arousal. Commonality analyses revealed mode predicted the most variance in valence ratings, followed by attack rate, with pitch height contributing minimally. In Experiment 2, mode increased in predictive power compared to Experiment 1. In Experiment 3, using “arousal” instead of “intensity” yielded results similar to Experiment 1. We discuss how these results complement and extend previous findings from studies with tightly controlled stimuli, providing additional perspective on complex issues of interpersonal communication.
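To make the variance-partitioning step concrete, below is a minimal sketch of three-predictor commonality analysis of the kind reported above, run on synthetic data. The predictor names, effect sizes, and scikit-learn usage are illustrative assumptions, not the authors' analysis code.

    # Sketch: unique R-squared contributions of three cues to valence ratings
    from itertools import combinations

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 48  # one rating per WTC piece, purely for illustration

    # Hypothetical predictors: attack rate, mode (0 = minor, 1 = major), pitch height
    X = np.column_stack([
        rng.normal(size=n),          # attack_rate
        rng.integers(0, 2, size=n),  # mode
        rng.normal(size=n),          # pitch_height
    ])
    names = ["attack_rate", "mode", "pitch_height"]
    # Synthetic ratings in which mode carries the most weight (an assumption)
    y = 0.3 * X[:, 0] + 0.8 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=n)

    # R-squared for every non-empty subset of predictors
    def r2(cols):
        cols = list(cols)
        return LinearRegression().fit(X[:, cols], y).score(X[:, cols], y)

    R = {cols: r2(cols) for k in (1, 2, 3) for cols in combinations(range(3), k)}

    # Unique contribution of each predictor: the drop in R-squared when it
    # alone is removed from the full model
    full = R[(0, 1, 2)]
    for i in range(3):
        others = tuple(j for j in range(3) if j != i)
        print(f"unique({names[i]}): {full - R[others]:.3f}")

The remaining commonality components (variance shared by pairs of cues and by all three) follow by inclusion-exclusion over the same subset R-squared values.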
Recent work from our lab illustrates amplitude envelope’s crucial role in both perceptual (Schutz, 2009) and cognitive (Schutz & Stefanucci, 2010) processing. Consequently, we surveyed the amplitude envelopes of sounds used in Music Perception, categorizing them as flat (i.e., trapezoidal in shape), percussive (also known as “damped” or “decaying”), other, or undefined. Curiously, the undefined category represented the largest percentage of sounds observed, with 35% lacking definition of this important property (approximately 27% were percussive, 27% flat, and 11% other). This omission of relevant information was not indicative of general inattention to methodological detail: studies using tones with undefined amplitude envelopes defined other properties at high rates, such as spectral structure (85%), duration (80%), and even the model of headphones/speakers used (65%). This targeted omission is therefore intriguing, and suggests amplitude envelope is an area ripe for future research.
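As a concrete illustration of the two dominant categories above, the following minimal sketch synthesizes a flat (trapezoidal) and a percussive (exponentially decaying) envelope. The duration, ramp times, and decay constant are our own illustrative assumptions, not values from the surveyed studies.

    # Sketch: flat vs. percussive amplitude envelopes applied to one carrier
    import numpy as np

    fs = 44100                                   # sample rate (Hz)
    dur = 0.5                                    # 500 ms tone (assumed)
    t = np.linspace(0, dur, int(fs * dur), endpoint=False)

    # Flat (trapezoidal) envelope: 10 ms linear onset/offset ramps, sustained plateau
    ramp = int(fs * 0.01)
    flat = np.ones_like(t)
    flat[:ramp] = np.linspace(0, 1, ramp)
    flat[-ramp:] = np.linspace(1, 0, ramp)

    # Percussive ("damped"/"decaying") envelope: immediate peak, exponential decay
    percussive = np.exp(-t / 0.08)               # 80 ms decay constant (assumed)

    # Applying each envelope to the same 440 Hz carrier yields two tones that
    # differ only in this one property
    carrier = np.sin(2 * np.pi * 440 * t)
    flat_tone, percussive_tone = flat * carrier, percussive * carrier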