THE ACCURACY WITH WHICH INDIVIDUALS ARE ABLE to synchronize with each other using vision alone is well documented. Less attention, however, has been given to the spatio-temporal characteristics of human movement that offer cues for such synchronization. The present study investigated such cues in the context of conductor-musician synchronization. Twenty-four participants tapped in time with dynamic point-light representations of traditional conducting gestures, in which the clarity of the beat and the overall tempo were manipulated. A series of nine linear regression analyses identified absolute acceleration along the trajectory as the main cue for synchronization, while beat clarity and tempo influenced the weights of the variables in the emergent models.
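The key kinematic predictor named above, absolute acceleration along the trajectory, can be estimated from sampled point-light coordinates by twice differentiating position and taking the magnitude. The sketch below is an illustration of that computation only (the function name, sampling rate, and circular test trajectory are our own assumptions, not the study's materials):

```python
import numpy as np

def absolute_acceleration(points: np.ndarray, dt: float) -> np.ndarray:
    """Finite-difference estimate of the unsigned acceleration magnitude
    along a sampled 2-D trajectory.

    points: array of shape (n, 2) -- x/y positions of the point light
    dt:     sampling interval in seconds
    """
    velocity = np.gradient(points, dt, axis=0)        # (n, 2) velocity vectors
    acceleration = np.gradient(velocity, dt, axis=0)  # (n, 2) acceleration vectors
    return np.linalg.norm(acceleration, axis=1)       # magnitude at each sample

# Sanity check on uniform circular motion (radius 1 m, one cycle per second),
# where the true acceleration magnitude is constant at (2*pi)**2 m/s^2.
t = np.arange(0, 1, 0.01)
traj = np.column_stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
acc = absolute_acceleration(traj, dt=0.01)
```

Away from the endpoints (where `np.gradient` falls back to one-sided differences), the interior estimates agree with the analytic value to well under one percent at this sampling rate.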
This study demonstrates a comprehensive method for linking expert musicians' interpretive choices and associated performances to listeners' perceptions of emotionality. In Phase 1 of the study, 10 expert pianists recorded their prepared interpretations of a highly emotional piece of music (F. Chopin's Prelude op. 28, no. 4). They were also interviewed about their deliberate interpretive choices. In Phase 2, 28 musicians listened to the interpretations and provided postperformance ratings of expressivity and other performance aspects. During listening, subjects moved a mouse pointer on a continuous response computer interface, rating the moment-to-moment (concurrent) level of perceived emotionality. The correlation between postperformance ratings of expressivity and mean concurrent ratings was moderate (.50). In general, musical structure and the trajectory (trace) of concurrent emotionality ratings corresponded strongly. Statistically reliable trace divergences between individual performances and the grand mean performance demonstrated systematic relationships between emotionality ratings and performance data (loudness, timing). Increases in emotionality appear to be caused by specific local deviations from the performance characteristics of an average performance. Interpretive choices clustered at musical phrase boundaries. Many of the analyzed divergences were reflected in performers' interpretive intentions as revealed in interview data.
The fingerings used by keyboard players are determined by a range of ergonomic (anatomic/motor), cognitive, and music-interpretive constraints. We have attempted to encapsulate the most important ergonomic constraints in a model. The model, which is presently limited to isolated melodic fragments, begins by generating all possible fingerings, limited only by maximum practical spans between finger pairs. Many of the fingerings generated in this way seldom occur in piano performance. In the next stage of the model, the difficulty of each fingering is estimated according to a system of rules. Each rule represents a specific ergonomic source of difficulty. The model was subjected to a preliminary test by comparing its output with fingerings written by pianists on the scores of a selection of short Czerny studies. Most fingerings recommended by pianists were among those fingerings predicted by the model to be least difficult; but the model also predicted numerous fingerings that were not recommended by pianists. A variety of suggestions for improving the predictive power of the model are explored.
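The two stages of the model described above — exhaustive generation constrained only by maximum practical spans, followed by rule-based difficulty scoring — can be sketched in miniature. The span table, the two penalty rules, and their weights below are simplified placeholders invented for illustration, not the study's actual parameters:

```python
from itertools import product

# Hypothetical maximum comfortable spans (in semitones) for right-hand
# finger pairs; fingers are numbered 1 (thumb) through 5.
MAX_SPAN = {
    (1, 2): 10, (1, 3): 12, (1, 4): 14, (1, 5): 15,
    (2, 3): 4,  (2, 4): 6,  (2, 5): 8,
    (3, 4): 3,  (3, 5): 6,
    (4, 5): 3,
}

def span_ok(f1, f2, interval):
    """Is the pitch interval between successive notes within the practical
    span of the chosen finger pair? (Thumb crossing is ignored here.)"""
    if f1 == f2:
        return interval == 0  # same finger only on a repeated note (simplification)
    pair = (min(f1, f2), max(f1, f2))
    return abs(interval) <= MAX_SPAN[pair]

def generate_fingerings(pitches):
    """Stage 1: enumerate every finger assignment the span limits allow."""
    n = len(pitches)
    for fingers in product(range(1, 6), repeat=n):
        if all(span_ok(fingers[i], fingers[i + 1], pitches[i + 1] - pitches[i])
               for i in range(n - 1)):
            yield fingers

def difficulty(fingers, pitches):
    """Stage 2: sum rule-based penalties, each rule standing for one
    ergonomic source of difficulty (weights are invented)."""
    score = 0
    for i in range(len(fingers) - 1):
        stretch = abs(pitches[i + 1] - pitches[i])
        f1, f2 = fingers[i], fingers[i + 1]
        if 4 in (f1, f2):                        # rule: weak fourth finger
            score += 1
        if f1 != f2:
            pair = (min(f1, f2), max(f1, f2))
            if stretch > MAX_SPAN[pair] - 2:     # rule: near-maximal stretch
                score += 2
    return score

# Rank all admissible fingerings for a short ascending fragment
# (MIDI pitches C4 D4 E4 F4 G4), easiest first.
pitches = [60, 62, 64, 65, 67]
ranked = sorted(generate_fingerings(pitches), key=lambda f: difficulty(f, pitches))
```

The comparison reported in the abstract then amounts to checking whether pianists' written fingerings fall near the low-difficulty end of such a ranking.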
This article reports a study on a musical idiot savant (NP) who is capable of memorizing large-scale pieces of piano music in three or four hearings. Attempts to memorize two contrasting pieces are documented, one a tonal composition by Grieg, the other an atonal piece by Bartók. The results are compared with those provided by a professional pianist. Transcription of the reproductions shows that NP's ability is confined to tonal music and is structurally based. In this respect, it resembles the performance of high-IQ memorizers and supports the view that general intelligence is not a prerequisite for structure-based skill.
Studies of music reading are reviewed with respect to two principal questions: (1) What differences are there between the reading processes of good and poor readers? and (2) To what extent is musical knowledge implicated in reading for performance? The evidence reviewed shows (1) a typical "skill effect" such that better readers have better visual memories for notation and show more sensitivity to structural configurations in the stimuli and (2) that much of what is read is analyzed for musical significance prior to the formulation of motor commands for response. Music reading is in this respect, despite its atypical input modality, a true species of music perception.