One reason why music features temporal regularities is that they elicit expectancies about when an event will occur, focusing a listener's attention around certain points in time. Evidence comes from phoneme monitoring tasks (using reaction times, J. G. Martin, 1979) and pitch and time judgment tasks (using accuracy measures, M. R. Jones, H. Moynihan, N. MacKenzie, & J. Puente, 2002; E. W. Large & M. R. Jones, 1999). Reaction times were faster and accuracy was higher for rhythmically expected elements than for unexpected elements. By contrast, A. Penel and M. R. Jones (2004) recently reported the opposite finding: faster reaction times for rhythmically unexpected tones, which they labeled a temporal capture effect. The present research examines expectancy versus capture phenomena by using a speeded detection task in which listeners must respond to a lower pitched target located within monotone and isochronous sequences. One interonset interval was shortened or lengthened independently of the target's position. Temporal irregularities tended to trigger false alarms, suggesting capture effects. Patterns of reaction times showed expectancy effects when the temporally perturbed event preceded the target, but these effects seemed to decrease with time in the sequence. When the target itself was temporally perturbed, some capture was observed, but only when the target came early in the sequence. We conclude that Martin's (1979) expectancy effects in phoneme monitoring were coarticulatory rather than rhythmical.
We investigate how the presence of performance microstructure (small variations in timing, intensity, and articulation) influences listeners' perception of musical excerpts, by measuring the way in which listeners synchronize with the excerpts. Musicians and nonmusicians tapped on a drum in synchrony with six musical excerpts, each presented in three versions: mechanical (synthesized from the score, without microstructure), accented (mechanical, with intensity accents), and expressive (performed by a concert pianist, with all types of microstructure). Participants' synchronizations with these excerpts were characterized in terms of three processes described in Mari Riess Jones's Dynamic Attending Theory: attunement (ease of synchronization), use of a referent level (spontaneous synchronization rate), and focal attending (range of synchronization levels). As predicted by beat induction models, synchronization was better with the temporally regular mechanical and accented versions than with the expressive versions. However, synchronization with expressive versions occurred at higher (slower) levels, within a narrower range of synchronization levels, and corresponded more frequently to the theoretically correct metrical hierarchy. We conclude that performance microstructure transmits a particular metrical interpretation to the listener and enables the perceptual organization of events over longer time spans. Compared with nonmusicians, musicians synchronized more accurately (heightened attunement), tapped more slowly (slower referent level), and used a wider range of hierarchical levels when instructed (enhanced focal attending), more often corresponding to the theoretically correct metrical hierarchy. We conclude that musicians perceptually organize events over longer time spans and have a more complete hierarchical representation of the music than do nonmusicians.