Groove is the sensation of movement, or of wanting to move, that we experience when listening to certain types of music; it is central to the appreciation of many styles, such as Jazz, Funk, and Latin music. To better understand the mechanisms that lead to the sensation of groove, we explore its relationship with systematic microtiming deviations. Manifested as small, intentional deviations in timing, systematic microtiming is widely considered within the music community to be a critical component of performances that groove. To investigate its effect on the perception of groove, we synthesized typical rhythm patterns for Jazz, Funk, and Samba and combined them with idiomatic microtiming deviation patterns for each style. The magnitude of the deviations was varied parametrically from zero to about double the natural level. In two experiments, untrained and expert listeners heard all combinations of rhythm pattern, microtiming style (matched or mismatched), and deviation magnitude, and rated liking, groove, naturalness, and speed. Contrary to a belief frequently expressed in the literature, systematic microtiming led to decreased ratings of groove, as well as of liking and naturalness, with the exception of the simple short-long shuffle pattern in Jazz. A comparison of ratings between the two listener groups revealed this effect to be stronger for the expert listeners than for the untrained listeners, suggesting that musical expertise plays an important role in the perception and appreciation of microtiming in rhythmic patterns.
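The parametric manipulation described above can be sketched in a few lines: a nominal (quantized) onset grid is shifted by an idiomatic deviation pattern scaled by a magnitude factor. The function name, the 120 BPM grid, and the 20 ms off-beat delay below are illustrative assumptions, not the stimuli actually used in the experiments.

```python
import numpy as np

def apply_microtiming(nominal_onsets, deviation_pattern, magnitude):
    """Shift nominal onset times (s) by a scaled deviation pattern (s).

    magnitude = 0.0 removes all systematic microtiming (fully quantized),
    magnitude = 1.0 reproduces the natural deviation level, and
    magnitude = 2.0 roughly doubles it.
    """
    nominal = np.asarray(nominal_onsets, dtype=float)
    deviations = np.asarray(deviation_pattern, dtype=float)
    return nominal + magnitude * deviations

# Hypothetical shuffle-like pattern: eight eighth notes at 120 BPM,
# with every second (off-beat) onset delayed by 20 ms.
nominal = np.arange(8) * 0.25
deviations = np.tile([0.0, 0.02], 4)
shifted = apply_microtiming(nominal, deviations, magnitude=1.5)
```

Setting `magnitude` to values between 0 and about 2 yields the kind of stimulus continuum, from deadpan to exaggerated microtiming, that the listeners rated.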
This article addresses the question of which acoustic features are best suited for identifying beats computationally in acoustic music pieces. We consider many different features computed over consecutive short portions of the acoustic signal, including those currently promoted in the literature on beat induction from acoustic signals as well as several original features not previously described in that literature. Feature sets are evaluated on their ability to provide reliable cues to the localization of beats, using a machine-learning methodology applied to a large corpus of beat-annotated music pieces in audio format covering distinct music categories. Confirming common knowledge, energy is shown to be a highly relevant cue to beat induction, especially the temporal variation of energy in various frequency bands, with the bands below 500 Hz and above 5 kHz being particularly informative. Some of the new features proposed in this paper are shown to outperform features currently promoted in the literature on beat induction from acoustic signals. We finally hypothesize that modeling beat induction may involve many different, complementary acoustic features, and that the process of selecting relevant features should depend partly on the acoustic properties of the particular signal under consideration.
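A minimal sketch of the kind of feature the abstract highlights, temporal variation of energy in a frequency band, is shown below. The function name, frame/hop sizes, and band edges are illustrative assumptions; the paper's actual feature definitions may differ. The idea is simply to compute short-time energy restricted to one band and take its half-wave rectified frame-to-frame difference, whose peaks tend to align with beat-level events.

```python
import numpy as np

def band_energy_flux(signal, sr, band=(0.0, 500.0), frame=1024, hop=512):
    """Half-wave rectified frame-to-frame energy change in one band (Hz)."""
    n_frames = 1 + (len(signal) - frame) // hop
    freqs = np.fft.rfftfreq(frame, d=1.0 / sr)
    mask = (freqs >= band[0]) & (freqs < band[1])  # bins inside the band
    window = np.hanning(frame)
    energies = np.empty(n_frames)
    for i in range(n_frames):
        spec = np.fft.rfft(signal[i * hop : i * hop + frame] * window)
        energies[i] = np.sum(np.abs(spec[mask]) ** 2)
    flux = np.diff(energies)
    return np.maximum(flux, 0.0)  # keep only energy increases (onset-like)

# Usage: a low-frequency burst in an otherwise silent signal
sr = 8000
sig = np.zeros(sr)
sig[4000:4400] = np.sin(2 * np.pi * 100 * np.arange(400) / sr)
flux = band_energy_flux(sig, sr)
```

Running such a detector with several band choices (e.g. below 500 Hz versus above 5 kHz) gives complementary cue sequences, which is consistent with the hypothesis that feature selection should adapt to the signal at hand.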