Voice leading is a common task in Western music composition whose conventions are consistent with fundamental principles of auditory perception. Here we introduce a computational cognitive model of voice leading, intended both for analyzing voice-leading practices within encoded musical corpora and for generating new voice leadings for unseen chord sequences. This model is feature-based, quantifying the desirability of a given voice leading on the basis of different features derived from Huron’s (2001) perceptual account of voice leading. We use the model to analyze a corpus of 370 chorale harmonizations by J. S. Bach, and demonstrate the model’s application to the voicing of harmonic progressions in different musical genres. The model is implemented in a new R package, “voicer,” which we release alongside this paper.
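To make the abstract's notion of a feature-based model concrete, here is a minimal sketch of how candidate voicings of a chord might be scored by weighted penalty features and the best one selected. Everything here is illustrative: the feature definitions (voice-leading distance, parallel perfect intervals), the weights, and all function names are assumptions for exposition, not the paper's actual feature set or the API of the "voicer" R package, and Python is used rather than R for brevity.

```python
# Hypothetical feature-based voice-leading scorer (illustrative only).
# Pitches are MIDI note numbers; a voicing is a list, one pitch per voice.

from itertools import product


def voice_leading_distance(chord_a, chord_b):
    """Sum of absolute semitone movements between corresponding voices."""
    return sum(abs(a - b) for a, b in zip(chord_a, chord_b))


def has_parallel_perfects(chord_a, chord_b):
    """True if any pair of voices moves in parallel perfect fifths/octaves."""
    n = len(chord_a)
    for i in range(n):
        for j in range(i + 1, n):
            ivl_a = abs(chord_a[i] - chord_a[j]) % 12
            ivl_b = abs(chord_b[i] - chord_b[j]) % 12
            moved = chord_a[i] != chord_b[i] or chord_a[j] != chord_b[j]
            if moved and ivl_a == ivl_b and ivl_a in (0, 7):
                return True
    return False


def score(prev_voicing, candidate, w_dist=1.0, w_parallel=10.0):
    """Weighted penalty model: lower score = more desirable voice leading."""
    penalty = w_dist * voice_leading_distance(prev_voicing, candidate)
    if has_parallel_perfects(prev_voicing, candidate):
        penalty += w_parallel
    return penalty


def best_voicing(prev_voicing, pitch_class_set, low=48, high=84):
    """Enumerate voicings of the next chord's pitch classes (one voice per
    pitch class, within a MIDI range) and return the lowest-penalty one."""
    options = [
        [pc + 12 * octv
         for octv in range(low // 12, high // 12 + 1)
         if low <= pc + 12 * octv <= high]
        for pc in pitch_class_set
    ]
    return min(product(*options),
               key=lambda c: score(prev_voicing, list(c)))


# Example: voicing an F-major triad (pitch classes F=5, A=9, C=0)
# after a close-position C-major triad.
print(best_voicing([60, 64, 67], [5, 9, 0]))
```

In this toy version, moving every voice up a fourth would minimize total movement but incurs the parallel-fifths penalty, so the scorer prefers a contrary-motion voicing instead; the real model weighs many more perceptually motivated features and fits their weights to corpus data.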
Research Article | February 01 2020
A Computational Cognitive Model for the Analysis and Generation of Voice Leadings
Peter M. C. Harrison,
Queen Mary University of London, London, United Kingdom
Correspondence: Peter M. C. Harrison, Max-Planck-Institut für empirische Ästhetik, Grüneburgweg 14, 60322 Frankfurt am Main. E-mail: peter.harrison@ae.mpg.de
Music Perception (2020) 37 (3): 208–224.
Article history
Received: April 01 2019
Accepted: September 27 2019
Citation
Peter M. C. Harrison, Marcus T. Pearce; A Computational Cognitive Model for the Analysis and Generation of Voice Leadings. Music Perception 1 February 2020; 37 (3): 208–224. doi: https://doi.org/10.1525/mp.2020.37.3.208