An objective indicator of musical imagery is developed that involves tracking the up and down movements of the tonal contour of an imagined musical phrase or tune. In two experiments, college students' imagery of music was examined. In both experiments, subjects learned musical phrases with words (songs) and without words (melodies). They then indicated the tonal contour as rapidly as possible. In Experiment 1, the primary issue was whether musical imagery (as distinct from kinesthetic or visual imagery) drew on the same representation as overt song. Subjects processed the phrases by using either an imaginal or an overtly sung representation. No difference in processing time was found between the imaginal and overt modes of representation, consistent with a common representation. A second issue was "tonal primacy," the priority of tonal coding over verbal or word coding in musical phrases; in fact, songs (with words) were processed as well as or better than melodies (without words), so no evidence favoring tonal primacy was found. In Experiment 2, the issues examined were possible kinesthetic or visual image coding of pitch representation and possible sharing of tonal and verbal generation processes between musical imagery and auditory imagery. Spoken responses for classifying tonal relations took longer than written responses, indicating that kinesthetic and visual image coding was unlikely and that the pitch generation of musical imagery shared resources with a more general auditory imagery.