This paper explores perceived and experienced emotions elicited by computer-generated music. In the experiments, 30 participants listened to 20 excerpts. Each excerpt lasted about 16 seconds and was generated in real time by purpose-built software. Measurements were performed using both categorical (a free verbal description) and dimensional approaches. The relationship between structural factors of music (mode, tempo, pitch height, rhythm, articulation, and presence of dissonance) and emotions was examined. Personal characteristics of the listeners, gender and music training, were also taken into account. The relationship between structural factors and perceived emotions was largely congruent with predictions derived from the literature, and the relationship between those factors and experienced emotions was very similar. Tempo and pitch height – cues common to music and speech – turned out to have a strong influence on the evaluation of emotion. Personal factors had a marginal effect. For verbal categories comparable with the dimensional model, a strong correspondence was found.
