Music both conveys and evokes emotions, and although both phenomena are widely studied, the difference between them is often neglected. The purpose of this study is to examine the difference between perceived and induced emotion for Western popular music using both categorical and dimensional models of emotion, and to examine the influence of individual listener differences on emotion judgments. A total of 80 musical excerpts were randomly selected from an established dataset of 2,904 popular songs, each tagged with one of the four words "happy," "sad," "angry," or "relaxed" on the Last.FM web site. Participants listened to the excerpts and rated perceived and induced emotion on both the categorical and dimensional models, and the reliability of the emotion tags was evaluated according to participants' agreement with the corresponding labels. In addition, the Goldsmiths Musical Sophistication Index (Gold-MSI) was used to assess participants' musical expertise and engagement. As expected, regardless of the emotion model used, the emotions evoked by the music were similar to the emotional qualities perceived in it. Moreover, the emotion tags predicted listeners' emotion judgments. However, age, gender, and three Gold-MSI factors (importance, emotions, and musical training) were found to predict neither listeners' responses nor their agreement with the tags.