In a recent article, Bonin, Trainor, Belyk, and Andrews (2016) proposed a novel way in which basic processes of auditory perception may influence affective responses to music. According to their source dilemma hypothesis (SDH), the relative fluency of a particular aspect of musical processing—the parsing of the music into distinct audio streams—is hedonically marked: Efficient stream segregation elicits pleasant affective experience, whereas inefficient segregation results in unpleasant affective experience, thereby contributing to (dis)preference for a musical stimulus. Bonin et al. (2016) conducted two experiments, the results of which were ostensibly consistent with the SDH. However, their research designs introduced major confounds that undermined the ability of these initial studies to offer unequivocal evidence for their hypothesis. To address this, we conducted a large-scale (N = 311) constructive replication of Bonin et al. (2016; Experiment 2), significantly modifying the design to rectify these methodological shortfalls and thereby better assess the validity of the SDH. Results successfully replicated those of Bonin et al. (2016), although they indicated that source dilemma effects on music preference may be more modest than their original findings would suggest. Unresolved issues and directions for future investigation of the SDH are discussed.
In most Western music, notes in a melody relate not only to each other, but also to a “key”—a tonal center combined with an associated scale. Music is often classified as in a major or minor key, but within a scale that defines a major key, emphasizing different notes as the tonic yields different “modes.” Thus, within a set of notes, changing the tonal center changes the putative role of any given note. In this experiment, we eliminated all structural cues to the tonic within a melody by presenting notes randomly selected from the C major scale. A “mode” was established by a continuous drone note lower than the melody. Subjects rated mood (happy versus sad) and tension of each pseudo-melody. Consistent with Temperley and Tan (2013)—in which multiple structural cues were present—different modes produced reliable differences in judged mood and tension. Notably, modes with a major 3rd from the tonic (Ionian, Lydian, Mixolydian) were perceived as happier and less tense than modes with a minor 3rd (Dorian, Aeolian, Phrygian, Locrian). The results confirm that the perception of notes in a melody, and their consequent emotional connotation, depend at least in part on their relationship to a tonal center.
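The grouping of modes by the quality of their third follows directly from the interval structure of the C major scale. A minimal sketch (Python; not part of the original study, the degree ordering and semitone offsets are standard music theory) that computes the size of the third above each modal tonic:

```python
# Semitone offsets of the C major scale degrees (C D E F G A B).
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]
# Modes in order of their starting scale degree within C major.
MODES = ["Ionian", "Dorian", "Phrygian", "Lydian", "Mixolydian", "Aeolian", "Locrian"]

def third_size(degree: int) -> int:
    """Semitones from the modal tonic (scale degree `degree`) up to its diatonic third."""
    tonic = C_MAJOR[degree]
    # Wrap around the octave when the third falls past B.
    third = C_MAJOR[(degree + 2) % 7] + (12 if degree + 2 >= 7 else 0)
    return third - tonic

for deg, name in enumerate(MODES):
    quality = "major" if third_size(deg) == 4 else "minor"
    print(f"{name:10s} 3rd: {third_size(deg)} semitones ({quality})")
```

Running this confirms the split reported above: Ionian, Lydian, and Mixolydian have a 4-semitone (major) third, while Dorian, Phrygian, Aeolian, and Locrian have a 3-semitone (minor) third.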
Prior research has amply documented that happy music tends to be faster, louder, higher in average pitch, more variable in pitch, and more staccato in articulation, whereas sad music tends to be slower, lower, less variable, and more legato in articulation. However, the bulk of existing studies are either correlational or allow these expressive cues to covary freely, thereby making it difficult to confirm the causal influence of a given cue. To help address this gap, we experimentally assessed whether the average height (F0) of a pitch gamut independently impacts the perceived emotional expression of melodies derived from the gamut. Study participants rated the perceived happiness/sadness of a set of isochronous and semi-random tone sequences derived from the Bohlen-Pierce scale, an unconventional scale based on pitch intervals that do not appear in common practice music. Results were consistent with the notion that higher average pitch height communicates happiness and/or that lower pitch height communicates sadness. Moreover, they suggested that the effect is (1) sufficiently robust to be detected using rudimentary melodies based on an unconventional musical scale, and (2) independent of interval size.
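For readers unfamiliar with the Bohlen-Pierce scale mentioned above: its equal-tempered variant divides the 3:1 frequency ratio (the "tritave," rather than the 2:1 octave) into 13 equal steps, which is why its intervals do not coincide with those of common practice music. A minimal sketch (Python; the reference frequency of 220 Hz is an arbitrary choice for illustration, not taken from the study):

```python
# Equal-tempered Bohlen-Pierce scale: 13 equal divisions of the 3:1 "tritave".
BP_STEPS = 13

def bp_frequency(step: int, f0: float = 220.0) -> float:
    """Frequency of Bohlen-Pierce step `step` above an arbitrary reference f0 (Hz)."""
    return f0 * 3 ** (step / BP_STEPS)

# Step 13 completes the tritave: exactly three times the reference frequency.
print(bp_frequency(13))
```

Because each step spans 3^(1/13) (about 146.3 cents) rather than the 100-cent semitone, no Bohlen-Pierce interval matches a familiar diatonic one.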
Two experiments examined whether discrimination of component pitches in a harmonic interval is affected by the consonance or dissonance of the interval. A single probe pitch (B or C) was followed by a two-note harmonic interval including that pitch (e.g., C then C-F# or C-G) or not including it (e.g., C then B-F# or B-G). On each trial, subjects indicated by key press whether the probe note was repeated in the following interval. The target note in the interval either matched the probe or differed by one semitone (B or C). The other note produced a consonant (e.g., perfect fifth) or dissonant (e.g., tritone) context for the target. Pitch discrimination was faster and more accurate in consonant intervals than in dissonant ones when the context note was higher than the target (Experiment 1), but there was no effect of consonance when the target was higher (Experiment 2). We conclude that the perception of the lower, but not the upper, pitch in a two-note harmonic interval is affected by the interval’s consonance or dissonance. We discuss the results in terms of the theoretical framework of processing fluency and aesthetics proposed by Winkielman, Schwarz, Fazendeiro, and Reber (2003).