Guitar distortion used in rock music modifies a chord so that new frequencies appear in its harmonic structure. A distorted dyad (power chord) plays a special role in heavy metal music because distortion adds harmonics that form a major third interval, making the dyad sound similar to a major chord. We investigated how distortion affects cortical auditory processing of chords in musicians and nonmusicians. Electric guitar chords with or without distortion and with or without the interval of the major third (i.e., triads or dyads) were presented in an oddball design in which one of them served as a repeating standard stimulus and the others served as occasional deviants. This enabled the recording of event-related potentials (ERPs) of the electroencephalogram (EEG) related to deviance processing (the mismatch negativity, MMN, and the attention-related P3a component) in an ignore condition. MMN and P3a responses were elicited in most paradigms. Distorted chords in a nondistorted context elicited only early P3a responses. However, the power chord did not demonstrate a special role at the level of the ERPs. Earlier and larger MMN and P3a responses were elicited when distortion was modified than when only harmony (triad vs. dyad) was modified between standards and deviants. The MMN responses were largest when distortion and harmony deviated simultaneously. Musicians demonstrated larger P3a responses than nonmusicians. The results suggest mostly independent cortical auditory processing of distortion and harmony in Western individuals, and facilitated chord-change processing in musicians compared with nonmusicians. While distortion has been used in heavy rock music for decades, this study is among the first to shed light on its cortical basis.
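The acoustic claim above — that distorting a power chord introduces a major-third component — can be illustrated numerically. The following is a minimal sketch, not the study's stimuli: a root at 200 Hz plus a perfect fifth at 300 Hz is passed through an asymmetric hard clipper (a crude stand-in for a tube-style distortion stage; the frequencies and clipping thresholds are invented for the example). The clipping introduces an intermodulation product at 200 + 300 = 500 Hz, i.e., 2.5 times the root, which is a major third one octave above the root.

```python
import numpy as np

fs, dur = 48000, 1.0                       # sample rate (Hz) and duration (s)
t = np.arange(int(fs * dur)) / fs
f_root, f_fifth = 200.0, 300.0             # power chord: root + perfect fifth
clean = np.sin(2 * np.pi * f_root * t) + np.sin(2 * np.pi * f_fifth * t)

# Asymmetric hard clipping: the unequal thresholds break the waveform's
# symmetry, so even-order intermodulation products (including the sum
# tone at f_root + f_fifth) appear.
distorted = np.clip(3.0 * clean, -0.4, 0.9)

def magnitude_at(signal, freq):
    """Spectral magnitude at `freq`; the 1 s window makes each bin 1 Hz wide."""
    spectrum = np.abs(np.fft.rfft(signal))
    return spectrum[int(round(freq * dur))]

# 500 Hz = 2.5 * f_root: a major third (plus an octave) above the root.
# The clean dyad has essentially no energy there; the clipped one does.
sum_tone_gain = magnitude_at(distorted, 500.0) / (magnitude_at(clean, 500.0) + 1e-12)
```

A symmetric clipper would also produce the component, but only via higher odd-order products (e.g., 3·f_fifth − 2·f_root = 500 Hz); the asymmetric stage is used here because it generates the sum tone already at second order, making the effect easy to see.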
Major-minor and consonance-dissonance are two profound elements of Western tonal music and have strong affective connotations for Western listeners. This review summarizes recent evidence on the neurocognitive basis of major-minor and consonance-dissonance by presenting studies on their processing and on how that processing is affected by maturation, musical enculturation, and music training. Based on recent findings in the field, it is proposed that both classifications, particularly consonance-dissonance, have partly innate, biologically hard-wired properties. These properties can make them discriminable even for newborn infants and for individuals living outside Western music culture and, to a small extent, reflect their affective connotations in Western music. Still, musical enculturation and active music training drastically modify the sensory/acoustical as well as the affective processing of major-minor and consonance-dissonance, leading to considerable variance in psychophysiological and behavioral responses to these musical classifications.
Behavioral evidence indicates that musical context facilitates pitch discrimination. In the present study, we sought to determine whether pitch context and its familiarity might affect brain responses to pitch change even at the preattentive level. Ten musicians and ten nonmusicians, while concentrating on reading a book, were presented with sound stimuli that had an infrequent (p = 15%) pitch shift of 144 Hz. In the familiar condition, the infrequent third-position deviant changed the mode (major vs. minor) of the five-tone pattern. In the unfamiliar condition, patterns were formed from five arithmetically determined tone frequencies, the deviant not causing any change of mode. The no-context condition included only third-position tones. All deviants elicited the change-specific mismatch negativity component of the event-related potentials in both groups of subjects. In both musicians and nonmusicians, the pitch change in the familiar condition evoked a larger mismatch negativity amplitude than the change in the unfamiliar condition and, correspondingly, the change in the unfamiliar condition evoked a larger mismatch negativity than the change in the no-context condition. This suggests that preattentive pitch-change processing is generally enhanced in a familiar context. Moreover, the latency of the mismatch negativity was shorter for musicians than for nonmusicians in both the familiar and unfamiliar conditions, whereas no difference between groups was observed in the no-context condition. This finding indicates that, in response to sequential structured sound events, the auditory system reacts faster in musicians than in nonmusicians.
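The mismatch negativity measured in these oddball studies is conventionally extracted as a difference wave: the ERP averaged over standard trials is subtracted from the ERP averaged over deviant trials, and the MMN appears as a negativity roughly 100–250 ms after change onset. A simulated, illustrative sketch follows; all amplitudes, latencies, and trial counts are invented for the example and do not describe the studies above.

```python
import numpy as np

fs = 500                                  # simulated EEG sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)         # epoch: -100 ms to 500 ms around onset
rng = np.random.default_rng(1)

def averaged_erp(with_mmn, n_trials=400):
    """Average simulated single trials: shared N1/P2 shape, optional MMN."""
    n1 = -2.0 * np.exp(-((t - 0.10) / 0.03) ** 2)   # N1 peak near 100 ms
    p2 = 1.5 * np.exp(-((t - 0.20) / 0.05) ** 2)    # P2 peak near 200 ms
    mmn = -1.8 * np.exp(-((t - 0.16) / 0.04) ** 2) if with_mmn else 0.0
    trials = n1 + p2 + mmn + rng.normal(0, 3.0, (n_trials, t.size))
    return trials.mean(axis=0)                      # averaging cancels the noise

standard = averaged_erp(with_mmn=False)
deviant = averaged_erp(with_mmn=True)
difference = deviant - standard           # the MMN lives in the difference wave

win = (t >= 0.1) & (t <= 0.25)            # typical MMN search window
mmn_amplitude = difference[win].min()     # MMN is a negativity -> minimum
mmn_latency_ms = t[win][np.argmin(difference[win])] * 1000
```

The subtraction removes components common to both stimulus types (here N1 and P2), which is why deviance-specific activity survives even though each averaged ERP contains much larger obligatory responses.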
The present study compared the degree of similarity of timbre representations as observed with brain recordings, behavioral studies, and computer simulations. To this end, the electrical brain activity of subjects was recorded while they were repetitively presented with five sounds differing in timbre. Subjects read simultaneously so that their attention was not focused on the sounds. The brain activity was quantified in terms of a change-specific mismatch negativity component. Thereafter, the subjects were asked to judge the similarity of all pairs on a five-step scale. A computer simulation was made by first training a Kohonen self-organizing map with a large set of instrumental sounds. The map was then tested with the experimental stimuli, and the distance between the most active artificial neurons was measured. The results of these three methods were highly similar, suggesting that timbre representations reflected in behavioral measures correspond to neural activity, both as measured directly and as simulated in self-organizing neural network models.
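The simulation step can be sketched generically. Below is a minimal Kohonen self-organizing map, not the original implementation (which was trained on real instrument sounds): here the "timbres" are synthetic feature vectors, and dissimilarity between two stimuli is read off as the grid distance between their best-matching units, mirroring the distance measure described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(6, 6), epochs=100, lr0=0.5, sigma0=2.0):
    """Train a Kohonen self-organizing map on row-vector data."""
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)              # learning rate decays
        sigma = sigma0 * (1.0 - epoch / epochs) + 0.5  # neighborhood shrinks
        for x in data[rng.permutation(len(data))]:
            # Best-matching unit: the neuron whose weights are closest to x.
            win = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
            dist2 = ((coords - np.array(win)) ** 2).sum(-1)
            # Pull the winner and its grid neighbors toward the input.
            weights += lr * np.exp(-dist2 / (2 * sigma**2))[..., None] * (x - weights)
    return weights

def bmu(weights, x):
    """Grid position of the unit most activated by x."""
    return np.array(np.unravel_index(
        np.argmin(((weights - x) ** 2).sum(-1)), weights.shape[:2]))

# Two synthetic "timbre" clusters standing in for spectral feature vectors.
a, b = np.zeros(10), np.ones(10)
data = np.vstack([a + rng.normal(0, 0.05, (20, 10)),
                  b + rng.normal(0, 0.05, (20, 10))])
som = train_som(data)

# Map distance between most-active units serves as the dissimilarity measure:
# identical inputs land on the same unit, dissimilar ones on distant units.
d_same = np.abs(bmu(som, a) - bmu(som, a)).sum()
d_diff = np.abs(bmu(som, a) - bmu(som, b)).sum()
```

Because the neighborhood update preserves topology, inputs that are close in feature space end up activating nearby map units, which is what licenses using map distance as a proxy for perceived timbre dissimilarity.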