1-20 of 351 Search Results for
auditory-event-related-potentials
Journal Articles
Music Perception (2017) 35 (1): 3–37.
Published: 01 September 2017
Abstract
We provide the outline of a semantics for music. We take music cognition to be continuous with normal auditory cognition, and thus to deliver inferences about “virtual sources” of the music. As a result, sound parameters that trigger inferences about sound sources in normal auditory cognition produce related ones in music. But music also triggers inferences on the basis of the movement of virtual sources in tonal pitch space, which has points of stability, points of instability, and relations of attraction among them. We sketch a framework that aggregates inferences from normal auditory cognition and tonal inferences, by way of a theory of musical truth: a source undergoing a musical movement m is true of an object undergoing a series of events e just in case there is a certain structure-preserving map between m and e. This framework can help revisit aspects of musical syntax: Lerdahl and Jackendoff’s (1983) grouping structure can be seen to reflect the mereology (“partology”) of events that are abstractly represented in the music. Finally, we argue that this “referentialist” approach to music semantics still has the potential to provide an account of diverse emotional effects in music.
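The truth condition sketched in this abstract can be written schematically; the notation below is ours, not the authors’, and the abstract deliberately leaves open exactly which structure the map must preserve:

\[
\mathrm{True}(m, e) \iff \exists\, f : \mathrm{parts}(m) \to \mathrm{parts}(e)
\ \text{such that } f \text{ preserves temporal order and part–whole (mereological) structure.}
\]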
Journal Articles
Music Perception (2018) 35 (3): 315–331.
Published: 01 February 2018
Abstract
Guitar distortion used in rock music modifies a chord so that new frequencies appear in its harmonic structure. A distorted dyad (power chord) has a special role in heavy metal music due to its harmonics that create a major third interval, making it similar to a major chord. We investigated how distortion affects cortical auditory processing of chords in musicians and nonmusicians. Electric guitar chords with or without distortion and with or without the interval of the major third (i.e., triads or dyads) were presented in an oddball design where one of them served as a repeating standard stimulus and others served as occasional deviants. This enabled the recording of event-related potentials (ERPs) of the electroencephalogram (EEG) related to deviance processing (the mismatch negativity MMN and the attention-related P3a component) in an ignore condition. MMN and P3a responses were elicited in most paradigms. Distorted chords in a nondistorted context only elicited early P3a responses. However, the power chord did not demonstrate a special role at the level of the ERPs. Earlier and larger MMN and P3a responses were elicited when distortion was modified compared to when only harmony (triad vs. dyad) was modified between standards and deviants. The MMN responses were largest when distortion and harmony deviated simultaneously. Musicians demonstrated larger P3a responses than nonmusicians. The results suggest mostly independent cortical auditory processing of distortion and harmony in Western individuals, and facilitated chord change processing in musicians compared to nonmusicians. While distortion has been used in heavy rock music for decades, this study is among the first to shed light on its cortical basis.
Journal Articles
Music Perception (2013) 30 (5): 463–479.
Published: 01 June 2013
Abstract
This study investigates the effects of music training on brain activity to violations of melodic expectancies. We recorded behavioral and event-related brain potential (ERP) responses of musicians and nonmusicians to discrepancies of pitch between pairs of unfamiliar melodies based on Western classical rules. Musicians detected pitch deviations significantly better than nonmusicians. In musicians compared to nonmusicians, auditory cortical potentials to notes but not unrelated warning tones exhibited enhanced P200 amplitude generally, and in response to pitch deviations enhanced amplitude for N150 and P300 (P3a) but not N100 was observed. P3a latency was shorter in musicians compared to nonmusicians. Both the behavioral and cortical activity differences observed between musicians and nonmusicians in response to deviant notes were significant with stimulation of the right but not the left ear, suggesting that left-sided brain activity differentiated musicians from nonmusicians. The enhanced amplitude of N150 among musicians with right ear stimulation was positively correlated with earlier age onset of music training. Our data support the notion that long-term music training in musicians leads to functional reorganization of auditory brain systems, and that these effects are potentiated by early age onset of training.
Journal Articles
Music Perception (2016) 33 (4): 446–456.
Published: 01 April 2016
Abstract
A large body of evidence has shown that musicians’ brains differ in many ways from nonmusicians’ brains due to the particularly intense and prolonged sensorimotor training involved. Not much is known about the effects of the specific musical instrument played on brain processing of audiovisual information. In this study the effect of musical expertise was investigated in professional clarinetists and violinists. One hundred and eighty videos showing fragments of musical performances played on a violin or a clarinet were presented to musicians of G. Verdi Milan Conservatory and age-matched controls. Half of the musicians were violinists, the other half were clarinetists; event-related potentials (ERPs) were recorded from 128 scalp sites and analyzed. Participants judged how many notes were played in each clip. The task was extremely easy for all participants. Over prefrontal areas an anterior negativity response was found to be much larger in controls than in musicians, and in musicians for the unfamiliar over the familiar musical instrument. Furthermore, a later central negativity response showed a lack of note numerosity effect in the brains of musicians for their own instrument, but not for the unfamiliar instrument. The data indicate that music training is instrument-specific and that it profoundly affects prefrontal encoding of music-related information and auditory processing.
Journal Articles
Music Perception (2003) 20 (4): 357–382.
Published: 01 June 2003
Abstract
Our primary goal has been to elucidate a model of pitch memory by examining the brain activity of musicians with and without absolute pitch during listening tasks. Subjects, screened for both absolute and relative pitch abilities, were presented with two auditory tasks and one visual task that served as a control. In the first auditory task (pitch memory task), subjects were asked to differentiate between diatonic and nondiatonic tones within a tonal framework. In the second auditory task (contour task), subjects were presented with the same pitch sequences but instead asked to differentiate between tones moving upward or downward. For the visual control task, subjects were presented again with the same pitch sequences and asked to determine whether each pitch was diatonic or nondiatonic, only this time the note names appeared visually on the computer screen. Our findings strongly suggest that there are various levels of absolute pitch ability. Some absolute pitch subjects have, in addition to this skill, strong relative pitch abilities, and these differences are reflected quite consistently by the behavior of the P300 component of the event-related potential. Our research also strengthens the idea that the memory system for pitch and interval distances is distinct from the memory system for contour (W. J. Dowling, 1978). Our results are discussed within the context of the current absolute pitch literature.
Journal Articles
Music Perception (2006) 24 (2): 209–222.
Published: 01 December 2006
Christiane Neuhaus; Thomas R. Knösche
Abstract
In two experiments with event-related potentials (ERPs), we investigated the formation of auditory Gestalts. For this purpose, we used tone sequences of different structure. In the first experiment, we contrasted a rhythmic section to a section with random time values, each embedded in rhythmically irregular context. In the second experiment, melodies were contrasted to randomized sequences. Nonmusicians either had to detect the rhythmic pattern or to memorize short tone excerpts. Random versions in both experiments evoked a significant increase in the amplitude of P1 and P2. Randomized rhythm sections also evoked a late sustained negative potential. The enlarged P1 and P2 for random sequences might reflect stronger integration effort, as the predictability of tone progression was low. Thus, already at the early stage of encoding, sequence processing might be top-down-driven. The late negativity for rhythmically random sections is possibly task-related, reflecting expectancy violation in terms of regularity, since a metrical grid of beats could not be established. The memorizing of tone excerpts did not evoke a late neural correlate.
Journal Articles
Music Perception (2001) 19 (2): 199–222.
Published: 01 December 2001
Abstract
Behavioral evidence indicates that musical context facilitates pitch discrimination. In the present study, we sought to determine whether pitch context and its familiarity might affect brain responses to pitch change even at the preattentive level. Ten musicians and 10 nonmusicians, while concentrating on reading a book, were presented with sound stimuli that had an infrequent (p = 15%) pitch shift of 144 Hz. In the familiar condition, the infrequent third-position deviant changed the mode (major vs. minor) of the five-tone pattern. In the unfamiliar condition, patterns were formed from five arithmetically determined tone frequencies, the deviant not causing any change of mode. The no-context condition included only third-position tones. All deviants elicited the change-specific mismatch negativity component of the event-related potentials in both groups of subjects. In both musicians and nonmusicians, pitch change in the familiar condition evoked larger mismatch negativity amplitude than the change in the unfamiliar condition and, correspondingly, larger mismatch negativity in the unfamiliar condition than in the no-context condition. This suggests that preattentive pitch-change processing is generally enhanced in a familiar context. Moreover, the latency of the mismatch negativity was shorter for musicians than for nonmusicians in both the familiar and unfamiliar conditions, whereas no difference between groups was observed in the no-context condition. This finding indicates that, in response to sequential structured sound events, the auditory system reacts faster in musicians than in nonmusicians.
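The passive oddball design described above (infrequent pitch deviants at p = 15% while participants read) can be sketched as a stimulus-sequence generator. This is an illustrative sketch only: the no-consecutive-deviants constraint is our assumption, a common convention in MMN paradigms, not something stated in the abstract.

```python
import random

def oddball_sequence(n_trials, p_deviant=0.15, seed=1):
    """Generate a standard/deviant trial sequence for a passive oddball block.

    Assumption (not from the abstract): no two deviants occur in a row,
    a common constraint in MMN paradigms.
    """
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "deviant":
            seq.append("standard")          # forced standard after a deviant
        elif rng.random() < p_deviant:
            seq.append("deviant")
        else:
            seq.append("standard")
    return seq

block = oddball_sequence(1000)
# the constraint pushes the realized deviant rate slightly below p_deviant
rate = block.count("deviant") / len(block)
```

Because each deviant forces a following standard, the realized deviant rate converges to roughly p / (1 + p) ≈ 0.13 rather than 0.15.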
Journal Articles
Music Perception (1993) 10 (3): 305–316.
Published: 01 April 1993
M. Tervaniemi; K. Alho; P. Paavilainen; M. Sams; R. Näätänen
Abstract
An event-related brain potential (ERP) component called mismatch negativity (MMN) is elicited by physically deviant auditory stimuli presented among repetitive, "standard," stimuli. MMN reflects a mismatch process between sensory input from the deviant stimulus and a short-duration neuronal representation developed by the standard stimulus. The MMN amplitude is known to correlate with pitch-discrimination performance. The purpose of the present study was to investigate whether the MMN is different in absolute pitch (AP) possessors and nonpossessors. ERPs were recorded from AP and non-AP groups, which were matched with regard to musical training. It was found that deviant stimuli differing from standard tones by a quartertone or a semitone elicited an MMN irrespective of whether the stimulus was located on (white key/black key) or off the Western musical scale. These results were obtained with both sinusoidal and piano tones. The MMN was larger and earlier when the stimuli were piano tones than when they were sinusoidal tones and when the standard-deviant difference amounted to a semitone rather than a quartertone. However, differences between the groups were not found in auditory information processing reflected by the MMN component of the ERP. In the light of the earlier MMN results showing a close correlation between the MMN and pitch-discrimination accuracy, it might be concluded that pitch discrimination and identification are based on different brain mechanisms. In addition, the differences in the MMN amplitude and latency between sinusoidal and piano tones might be interpreted as suggesting that sensory memory traces, as reflected by the MMN, are also capable of storing information about very complex sound structures.
Journal Articles
Music Perception (2020) 38 (2): 136–194.
Published: 25 November 2020
Abstract
Interpersonal musical entrainment—temporal synchronization and coordination between individuals in musical contexts—is a ubiquitous phenomenon related to music’s social functions of promoting group bonding and cohesion. Mechanisms other than sensorimotor synchronization are rarely discussed, while little is known about cultural variability or about how and why entrainment has social effects. In order to close these gaps, we propose a new model that distinguishes between different components of interpersonal entrainment: sensorimotor synchronization—a largely automatic process manifested especially with rhythms based on periodicities in the 100–2000 ms timescale—and coordination, extending over longer timescales and more accessible to conscious control. We review the state of the art in measuring these processes, mostly from the perspective of action production, and in so doing present the first cross-cultural comparisons between interpersonal entrainment in natural musical performances, with an exploratory analysis that identifies factors that may influence interpersonal synchronization in music. Building on this analysis we advance hypotheses regarding the relationship of these features to neurophysiological, social, and cultural processes. We propose a model encompassing both synchronization and coordination processes and the relationship between them, the role of culturally shared knowledge, and of connections between entrainment and social processes.
Includes: Supplementary data
Journal Articles
Music Perception (2020) 38 (2): 214–242.
Published: 25 November 2020
Abstract
We combine perceptual research and acoustic analysis to probe the messy, pluralistic world of musical semantics, focusing on sound mass music. Composers and scholars describe sound mass with many semantic associations. We designed an experiment to evaluate to what extent these associations are experienced by other listeners. Thirty-eight participants heard 40 excerpts of sound mass music and related contemporary genres and rated them along batteries of semantic scales. Participants also described their rating strategies for some categories. A combination of qualitative stimulus analyses, Cronbach’s alpha tests, and principal component analyses suggest that cross-domain mappings between semantic categories and musical properties are statistically coherent between participants, implying non-arbitrary relations. Some aspects of participants’ descriptions of their rating strategies appear to be reflected in their numerical ratings. We sought quantitative bases for these associations in the acoustic signals. After attempts to correlate semantic ratings with classical audio descriptors failed, we pursued a neuromimetic representation called spectrotemporal modulations (STMs), which explains much more of the variance in semantic ratings. This result suggests that semantic interpretations of music may involve qualities or attributes that are objectively present in the music, since computer simulation can use sound signals to partially reconstruct human semantic ratings.
Journal Articles
Music Perception (2020) 38 (2): 106–135.
Published: 25 November 2020
Abstract
What factors influence listeners’ perception of meter in a musical piece or a musical style? Many cues are available in the musical “surface,” i.e., the pattern of sounds physically present during listening. Models of meter processing focus on the musical surface. However, percepts of meter and other musical features may also be shaped by reactivation of previously heard music, consistent with exemplar accounts of memory. The current study explores a phenomenon that is here termed metrical restoration: listeners who hear melodies with ambiguous meters report meter preferences that match previous listening experiences in the lab, suggesting reactivation of those experiences. Previous studies suggested that timbre and brief rhythmic patterns may influence metrical restoration. However, variations in the magnitude of effects in different experiments suggest that other factors are at work. Experiments reported here explore variation in metrical restoration as a function of: melodic diversity in timbre and tempo, associations of rhythmic patterns with particular melodies and meters, and associations of meter with overall melodic form. Rhythmic patterns and overall melodic form, but not timbre, had strong influences. Results are discussed with respect to style-specific or culture-specific musical processing, and everyday listening experiences. Implications for models of musical memory are also addressed.
Journal Articles
Music Perception (2020) 38 (1): 66–77.
Published: 09 September 2020
Abstract
In studies of perceptual and neural processing differences between musicians and nonmusicians, participants are typically dichotomized on the basis of personal report of musical experience. The present study relates self-reported musical experience and objectively measured musical aptitude to a skill that is important in music perception: temporal resolution (or acuity). The Advanced Measures of Music Audiation (AMMA) test was used to objectively assess participant musical aptitude, and adaptive psychophysical measurements were obtained to assess temporal resolution on two tasks: within-channel gap detection and across-channel gap detection. Results suggest that musical aptitude measured with the AMMA and self-reporting of music experiences (duration of music instruction) are both related to temporal resolution ability in musicians. The relationship between musical aptitude and/or duration of music training is important to music educators advocating for the benefits of music programs as well as in behavioral and neurophysiological research.
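The gap-detection thresholds above come from adaptive psychophysical tracks. The abstract does not name the specific tracking procedure, so the 2-down/1-up rule below (which converges on about 70.7% correct) is an illustrative assumption, one of the most common transformed up-down rules in auditory psychophysics.

```python
def two_down_one_up(level, correct, n_correct, step):
    """One update of a 2-down/1-up adaptive staircase.

    level      current gap duration (e.g., ms); smaller = harder
    correct    whether the listener detected the gap on this trial
    n_correct  consecutive correct responses carried between trials
    Returns (new_level, new_n_correct).
    """
    if correct:
        n_correct += 1
        if n_correct == 2:              # two correct in a row: shrink the gap
            return level - step, 0
        return level, n_correct
    return level + step, 0              # any miss: widen the gap

# a short illustrative track starting at a 10 ms gap with 1 ms steps
level, n_correct = 10.0, 0
for resp in [True, True, False, True, True]:
    level, n_correct = two_down_one_up(level, resp, n_correct, 1.0)
```

In practice the track continues until a set number of reversals, and the threshold is estimated by averaging the levels at the final reversals.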
Journal Articles
Music Perception (2020) 38 (1): 78–98.
Published: 09 September 2020
Abstract
Consonance and dissonance are basic phenomena in the perception of chords that can be discriminated very early in sensory processing. Musical expertise has been shown to facilitate neural processing of various musical stimuli, but it is unclear whether this applies to detecting consonance and dissonance. Our study aimed to determine if sensitivity to increasing levels of dissonance differs between musicians and nonmusicians, using a combination of neural (electroencephalographic mismatch negativity, MMN) and behavioral measurements (conscious discrimination). Furthermore, we wanted to see if focusing attention on the sounds modulated the neural processing. We used chords composed of either highly consonant or highly dissonant intervals and further manipulated the degree of dissonance to create two levels of dissonant chords. Both groups discriminated dissonant chords from consonant ones neurally and behaviorally. The magnitude of the MMN differed only marginally between the more dissonant and the less dissonant chords. The musicians outperformed the nonmusicians in the behavioral task. As the dissonant chords elicited MMN responses for both groups, sensory dissonance seems to be discriminated at an early sensory level, irrespective of musical expertise, and the facilitating effects of musicianship for this discrimination may arise in later stages of auditory processing, appearing only in the behavioral auditory task.
Journal Articles
Music Perception (2020) 38 (1): 27–45.
Published: 09 September 2020
Abstract
The parsing of undifferentiated tone sequences into groups of qualitatively distinct elements is one of the earliest rhythmic phenomena to have been investigated experimentally (Bolton, 1894). The present study aimed to replicate and extend these findings through online experimentation using a spontaneous grouping paradigm with forced-choice response (from 1 to 12 tones per group). Two types of isochronous sequences were used: equitone sequences, which varied only with respect to signal rate (200, 550, or 950 ms interonset intervals), and accented sequences, in which accents were added every two or three tones to test the effect of induced grouping (duple vs. triple) and accent type (intensity, duration, or pitch). In equitone sequences, participants’ grouping percepts (N = 4,194) were asymmetrical and tempo-dependent, with “no grouping” and groups of four being most frequently reported. In accented sequences, slower rate, induced triple grouping, and intensity accents correlated with increases in group length. Furthermore, the probability of observing a mixed metric type—that is, grouping percepts divisible by both two and three (6 and 12)—was found to be highest in faster sequences with induced triple grouping. These findings suggest that lower-level triple grouping gives rise to binary grouping percepts at higher metrical levels.
Journal Articles
Music Perception (2020) 38 (1): 1–26.
Published: 09 September 2020
Guilherme Schmidt Câmara
Abstract
This study reports on an experiment that tested whether drummers systematically manipulated not only onset but also duration and/or intensity of strokes in order to achieve different timing styles. Twenty-two professional drummers performed two patterns (a simple “back-beat” and a complex variation) on a drum kit (hi-hat, snare, kick) in three different timing styles (laid-back, pushed, on-beat), in tandem with two timing references (metronome and instrumental backing track). As expected, onset location corresponded to the instructed timing styles for all instruments. The instrumental reference led to more pronounced timing profiles than the metronome (pushed strokes earlier, laid-back strokes later). Also, overall the metronome reference led to earlier mean onsets than the instrumental reference, possibly related to the “negative mean asynchrony” phenomenon. Regarding sound, results revealed systematic differences across participants in the duration (snare) and intensity (snare and hi-hat) of strokes played using the different timing styles. Pattern also had an impact: drummers generally played the rhythmically more complex pattern 2 louder than the simpler pattern 1 (snare and kick). Overall, our results lend further evidence to the hypothesis that both temporal and sound-related features contribute to the indication of the timing of a rhythmic event in groove-based performance.
Journal Articles
Music Perception (2020) 38 (1): 46–65.
Published: 09 September 2020
Abstract
Music often triggers a pleasurable urge in listeners to move their bodies in response to the rhythm. In music psychology, this experience is commonly referred to as groove. This study presents the Experience of Groove Questionnaire, a newly developed self-report questionnaire that enables respondents to subjectively assess how strongly they feel an urge to move and pleasure while listening to music. The development of the questionnaire was carried out in several stages: candidate questionnaire items were generated on the basis of the groove literature, and their suitability was judged by fifteen groove and rhythm research experts. Two listening experiments were carried out in order to reduce the number of items, to validate the instrument, and to estimate its reliability. The final questionnaire consists of two scales with three items each that reliably measure respondents’ urge to move (Cronbach’s α = .92) and their experience of pleasure (α = .97) while listening to music. The two scales are highly correlated (r = .80), which indicates a strong association between motor and emotional responses to music. The scales of the Experience of Groove Questionnaire can independently be applied in groove research and in a variety of other research contexts in which listeners’ subjective experience of music-induced movement and enjoyment needs to be addressed: for example the study of the interaction between music and motivation in sports and research on therapeutic applications of music in people with neurological movement disorders.
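The reliability figures quoted above (α = .92 and α = .97 for three-item scales) are Cronbach's alpha values, which can be computed directly from a respondents-by-items rating matrix. The data below are made up for illustration; only the formula matches the statistic reported in the abstract.

```python
from statistics import pvariance

def cronbach_alpha(ratings):
    """Cronbach's alpha: one row per respondent, one column per item.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(ratings[0])
    item_vars = [pvariance([row[i] for row in ratings]) for i in range(k)]
    totals = [sum(row) for row in ratings]
    return k / (k - 1) * (1 - sum(item_vars) / pvariance(totals))

# hypothetical 3-item "urge to move" ratings from four respondents
ratings = [
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
    [1, 2, 1],
]
alpha = cronbach_alpha(ratings)  # near 1: the three items covary strongly
```

Alpha approaches 1 as the items measure the same construct; if the items were uncorrelated, the summed item variances would roughly equal the total-score variance and alpha would fall toward 0.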
Journal Articles
Music Perception (1991) 8 (4): 405–430.
Published: 01 July 1991
Dalia Cohen; Avner Erez
Abstract
This study investigates the possibility of measuring cognitive responses to musical stimuli. The experiments were based on measurements of the event-related potential (ERP) of three electroencephalographic electrodes. The musical stimuli consisted of five-tone pitch patterns (constant intensity, duration, and timbre), based on the Western tonal system. Subjects compared reference patterns with comparison patterns, which were either identical to the reference pattern or nonidentical because of predetermined changes in the comparison patterns. The patterns were presented in auditory or visual mode in a slow or a fast version. The results show a striking cognitive response to nonidentical tones in the comparison patterns and, to a lesser extent, to exceptional tones even in the reference patterns. Different levels of response were detected according to the type of pattern. We also found some evidence of factors contributing to "subjective equivalence" between different patterns. The correlation between results from this study and those from earlier studies based on the same measuring technique with different kinds of input data or using different methods and techniques is discussed.
Journal Articles
Music Perception (2020) 37 (5): 373–391.
Published: 10 June 2020
... immediately signals its overall type. If rondo form is aurally identifiable only through the refrain’s return—an event that first occurs toward the middle of a piece—then it follows that the opening of a rondo will not reveal its large-scale form for listeners. Of course, eighteenth-century scores do not...
Abstract
Sonata and rondo movements are often defined in terms of large-scale form, yet in the classical era, rondos were also identified according to their lively, cheerful character. We hypothesized that sonatas and rondos could be categorized based on stylistic features, and that rondos would involve more acoustic cues for happiness (e.g., higher average pitch height and higher average attack rate). In a corpus analysis, we examined paired movement openings from 180 instrumental works, composed between 1770 and 1799. Rondos had significantly higher pitch height and attack rate, as predicted, and there were also significant differences related to dynamics, meter, and cadences. We then conducted an experiment involving participants with at least 5 years of formal music training or less than 6 months of formal music training. Participants listened to 120 15-second audio clips, taken from the beginnings of movements in our corpus. After a training phase, they attempted to categorize the excerpts (2AFC task). D-prime scores were significantly higher than chance levels for both groups, and in post-experiment questionnaires, participants without music training reported that rondos sounded happier than sonatas. Overall, these results suggest that classical formal types have distinct stylistic and affective conventions.
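The d-prime scores reported for the 2AFC categorization task above come from signal detection theory: d′ is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch of that computation (the function name and the log-linear correction for extreme rates are illustrative choices, not necessarily those used in the study):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Applies a log-linear correction so rates of exactly 0 or 1
    do not send the z-transform to infinity.
    """
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    h = (hits + 0.5) / (n_signal + 1)           # corrected hit rate
    f = (false_alarms + 0.5) / (n_noise + 1)    # corrected false-alarm rate
    z = NormalDist().inv_cdf                    # inverse standard-normal CDF
    return z(h) - z(f)

print(d_prime(10, 10, 10, 10))  # chance performance gives d' = 0.0
```

A d′ of 0 corresponds to chance-level categorization; the abstract's finding that d′ was significantly above chance for both groups means listeners could discriminate rondo openings from sonata openings better than guessing.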
Journal Articles
Music Perception (2020) 37 (4): 347–358.
Published: 11 March 2020
... practices of ritual, protest, and the enactment of identity that span the range from speech to song and allows consideration of the manner in which such activities serve to ground collectives. We consider how the musical elements in joint speech such as rhythm, melody, and instrumentation are related to the...
Abstract
Speech and song have frequently been treated as contrasting categories. We here observe a variety of collective activities in which multiple participants utter the same thing at the same time, a behavior we call joint speech. This simple empirical definition serves to single out practices of ritual, protest, and the enactment of identity that span the range from speech to song and allows consideration of the manner in which such activities serve to ground collectives. We consider how the musical elements in joint speech such as rhythm, melody, and instrumentation are related to the context of occurrence and the purposes of the participants. While music and language have been greatly altered by developments in media technologies—from writing to recordings—joint speech has been, and continues to be, an integral part of practices, both formal and informal, from which communities derive their identity. The absence of joint speech from the scientific treatment of language has made language appear as an abstract intellectual and highly individualized activity. Joint speech may act as a corrective to draw our attention back to the voice in context, and the manner in which collective identities are enacted.