Keywords: timbre (results 1–11 of 11)
Journal Articles
Music Perception (2020) 38 (2): 106–135.
Published: 25 November 2020
Abstract
What factors influence listeners’ perception of meter in a musical piece or a musical style? Many cues are available in the musical “surface,” i.e., the pattern of sounds physically present during listening. Models of meter processing focus on the musical surface. However, percepts of meter and other musical features may also be shaped by reactivation of previously heard music, consistent with exemplar accounts of memory. The current study explores a phenomenon that is here termed metrical restoration: listeners who hear melodies with ambiguous meters report meter preferences that match previous listening experiences in the lab, suggesting reactivation of those experiences. Previous studies suggested that timbre and brief rhythmic patterns may influence metrical restoration. However, variations in the magnitude of effects in different experiments suggest that other factors are at work. Experiments reported here explore variation in metrical restoration as a function of: melodic diversity in timbre and tempo, associations of rhythmic patterns with particular melodies and meters, and associations of meter with overall melodic form. Rhythmic patterns and overall melodic form, but not timbre, had strong influences. Results are discussed with respect to style-specific or culture-specific musical processing, and everyday listening experiences. Implications for models of musical memory are also addressed.
Music Perception (2019) 37 (2): 134–146.
Published: 01 December 2019
Weixia Zhang; Fang Liu; Linshu Zhou; Wanqi Wang; Hanyuan Jiang; Cunmei Jiang
Abstract
Timbre is an important factor that affects the perception of emotion in music. To date, little is known about the effects of timbre on neural responses to musical emotion. To address this issue, we used ERPs to investigate whether there are different neural responses to musical emotion when the same melodies are presented in different timbres. With a cross-modal affective priming paradigm, target faces were primed by affectively congruent or incongruent melodies without lyrics presented in the violin, flute, and voice. Results showed a larger P3 and a larger left anterior distributed LPC in response to affectively incongruent versus congruent trials in the voice version. For the flute version, however, only the LPC effect was found, which was distributed over centro-parietal electrodes. Unlike the voice and flute versions, an N400 effect was observed in the violin version. These findings revealed different patterns of neural responses to musical emotion when the same melodies were presented in different timbres, and provide evidence for the hypothesis that there are specialized neural responses to the human voice.
Music Perception (2019) 37 (1): 57–65.
Published: 01 September 2019
Abstract
Note-to-note changes in brightness can influence the perception of interval size. Changes that are congruent with pitch tend to expand perceived interval size, whereas incongruent changes tend to contract it. In the case of singing, the brightness of notes can vary as a function of vowel content. In the present study, we investigated whether note-to-note changes in brightness arising from vowel content influence perception of relative pitch. In Experiment 1, three-note sequences were synthesized so that they varied with regard to the brightness of vowels from note to note. As expected, brightness influenced judgments of interval size. Changes in brightness that were congruent with changes in pitch led to an expansion of perceived interval size. A follow-up experiment confirmed that the results of Experiment 1 were not due to pitch distortions. In Experiment 2, the final note of three-note sequences was removed, and participants were asked to make speeded judgments of the pitch contour. An analysis of response times revealed that the brightness of vowels influenced contour judgments. Changes in brightness that were congruent with changes in pitch led to faster response times than did incongruent changes. These findings show that the brightness of vowels yields an extra-pitch influence on the perception of relative pitch in song.
Music Perception (2017) 35 (2): 144–164.
Published: 01 December 2017
Sven-Amin Lembke; Scott Levine; Stephen McAdams
Abstract
Achieving a blended timbre between two instruments is a common aim of orchestration. It relates to the auditory fusion of simultaneous sounds and can be linked to several acoustic factors (e.g., temporal synchrony, harmonicity, spectral relationships). Previous research has left unanswered whether and how musicians control these factors during performance to achieve blend. For instance, timbral adjustments could be oriented towards the leading performer. In order to study such adjustments, pairs of one bassoon and one horn player participated in a performance experiment involving several musical and acoustical factors. Performances were evaluated through acoustic measures and behavioral ratings, investigating differences across performer roles as leaders or followers, unison or non-unison intervals, and earlier or later segments of performances. In addition, the acoustic influences of the performance room and of communication impairment were investigated. Role assignments affected spectral adjustments in that musicians acting as followers adjusted toward a “darker” timbre (i.e., realized by reducing the frequencies of the main formant or spectral centroid). Notably, these adjustments occurred together with slight reductions in sound level, although this was more apparent for horn than bassoon players. Furthermore, coordination seemed more critical in unison performances and also improved over the course of a performance. These findings parallel similar dependencies in how performers coordinate their timing, and suggest that performer roles also determine the nature of the adjustments needed to achieve the common aim of a blended timbre.
Music Perception (2017) 34 (3): 313–318.
Published: 01 February 2017
Abstract
Children and adults, with or without music training, exhibit better memory for vocal melodies (without lyrics) than for instrumental melodies (Weiss, Schellenberg, Trehub, & Dawber, 2015; Weiss, Trehub, & Schellenberg, 2012; Weiss, Trehub, Schellenberg, & Habashi, 2016; Weiss, Vanzella, Schellenberg, & Trehub, 2015). In the present study, we compared adults’ memory for vocal and instrumental melodies, as before, but with two additional singers, one female (same pitch level as the original female) and one male (7 semitones lower). In an exposure phase, 90 participants (M = 4.1 years of training, SD = 3.9) rated their liking of 24 melodies—6 each in voice, piano, banjo, and marimba. After a short break, they heard the same melodies plus 24 timbre-matched foils (6 per timbre) and rated their recognition of each melody. Recognition was better for vocal melodies than for melodies in every other timbre, replicating previous findings. Importantly, the memory advantage was comparable across voices, despite the fact that liking ratings for vocal melodies differed by singer. Our results provide support for the notion that the vocal advantage in memory for melodies is independent of the idiosyncrasies of specific singers or of vocal attractiveness, arising instead from enhanced processing of a biologically significant timbre.
Music Perception (2016) 33 (4): 446–456.
Published: 01 April 2016
Abstract
A large body of evidence has shown that musicians’ brains differ in many ways from nonmusicians’ brains due to the particularly intense and prolonged sensorimotor training involved. Not much is known about the effects of the specific musical instrument played on brain processing of audiovisual information. In this study, the effect of musical expertise was investigated in professional clarinetists and violinists. One hundred and eighty videos showing fragments of musical performances played on a violin or a clarinet were presented to musicians of the G. Verdi Milan Conservatory and age-matched controls. Half of the musicians were violinists, the other half clarinetists; event-related potentials (ERPs) were recorded from 128 scalp sites and analyzed. Participants judged how many notes were played in each clip. The task was extremely easy for all participants. Over prefrontal areas, an anterior negativity response was found to be much larger in controls than in musicians, and in musicians for the unfamiliar over the familiar musical instrument. Furthermore, a later central negativity response showed a lack of a note numerosity effect in musicians’ brains for their own instrument, but not for the unfamiliar instrument. The data indicate that music training is instrument-specific and that it profoundly affects prefrontal encoding of music-related information and auditory processing.
Music Perception (2014) 31 (3): 288–296.
Published: 01 February 2014
Michael Schutz; Jonathan M. Vaisberg
Abstract
Recent work from our lab illustrates amplitude envelope’s crucial role in both perceptual (Schutz, 2009) and cognitive (Schutz & Stefanucci, 2010) processing. Consequently, we surveyed the amplitude envelopes of sounds used in Music Perception, categorizing them as flat (i.e., trapezoidal in shape), percussive (a.k.a. “damped” or “decaying”), other, or undefined. Curiously, the undefined category represented the largest percentage of sounds observed, with 35% lacking definition of this important property (approximately 27% were percussive, 27% flat, and 11% other). This omission of relevant information was not indicative of general inattention to methodological detail: studies using tones with undefined amplitude envelopes generally defined other properties such as spectral structure (85%), duration (80%), and even the model of headphones/speakers (65%) at high rates. Consequently, this targeted omission is intriguing and suggests that amplitude envelope is an area ripe for future research.
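The flat (trapezoidal) and percussive (damped/decaying) envelope categories from this survey can be sketched numerically. The following is a minimal illustration; the function names and the ramp and decay parameters are choices made for this example, not values from the study:

```python
import numpy as np

def flat_envelope(n_samples, ramp_frac=0.1):
    """'Flat' (trapezoidal) envelope: linear rise, sustained plateau,
    linear fall. ramp_frac is the fraction of samples in each ramp."""
    r = int(n_samples * ramp_frac)
    env = np.ones(n_samples)
    env[:r] = np.linspace(0.0, 1.0, r)
    env[n_samples - r:] = np.linspace(1.0, 0.0, r)
    return env

def percussive_envelope(n_samples, decay=5.0):
    """'Percussive' (damped) envelope: immediate onset followed by
    exponential decay, as produced by struck or plucked sources."""
    t = np.linspace(0.0, 1.0, n_samples)
    return np.exp(-decay * t)

# Multiplying a sinusoidal carrier by an envelope yields the test tone:
sr = 8000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
flat_tone = tone * flat_envelope(sr)
perc_tone = tone * percussive_envelope(sr)
```

Reporting which of these shapes (and which parameters) a stimulus used is exactly the methodological detail the survey found to be missing in 35% of studies.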
Music Perception (2012) 30 (1): 49–70.
Published: 01 September 2012
Tuomas Eerola; Rafael Ferrer; Vinoo Alluri
Abstract
Considerable effort has been made towards understanding how acoustic and structural features contribute to emotional expression in music, but relatively little attention has been paid to the role of timbre in this process. Our aim was to investigate the role of timbre in the perception of affect dimensions in isolated musical sounds, by way of three behavioral experiments. In Experiment 1, participants evaluated the perceived affects of 110 instrument sounds that were equal in duration, pitch, and dynamics, using a three-dimensional affect model (valence, energy arousal, and tension arousal) as well as preference and emotional intensity. In Experiment 2, an emotional dissimilarity task was applied to a subset of the instrument sounds used in Experiment 1 to better reveal the underlying affect structure. In Experiment 3, the perceived affect dimensions as well as the preference and intensity of a new set of 105 instrument sounds were rated by participants. These sounds were also uniform in pitch, duration, and playback dynamics but contained systematic manipulations in the dynamics of sound production, articulation, and the ratio of high-frequency to low-frequency energy. The affect dimensions for all the experiments were then explained in terms of three kinds of extracted acoustic features: spectral (e.g., ratio of high-frequency to low-frequency energy), temporal (e.g., attack slope), and spectro-temporal (e.g., spectral flux). High agreement among the participants’ ratings across the experiments suggested that even isolated instrument sounds contain cues that indicate affective expression, and these are recognized as such by listeners. A dominant portion (50–57%) of two affect dimensions (valence and energy arousal) could be predicted by linear combinations of a few acoustic features such as the ratio of high-frequency to low-frequency energy, attack slope, and spectral regularity. Links between these features and those observed in the vocal expression of affect and other sound phenomena are discussed.
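Two of the spectral features this abstract names, the spectral centroid (a standard correlate of brightness) and the ratio of high-frequency to low-frequency energy, are straightforward to compute from a magnitude spectrum. Below is a minimal NumPy sketch; the function names and the 1500 Hz cutoff are illustrative assumptions, not values taken from the study:

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Amplitude-weighted mean frequency of the magnitude spectrum,
    a standard acoustic correlate of perceived brightness."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

def hf_lf_ratio(signal, sr, cutoff_hz=1500.0):
    """Ratio of spectral energy above vs. below a cutoff frequency.
    The cutoff here is an arbitrary choice for illustration."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return power[freqs >= cutoff_hz].sum() / power[freqs < cutoff_hz].sum()
```

For a pure 440 Hz tone the centroid sits at 440 Hz; adding energy at higher partials raises both measures, which is the sense in which they track brightness.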
Music Perception (2011) 28 (3): 265–278.
Published: 01 February 2011
Mathieu Barthet; Philippe Depalle; Richard Kronland-Martinet; Sølvi Ystad
Abstract
In a previous study, mechanical and expressive clarinet performances of Bach's Suite No. II and Mozart's Quintet for Clarinet and Strings were analyzed to determine whether some acoustical correlates of timbre (e.g., spectral centroid), timing (intertone onset interval), and dynamics (root-mean-square envelope) showed significant differences depending on the expressive intention of the performer. In the present companion study, we investigate the effects of these acoustical parameters on listeners' preferences. An analysis-by-synthesis approach was used to transform previously recorded clarinet performances by reducing the expressive deviations from the spectral centroid, the intertone onset interval, and the acoustical energy. Twenty skilled musicians were asked to select which version they preferred in a paired-comparison task. Statistical analyses showed that removing the spectral centroid variations resulted in the greatest loss of musical preference.
Music Perception (2010) 28 (2): 135–154.
Published: 01 December 2010
Abstract
This study deals with the acoustical factors liable to account for expressiveness in clarinet performances. Mechanical and expressive performances of excerpts from Bach's Suite No. II and Mozart's Quintet for Clarinet and Strings were recorded. Timbre, timing, dynamics, and pitch descriptors were extracted from the recorded performances. The data were processed using a two-way analysis of variance, where the musician's expressive intentions and the note factors were defined as the independent variables. In both musical excerpts, a strong effect of the expressive intention was observed on the timbre (attack time, spectral centroid, odd/even ratio), timing (intertone onset intervals) and dynamics (root mean square envelope) descriptors. The changes in the timbre descriptors were found to depend on the position of the notes in the musical phrases. These results suggest that timbre, as well as timing and dynamics variations, may mediate expressiveness in the musical messages transmitted from performers to listeners.
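One of the timbre descriptors this abstract names, the odd/even ratio, compares energy at odd versus even harmonics of the fundamental; it is relevant for clarinet analysis because clarinet spectra are dominated by odd partials. The sketch below assumes harmonics fall near exact FFT bins; the function name and defaults are illustrative, and the exact descriptor definition used in the study may differ:

```python
import numpy as np

def odd_even_ratio(signal, sr, f0, n_harmonics=10):
    """Ratio of summed spectral energy at odd vs. even harmonics of f0.
    Clarinet-like spectra, dominated by odd partials, give large values."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)

    def energy_at(f):
        # Energy in the FFT bin nearest the target frequency.
        return power[np.argmin(np.abs(freqs - f))]

    odd = sum(energy_at(f0 * k) for k in range(1, n_harmonics + 1, 2))
    even = sum(energy_at(f0 * k) for k in range(2, n_harmonics + 1, 2))
    return odd / even
```

A signal built mostly from odd partials yields a large ratio, while an even-dominated spectrum yields a ratio below one, so the descriptor cleanly separates the two spectral profiles.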
Music Perception (2008) 25 (3): 253–255.
Published: 01 February 2008
Tuomas Eerola; Rafael Ferrer (University of Jyväskylä, Finland)
Abstract
An overview of the main instrument sample libraries used in psychoacoustics, sound analysis, and instrument classification research is presented. One of the central libraries, the McGill University Master Samples (MUMS; Opolko & Wapnick, 2006), is reviewed in detail. This library has over 6,000 sound samples representing most classical and popular musical instruments and a wide variety of articulation styles. Closer scrutiny revealed a conspicuous number of labeling errors, intonation inaccuracies, and the absence of an integrated database. These errors are identified and catalogued, and revisions are implemented in the form of an installer.