Barbara Tillmann
1-7 of 7 results
Journal Articles
Music Perception (2014) 32 (1): 11–32.
Published: 01 September 2014
Abstract
Previous research (Music Perception, 2002, Issue 2) demonstrated improvement in recognition memory across delays increasing from 5 to 15 s while listening to novel music, attributable to a decline in false alarms to similar lures. We hypothesize that this improvement results from delayed binding of features. At short delays, targets and similar lures are easily confused because they share individual features such as melodic contour and musical key. Binding those features into a coherent memory representation—such as encoding the pitch level at which the contour is attached to the scale—reduces that confusion and hence false alarms to similar lures. Here we report eight experiments in which we explore the conditions under which this continued encoding occurs, and test specific hypotheses concerning the particular features involved. These phenomena involve the binding of complex features of nonverbal material, and are explained in terms of theoretical descriptions of the features and the representations resulting from binding. We envisage future studies investigating this binding phenomenon with neurophysiological methods in the study of cognition in aging.
Journal Articles
Music Perception (2013) 30 (4): 419–425.
Published: 01 April 2013
Abstract
Recognition memory for details of musical phrases (discrimination between targets and similar lures) improves for up to 15 s following the presentation of a target, during continuous listening to the ongoing piece. This is attributable to binding of stimulus features during that time interval. The ongoing-listening paradigm is an ecologically valid approach for investigating short-term memory, but previous studies made use of relatively mechanical, MIDI-produced stimuli. The present study assessed whether expressive performances would modulate the previously reported finding. Because expressive performances introduced slight differences between initially presented targets and their target-test items, expressive performance could have made the task more difficult overall than the previously used mechanical renderings did. However, results revealed an even stronger improvement for the expressive pieces than for the mechanical pieces. This pattern of results was observed for participants varying in their level of musical experience, though the difference between the expressive and mechanical conditions was more pronounced for the less-experienced participants. Overall, our study showed that the memory improvement phenomenon extends to more realistic musical material that includes the expressive timing characteristics of live performance.
Journal Articles
Music Perception (2012) 29 (3): 255–267.
Published: 01 February 2012
Abstract
We investigated working memory (WM) performance for tone sequences that either respected musical regularities (tonal sequences) or did not (atonal sequences), using a forward and a backward recognition task. Participants indicated whether two sequences were the same or different, with "same" defined as all tones played correctly in either the same order (forward task) or backward order (backward task). For the forward task, both nonmusician and musician participants showed better performance for tonal than for atonal sequences, supporting the hypothesis that musically structured material improves WM performance during the maintenance of tone information. For the backward task, neither nonmusicians nor musicians showed better performance for tonal compared to atonal sequences. Our findings suggest that musical structure influences WM for tones during maintenance (forward recognition task), but not during manipulation (backward recognition task).
Journal Articles
Music Perception (2009) 26 (3): 211–221.
Published: 01 February 2009
Abstract
The musical priming paradigm allows for investigation of listeners' expectations based on their implicit knowledge of tonal stability. To date, priming data are limited to reports of facilitated processing for tonic over nontonic events. The special status of the tonic as a cognitive reference point brings into question the subtlety of listeners' tonal knowledge: Is the facilitated processing observed in priming studies limited to tonic events, or is tone processing influenced by subtler tonal contrasts? The present study investigated tonal priming for mediants (the third scale degree) over leading tones (the seventh scale degree) presented in melodic contexts. Experiment 1 used a timbre discrimination task and Experiment 2 an intonation task. Facilitated processing was observed for the more tonally stable mediants over the less stable leading tones, thus showing that priming effects are not limited to pairs of tonal degrees including the tonic. This finding emphasizes the subtlety of nonexpert listeners' tonal knowledge.
Journal Articles
Music Perception (2008) 25 (4): 271–283.
Published: 01 April 2008
Abstract
Our study investigated the perception of pitch and time dimensions in chord sequences by patients with cerebellar damage. In eight-chord sequences, the tonal relatedness and temporal regularity of the chords were manipulated, and their processing was tested with indirect and direct investigation methods (i.e., a priming paradigm in Experiment 1; subjective judgments of completion and temporal regularity in Experiments 2 and 3). Experiment 1 replicated a musical relatedness effect despite cerebellar damage (see Tillmann, Justus, & Bigand, 2008), and Experiment 2 extended it to completion judgments. This outcome suggests that an intact cerebellum is not necessary for accessing tonal knowledge. However, the data on temporal manipulations suggest that the cerebellum is involved in the processing of temporal regularities in music. The comparison between task performances obtained for the same sequences further suggests that the altered processing of temporal structures in these patients impairs the rapid development of musical expectations along the time dimension.
Journal Articles
Music Perception (2008) 25 (4): 331–343.
Published: 01 April 2008
Abstract
Recently, we pointed out that a small number of individuals fail to acquire basic musical abilities, and that these deficiencies might have neuronal and genetic underpinnings. Such a musical disorder is now termed "congenital amusia," an umbrella term for lifelong musical disabilities that cannot be attributed to mental retardation, deafness, or lack of exposure. Congenital amusia is estimated to affect 4% of the general population. Despite this relatively high prevalence, cases of congenital amusia have been difficult to identify. We present here a novel on-line test that can identify such cases in 15 minutes, provided that the participant's cohort is taken into account. The results also confirm that congenital amusia is typically expressed as a deficit in perceiving musical pitch but not musical time.
Journal Articles
Music Perception (2003) 20 (3): 283–305.
Published: 01 March 2003
Abstract
We investigated the spontaneous detection of "wrong notes" in a melody that modulated continuously through all 24 major and minor keys. Three variations of the melody were composed, each of which had distributed within it 96 test tones of the same pitch, for example, A2. Thus, the test tones would blend into some keys and pop out in others. Participants were not asked to detect or judge specific test tones; rather, they were asked to respond whenever they heard a note that they thought sounded wrong or out of place. This task enabled us to obtain subjective measures of key membership in a listening situation that approximated a natural musical context. The frequency of observed "wrong-note" responses across keys matched previous tonal hierarchy results obtained using judgments about discrete probes following short contexts. When the test tones were nondiatonic notes in the present context, they elicited a response, whereas when they occupied a prominent position in the tonal hierarchy they were not detected. Our findings could also be explained by the relative salience of the test pitch chroma in short-term memory, such that when the test tone belonged to a locally improbable pitch chroma it was more likely to elicit a response. Regardless of whether the local musical context is shaped primarily by "bottom-up" or "top-down" influences, our findings establish a method for estimating the relative salience of individual test events in a continuous melody.