Search results for Erika Skoe (1-2 of 2)
Journal Articles
Music Perception (2015) 32 (5): 445–459.
Published: 01 June 2015
Abstract
Acoustic periodicity is an important factor for discriminating consonant and dissonant intervals. While previous studies have found that the periodicity of musical intervals is temporally encoded by neural phase locking throughout the auditory system, how the nonlinearities of the auditory pathway influence the encoding of periodicity, and how this effect relates to sensory consonance, have been underexplored. By measuring human auditory brainstem responses (ABRs) to four diotically presented musical intervals with increasing degrees of dissonance, this study seeks to explicate how the subcortical auditory system transforms the neural representation of acoustic periodicity for consonant versus dissonant intervals. ABRs faithfully reflect neural activity in the brainstem synchronized to the stimulus while also capturing nonlinear aspects of auditory processing. Results show that for the most dissonant interval, which has a less periodic stimulus waveform than the most consonant interval, the aperiodicity of the stimulus is intensified in the subcortical response. The decreased periodicity of dissonant intervals is related to a larger number of nonlinearities (i.e., distortion products) in the response spectrum. Our findings suggest that the auditory system transforms the periodicity of dissonant intervals, resulting in consonant and dissonant intervals becoming more distinct in the neural code than if they were processed by a linear auditory system.
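The link between waveform periodicity and consonance described in this abstract can be illustrated with a simple autocorrelation measure. The sketch below is not the study's analysis pipeline: the dyad construction (harmonic complex tones at a 3:2 fifth versus a 45:32 tritone), the sample rate, and the peak-picking rule are all illustrative assumptions. It only shows one common way to quantify how much more periodic a consonant interval's waveform is than a dissonant one's.

```python
# Minimal periodicity sketch (illustrative assumptions, not the study's method).
import numpy as np

FS = 16_000      # sample rate in Hz (assumed)
DUR = 0.2        # stimulus duration in seconds (assumed)
F0 = 220.0       # lower fundamental of each dyad in Hz (assumed)

def complex_tone(f, t, n_harmonics=6):
    """Harmonic complex tone with 1/k amplitude rolloff."""
    return sum(np.sin(2 * np.pi * k * f * t) / k for k in range(1, n_harmonics + 1))

def dyad(ratio, fs=FS, dur=DUR, f0=F0):
    """Two simultaneous complex tones whose fundamentals form the given ratio."""
    t = np.arange(int(fs * dur)) / fs
    return complex_tone(f0, t) + complex_tone(f0 * ratio, t)

def periodicity(x, fs=FS, f0=F0):
    """Height of the largest normalized-autocorrelation peak beyond the main lobe."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0]                      # lag 0 normalized to 1
    start = int(fs / (2 * f0))       # skip the main lobe around lag 0
    return float(ac[start:].max())

consonant = dyad(3 / 2)     # perfect fifth: short common period, more periodic
dissonant = dyad(45 / 32)   # tritone: long common period, less periodic

print(f"fifth periodicity:   {periodicity(consonant):.3f}")
print(f"tritone periodicity: {periodicity(dissonant):.3f}")
```

With these toy parameters the fifth's autocorrelation peak should come out higher than the tritone's, mirroring the consonance ordering discussed in the abstract.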
Journal Articles
Music Perception (2008) 25 (5): 419–428.
Published: 01 June 2008
Abstract
Studying similarities and differences between speech and song provides an opportunity to examine music's role in human culture. Forty participants, divided into groups of musicians and nonmusicians, spoke and sang lyrics to two familiar songs. The spectral structures of speech and song were analyzed using a statistical analysis of frequency ratios. Results showed that speech and song have similar spectral structures, with song having more energy present at frequency ratios corresponding to those associated with the 12-tone scale. This difference may be attributed to greater fundamental frequency variability in speech, and was not affected by musical experience. Higher levels of musical experience were associated with decreased energy at frequency ratios not corresponding to the 12-tone scale in both speech and song. Thus, musicians may invoke multisensory (auditory/vocal-motor) mechanisms to fine-tune their vocal production to more closely align their speaking and singing voices according to their vast music listening experience.
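The frequency-ratio analysis this abstract mentions can be sketched in a few lines. This is not the authors' procedure: the F0 tracks, the pairwise-ratio tabulation, and the 20-cent tolerance around 12-tone equal-tempered scale degrees are assumptions made for illustration. The sketch asks what fraction of the frequency ratios in a vocal F0 track falls near 12-tone scale intervals.

```python
# Hedged sketch of a frequency-ratio tabulation (assumed parameters throughout).
import numpy as np

SEMITONE_CENTS = np.arange(12) * 100   # 12-TET scale degrees within one octave, in cents
TOLERANCE_CENTS = 20                   # assumed "close to the scale" window

def ratio_cents(f0_track):
    """All pairwise F0 ratios, folded into one octave and expressed in cents."""
    f0 = np.asarray(f0_track, dtype=float)
    ratios = f0[:, None] / f0[None, :]   # every ordered pair of F0 samples
    ratios = ratios[ratios > 1.0]        # keep one orientation, drop unisons
    return (1200 * np.log2(ratios)) % 1200

def fraction_near_scale(cents, tol=TOLERANCE_CENTS):
    """Fraction of ratios within `tol` cents of some 12-TET scale degree."""
    dist = np.abs(cents[:, None] - SEMITONE_CENTS[None, :])
    dist = np.minimum(dist, 1200 - dist)          # circular distance in the octave
    return float((dist.min(axis=1) <= tol).mean())

# Made-up example tracks: a "sung" track sitting on scale tones above A = 220 Hz
# and a "spoken" track with continuously varying F0.
sung = 220 * 2 ** (np.array([0, 2, 4, 5, 7, 9, 11, 12]) / 12)
spoken = 220 * 2 ** np.random.default_rng(0).uniform(0.0, 1.0, size=8)

print("sung   :", fraction_near_scale(ratio_cents(sung)))
print("spoken :", fraction_near_scale(ratio_cents(spoken)))
```

On these made-up inputs the sung track falls almost entirely within tolerance while the variable spoken track does not, loosely mirroring the qualitative song-versus-speech pattern the abstract reports.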