Catherine M. Warrier
Music Perception (2008) 25 (5): 419–428. Published: 01 June 2008.
Abstract
Studying similarities and differences between speech and song provides an opportunity to examine music's role in human culture. Forty participants, divided into groups of musicians and nonmusicians, spoke and sang the lyrics of two familiar songs. The spectral structures of speech and song were analyzed using a statistical analysis of frequency ratios. Results showed that speech and song have similar spectral structures, with song showing more energy at frequency ratios associated with the 12-tone scale. This difference may be attributed to greater fundamental frequency variability in speech, and was not affected by musical experience. Higher levels of musical experience were associated with decreased energy at frequency ratios not corresponding to the 12-tone scale in both speech and song. Thus, musicians may invoke multisensory (auditory/vocal-motor) mechanisms to fine-tune their vocal production, aligning their speaking and singing voices more closely with their extensive music-listening experience.
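The abstract's central measure is whether a frequency ratio falls near an interval of the 12-tone (equal-tempered) scale, in which adjacent semitones differ by a factor of 2^(1/12). As an illustrative sketch (not the authors' actual analysis pipeline), one can quantify how far an arbitrary frequency ratio lies from the nearest equal-tempered interval, measured in cents (100 cents per semitone):

```python
import math

# Equal-tempered semitone ratios within one octave: 2**(k/12), k = 0..12.
SEMITONE_RATIOS = [2 ** (k / 12) for k in range(13)]

def cents(ratio):
    """Convert a frequency ratio to cents (100 cents = one semitone)."""
    return 1200 * math.log2(ratio)

def distance_to_scale(ratio):
    """Distance in cents from a frequency ratio to the nearest
    12-tone equal-tempered interval, after folding into one octave."""
    c = cents(ratio) % 1200          # fold into a single octave
    nearest = round(c / 100) * 100   # nearest semitone boundary, in cents
    return abs(c - nearest)

# A just perfect fifth (3:2) sits about 2 cents from the
# equal-tempered fifth, so it counts as "on scale".
print(round(distance_to_scale(3 / 2), 1))  # → 2.0
```

Under this kind of measure, spectra with energy concentrated at small `distance_to_scale` values would look "musical" in the sense the abstract describes, while the broader fundamental-frequency variability of speech would smear energy across off-scale ratios.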