Attitudes to the relationship between music and deafness suffer from two related misconceptions: the enduring assumption that hearing is central to musical experience in conjunction with an extreme impression of deafness as total aural loss; and, more recently, the tendency to reduce deaf listening to tactility, as narratives about inborn sensory acuities among the deaf proliferate in the popular imaginary. Increasingly, deafness symbolizes a set of sensory polarities that obscure an intrinsic diversity of musical experiences from which musicology stands to gain, a diversity that encompasses members of Deaf culture and non-culturally deaf people alike, and that is signaled through the person-centered compound “d/Deaf.” My article builds on recent music scholarship on disability to offer a pluralistic understanding of music and deafness. Beginning with Scottish deaf percussionist Evelyn Glennie, I investigate a range of d/Deaf accounts of music, including those of Deaf sign language users, hearing aid wearers, and cochlear implant recipients, and of people with music-induced hearing loss. Deafness resists automatic entry points into music, unsettling any straightforward hierarchy of the senses. Deaf people reflect on the musical status of aurality in markedly different ways, just as they offer a complex understanding of vision and touch. For instance, vision is a highly versatile listening strategy and is often more reliable than vibration; touch is feasible because of its contextual dependence on visual cues, and is further tied to a set of material and environmental variables. Ultimately, I argue that d/Deaf listeners enrich customary notions of musical expertise: deafness belongs in musicology as a diverse set of experiences within the full spectrum of listening.
quit shoving silence down my throat
Christine Sun Kim1
In February 2003 internationally renowned Scottish percussionist Dame Evelyn Glennie gave a landmark TED Talk. Using live musical demonstration, she described her signature technique of “touching the sound,” a nuanced form of vibrational listening that engages the whole body as a “resonating chamber” by which to sense, distribute, and digest the sounds while simultaneously integrating visual cues, movement, and imagination. Swiftly moving barefoot about her percussion kit, Glennie detailed for her live audience how and where she felt the different pitches and sounds resonating in her body—the chest, the stomach, the tip of the pinkie finger. At one point she invited audience members to explore their physical connection to sound by using their hands to create the sounds and sensations associated with different meteorological phenomena. “Now, I don't mean just the sound; I mean really listen to that thunder within yourselves. And please try to create that through your clapping,” she instructed. Glennie's TED appeal was a striking one: sound is more than meets the ear; it is a multisensory experience.2
Through her performances and public outreach Glennie has established herself in the popular consciousness as an expert listener. “My aim, really, is to teach the world to listen. That's my only real aim in life” and “my role on this planet is to bring the power of sound,” she earnestly proclaims.3 Perhaps unexpectedly, Glennie is also profoundly deaf, in the sense that she cannot “hear” sound below ninety-one decibels. She thus challenges the “common misconception that deaf people live in a world of silence” by virtue of her renown as a professional musician.4 She has, however, long resisted self-identifying as “deaf” or “disabled,” in an effort to dissociate from the politics of deaf identity and the stereotypes disability begets, and ultimately to highlight the critical merit of her musical achievements over the seeming novelty of her deafness.5 Indeed, her reception typically espouses a romantic view of deaf perception with headlines such as “How Do We Listen When We're Unable to Hear?” and “Evelyn Glennie Feels the Sound of Silence,” as online viewers marvel at the “deaf lady who can hear more than you.”6 Too often the universalizing tone of her reception—one echoed in her TED Talk's definitive title “How to Truly Listen”—belies the intricacies of her labors and the uniqueness of her circumstances. The result is an overgeneralized view of music and deafness that resonates neither with the percussionist's claims nor with the experiences of other deaf listeners.
Evelyn Glennie is but one of countless expert listeners whom musicology has yet to fully reckon with. Until recently, musicologists had little knowledge of the musical experiences of d/Deaf people. Indeed, deafness has long served as the universally accepted disqualifying impediment to musical engagement and apprehension. Deafness is believed to be the menace that plagued our beloved Beethoven, and it endures as the ultimate symbol of his transcendent musical genius.7 But the notion that deafness precludes musical understanding is fundamentally a misconception, as music scholars Anabel Maler, Jeannette Jones, and Joseph Straus have recently argued, one that relies on an exclusively aural conception of sound and a disproportionately extreme impression of hearing loss.8 This groundbreaking scholarship reminds us that deaf people have long engaged with music through tactile, visual, and kinesthetic stimuli as an alternative to normal hearing. More recently, as narratives about the potential for inborn sensory acuities among the deaf proliferate in the cultural imaginary, there has been a tendency to reduce deaf listening to tactility and vibration; increasingly, deafness symbolizes a set of sensory polarities that stand up neither to the findings of empirical neuroscience research nor to the lived experiences of deafness. This article thus pushes beyond naturalized conceptions of sound, extreme constructions of hearing loss, and sensory ideals, drawing on first-person musical accounts from members of the culturally Deaf community, hearing aid wearers and cochlear implant recipients, and musicians and concertgoers with music-induced hearing loss. Their testimony amounts to a diverse record of musical experiences that fall squarely within the full spectrum of listening. 
Music research presumes normal intact hearing to be the bare minimum requirement for cultivating listening expertise, yet d/Deaf listeners challenge the primacy of aurality relative to the other senses, and ultimately reveal that hearing need not be a prerequisite for or the basis of listening expertise. In fact, musicology stands to gain from deafness.
Experiences of deafness vary in relation to a set of shifting audiological and cultural-linguistic parameters. Accordingly, throughout the article I adopt the practice of using an uppercase “D” to refer to those Deaf people who identify with the linguistic customs and minority standpoint of Deaf culture, a global community united by its use of sign languages.9 By contrast, I use a lowercase “d” to refer to those who are non-culturally deaf or hard of hearing; these people typically communicate using phonetic language, often with the support of a hearing aid or cochlear implant. The person-centered compound “d/Deaf” is intended to signal the full spectrum of auditory and sociocultural constructions of deafness. These myriad conceptions and experiences of deafness shape d/Deaf attitudes toward music and musical experience.
Scholars in Deaf studies have recently coined the term “Deaf Gain” to counter the negative connotations of the term “hearing loss”—that is, to supplant a construction of deafness rooted in biological deficit with one rooted in biocultural diversity, and ultimately to draw attention to the unique cultural, sensory, and creative gains that deafness and Deaf culture afford. Deaf Gain, write H-Dirksen L. Bauman and Joseph J. Murray, is an “ethical advance” bestowing “a greater appreciation of the deep value of human diversity rather than human monoculture. Freeing ourselves from the shackles of normalcy, we are now more able to see how Deaf Gain can change the ways in which we appreciate the gifts of all humans.”10 This paradigmatic shift in thinking about deafness as gain is of vital importance to musicology: d/Deaf listeners resist straightforward sensory hierarchies, reject normalizing listening paradigms, enrich our understanding of music's ontological contours, and transform prevailing notions of musical expertise.
One initial gain that permeates this article is methodological. Musicology is increasingly engaged with online source material, and this article draws new critical attention to the value of digital media in d/Deaf communities, demonstrating how we might use insights gleaned from online sources to augment critical understandings of music and deafness. I draw extensively on first-person testimony found in various public online venues, including the well-known Alldeaf discussion forum, celebrated d/Deaf blogs such as Deaf World as Eye See It and TERPATRON 9000, and mainstream platforms such as YouTube and Twitter. The perspectives contained therein are heterogeneous, candid, emboldened, and unencumbered by the set of interview prompts typical of more traditional ethnographic fieldwork. As disability studies scholar Beth Haller argues, the Internet provides disabled people with new platforms of self-representation as users take to blogs, discussion boards, and social media to share their unique perspectives in their own words and on their own terms, supplanting dominant narratives and ultimately reshaping the public discourse on disability.11
A Primer on Deafness and Disability
Before we return to Glennie and embark upon a full account of d/Deaf musical experiences, an introduction to key concepts in Deaf studies and disability studies is needed in order to contextualize this new area of musicological research. There is no typical experience of deafness, and deaf people do not form a single, homogeneous social group. Rather, d/Deaf people relate to “deafness” in vastly different ways: deafness entails a combination of individual audiological characteristics, linguistic preferences, identity politics, and in some cases technological constraints—what amount to an idiosyncratic set of variables that shape musical experiences in profound ways.
The sensory contours of deafness vary considerably both within and between d/Deaf individuals. “Hearing loss” exists on an audiological spectrum ranging from mild to profound; its type/cause, configuration, magnitude (degree), and age of onset vary from person to person.12 Hearing loss magnitude is expressed in decibels (dB)—the standard unit for describing the intensity of sound (loudness)—relative to average hearing sensitivity thresholds (“normal hearing”), which range from zero to twenty decibels, depending on the sound frequency (pitch) measured in hertz (Hz). “Profound deafness” is thus the standard term for denoting a hearing loss threshold of ninety-one decibels and above; profoundly deaf people generally cannot hear sounds below this volume. Profound deafness is considered the most extreme form of hearing loss and is contrasted with “mild” hearing loss, the threshold for which ranges from twenty-six to forty decibels.13 At any degree, hearing loss can differ between right and left ears, and hearing thresholds vary according to frequency: some people have greater sensitivity at low frequencies, others at high frequencies. Thus, d/Deaf people seldom live in a world of absolute aural silence. Many d/Deaf people, including those who are profoundly deaf, have residual hearing, which amounts to measurable, natural hearing enabling them to hear a certain degree of auditory stimuli. The significance and function of residual hearing are, however, necessarily individual. Vision, touch, and kinesthetic stimuli figure prominently in d/Deaf sensory experiences, and are informed by a host of materially and socially bound parameters including specific linguistic preferences, identity politics, technological constraints, and environmental dynamics. And d/Deaf people harness their sensory abilities in both conscious and intuitive ways that often defy straightforward explanation.
In North America historical tensions between oralist and manualist deaf pedagogies fostered two sharply divided approaches to deaf language use and identity, a pedagogical “war” that began in the mid- to late nineteenth century and persisted until the late twentieth century.14 Manualism is the practice of educating the deaf using sign language; it was integral to the genesis of Deaf culture in the West and has been central to its survival. Oralism, by contrast, is a method that teaches the exclusive use of speech and lip reading, an ideology that rose to prominence in America in the 1860s, coinciding with the height of the eugenics movement and all but superseding manualism.15 Proponents of oralism, most notably Alexander Graham Bell, sought to eradicate a then emerging subaltern deaf culture by suppressing sign language use with the aim of assimilating the deaf into hearing culture.16 As American Deaf historian Rebecca Edwards explains, oralist pedagogy dominated until the 1970s because of its social appeal: “the power of speech would free deaf people from the supposedly narrow constraints of the Deaf community.”17 In order to promote the exclusive use of the spoken vernacular, oralist educators punished unruly deaf students who signed, often resorting to violent tactics. Deaf signers understandably saw the oralist mission as nothing less than “an assault on their way of life.”18 (In Britain the history of deaf pedagogy has been even more strongly oriented toward oralism than in the United States since the inception of British oralist schools in the eighteenth century.)19
In light of these fraught historical circumstances, contemporary Deaf cultural movements in the West center first and foremost on a rejection of oralism: members of Deaf culture communicate primarily in sign language and subscribe to a unique set of identity politics.20 American Sign Language (ASL) is used throughout the United States and in anglophone Canada. Because the establishment of manualism in America is owed to French influence, ASL evolved through “language contact”: it combines the parent French Sign Language (“Langue des signes française,” or LSF) with preexisting local American signing systems.21 Quebec Sign Language, known in French as “Langue des signes québécoise” (LSQ), is used in francophone communities across Quebec, Ontario, and New Brunswick, and originated from the contact of ASL and LSF in French-speaking Canada throughout the nineteenth century.22 (That ASL and British Sign Language (BSL) differ considerably underscores how little correspondence there is between sign language and the spoken vernacular—in this case, English; sign languages have grammatical structures that are distinct from phonetic languages.)23 From the standpoint of Deaf culture, deafness is not a disability; rather, to be “Deaf” is to belong to a cultural-linguistic minority, a “visual variety of the human race.”24 By contrast, non-culturally deaf people usually communicate using oral speech and lip reading, often with the support of a hearing aid or cochlear implant.
In contrast to their Deaf counterparts, non-culturally deaf people may seek to “pass” as hearing by adapting to the norms of the hearing world; this often requires that they compensate for the limits of their assistive technologies by undertaking an intricate set of invisible labors—maintaining clear sight lines for lip reading, eye contact, body language, and so on—as they strive for discretion in their social interactions. (As hearing aid and cochlear implant design becomes increasingly inconspicuous, social discretion becomes even more viable for deaf users.)25 An emerging group of deaf cochlear implant recipients see themselves as cyborgs at the vanguard of post-humanism, a movement that media theorist Mara Mills dubs “deaf futurism.”26 People with mild to moderate hearing loss often prefer the neutral designation “hard of hearing” to the more pathologizing term “hearing-impaired” as a way of distinguishing themselves from those who are profoundly deaf and of dissociating from Deaf culture. Yet d/Deaf identification does not readily correspond to degree of hearing loss, but is rather reflective of sociocultural outlook.27 Just as contemporary Deaf culture encompasses all types and degrees of hearing loss, there is no consensus on the precise audiological parameters for the designations “deaf,” “hard of hearing,” and “hearing-impaired,” and they are often used interchangeably.28 Moreover, the formerly hard and fast correspondence between deaf language use and identity is beginning to evolve: an increasing number of deaf people sign and speak, opting for cultural-linguistic fluidity over the antagonism of generations past.29 But for many in the Deaf community language use remains inextricably bound to identity, where “voicing” is an act that affirms oralist ideals, thereby violating the core values of Deaf culture.30
To complicate matters, Deaf culture has a complex relationship with disability identity. The pathologizing construction of deafness that Deaf culture opposes is reflective of the biological determinism in medical discourse on disability more generally (i.e., the medical model of disability) that disability studies has long sought to counter. Accordingly, the cultural model of deafness is not unlike the social model of disability advanced by disability studies: disability is not a biologically inherent defect but rather a form of difference determined through social and environmental mechanisms, and indeed scholars and activists affirmatively claim disability as a valuable minority identity.31 However, in its persistent and explicit rhetorical disavowal of disability—i.e., deafness is not a disability—the cultural model of deafness reinscribes the stigma associated with disability and its inferiority relative to other positions of marginality.32 At the same time, many Deaf people depend on the legal protection conferred through such institutions as the Americans with Disabilities Act (and its international counterparts) for workplace accommodation, communications supports, and medical services, despite their insistence that the Deaf experience does not correspond to disability. Inconsistencies exist on both sides of the Deaf/disability divide, however. Although the social model of disability embraces a non-pathologizing view of deafness, the “disability rights movements and disability studies have been slow to recognize the ways in which hearing and speaking confer privilege” and the role of spoken languages in disability oppression more generally, as Susan Burch and Alison Kafer explain.33
Music, more than any other art form, reifies the intersections between deafness and disability by virtue of the aural: the inescapably sonic foundation of music in conjunction with the enduring misconception that deafness entails total aural loss positions deafness as music's ultimate disability. But it is not that the acoustical properties of music are intrinsically prohibitive for d/Deaf listeners. Rather, it is the value ascribed to aurality and its primacy in music discourse relative to the other senses that obscures and invalidates d/Deaf musical experiences. Aurality is but a naturalized mainstay of music; it by no means accounts for all musical experiences.
Yet as popular culture begins to entertain the prospect of deaf musicality, there is a tendency to overstate the sensory extremes of deafness. For instance, popular science reporting frequently reduces d/Deaf listening to vibration alone, succumbing to an overgeneralized, polarizing construction of deaf perception. Countless sensationalist headlines make inflated claims about the extent and ubiquity of inborn sensory adaptations among the deaf, often relying on oversimplified aural analogs: “Super Powers for the Blind and Deaf,” “Deaf People Hear Touch?,” “Brains of Deaf People Rewire to ‘Hear’ Music,” “Feel the Music: Deaf People Use ‘Mind's Ear’ to Process Vibrations,” “Deaf People ‘Develop Super-Vision to Compensate,’” and “Deaf People ‘Feel Touch’ with Hearing Part of Brain.”34 Similarly, the many multisensory intricacies of Glennie's practice receive limited attention in the mainstream portrayals of her listening in which vibration/touch enjoys pride of place, as evinced by seductive headlines such as “Evelyn Glennie Feels the Sound of Silence”35 and by the titles of her autobiography Good Vibrations (1990) and the critically acclaimed documentary Touch the Sound: A Sound Journey with Evelyn Glennie (2004). Increasingly, deafness symbolizes an alluring set of material polarities: the expectation is that deaf people experience music as total aural silence and pure tactile sensation. As the material limits of deafness assume new symbolic currency, “structures of power are funneled into sound ideals,” to borrow Nina Eidsheim's words.36
In actuality, “cross-modal plasticity,” the neural phenomenon to which the headlines above most often refer, has been found to be consistently evident only in instances of prelingual deafness—that is, deafness present before infant language acquisition. In these cases the auditory cortex assumes sensory processing tasks associated with other modalities, leading to “supranormal” performance of discrete visual and tactile functions. Thus, rather than cross-modal plasticity resulting in a “generalized overall improvement” to the intact sensory modalities, as popular science would have it, “only specific features of the replacement modality are affected.”37 The suggestion that deaf people compensate for hearing loss through extraordinary sensory “super powers” epitomizes the related “overcoming [disability] narrative” and the “supercrip” trope: it perpetuates the idea that deafness can and should be overcome through personal triumph and remarkable compensation in another area, transferring the burden of stigma from society to the disabled individual.38 In particular, the deaf supercrip masquerades as a more charitable conception of deafness by shifting the focus from sensory deficit and stigma to sensory gain and inborn talent, undermining the real social obstacles that d/Deaf people face.39 Neuroscientist Christina Karns has observed that the frequent portrayal of cross-modal plasticity as a “superhuman” ability “makes supergood fiction, but it would never work in real life.”40
Deaf people have long known that they perceive the world in unique ways, experiences that arguably elude scientific explanation. For instance, that deafness bestows heightened visual and tactile acuities is a notion central to the premise of Deaf Gain; but these acuities are, above all, nurtured through cultural practice.41 As Carol Padden and Tom Humphries write, “Deaf people's practices of ‘seeing’ are not necessarily natural or logical, in the sense that they have a heightened visual sense, but their ways of ‘seeing’ follow from a long history of interacting with the world in certain ways––in cultural ways.”42 And the same is true of touch, as Deaf studies scholar Donna Jo Napoli suggests in a recent essay.43 It is interesting to note that conceptions of “touch” in Deaf culture surpass straightforward attention to vibration. Sign language engages the whole of the somatosensory system, and Deaf social interactions are unique for their emphasis on touch: shoulder tapping is commonly used to get another person's attention and to initiate a face-to-face interaction; signers establish physical proximity with one another and use mutual touch to maintain connection while conversing; and touching is used to signal one's intention to take a turn in a signed conversation.44 In both popular science and music research the perpetual association of deafness with disability—as the physical condition of total aural loss and a set of sensory superpowers made possible by that loss—eclipses what are otherwise manifold physical, linguistic, and cultural expressions of deafness. In its recourse to these oversimplified sensory hierarchies, Evelyn Glennie's reception arguably furthers these conceptual barriers.
“The Deaf Percussionist Who Listens with Her Whole Body”
Glennie is above all a skilled professional musician with an impressive career spanning several decades. She has performed with numerous symphony orchestras, commissioned hundreds of new works for solo percussion, and received several accolades, including three Grammys and two appointments to the Order of the British Empire, as Officer (OBE, 1993) and as Dame Commander (DBE, 2007).46 Her recordings span classical and contemporary genres, and her love of improvisation has led to several memorable musical collaborations, most notably with Icelandic electronic vocalist Björk in 1996–97 and with experimental guitarist Fred Frith in 2007.47
Glennie has long maintained that her deafness is irrelevant to her performance—“something that bothers other people far more than it bothers me”—and that its appeal detracts from her musical achievements.48 Straus claims that particularly when a musician's disability is visible or public, as in Glennie's case, the performance of music and the performance of disability are intertwined: disability, as a stigmatized form of bodily difference, “engulfs” the musician's performance and reception.49 Furthermore, Blake Howe asserts that “the cultural scripts associated with both performances shape each other, so that it becomes difficult or even impossible to disentangle them: culturally marked, disability informs the music performance, while music performance in turn informs the disability.”50 In music, deafness is particularly susceptible to this treatment, since it otherwise symbolizes music's veritable absence. In this sense Glennie is the seeming embodiment of the platitude that music is a universal language, and indeed her example has been instrumentalized in this very way.51
Our “propensity to music,” argues the late neurologist Oliver Sacks in his wildly popular Musicophilia: Tales of Music and the Brain (2007), is an innate sensibility encoded in the human genome.52 As evidence of music's genetic preeminence Sacks highlights instances of ostensibly surprising musical competence in the face of disability, disease, and neurological injury, establishing continuity with his previous representations of disability. In his earlier An Anthropologist on Mars (1995) he writes, “Defects, disorders, diseases … can play a paradoxical role, by bringing out latent powers, developments, evolutions, forms of life, that might never be seen, or even be imaginable, in their absence.”53 According to this dubious logic, deafness uniquely reflects the extremes of music's genetic primacy. Sacks hypothesizes, “Even profoundly deaf people may have innate musicality. Deaf people often love music and are very responsive to rhythm, which they feel as vibration, not as sound. The acclaimed percussionist Evelyn Glennie has been profoundly deaf since the age of twelve.”54 Similarly, in their book on the psychology of music Andreas Lehmann, John Sloboda, and Robert Woody speculate, “If music is a universal capacity of the human brain, it is important to ask whether anything could ever go wrong with a brain to render it incapable of dealing with music. We know from some astonishing life histories (e.g., the percussionist Evelyn Glennie) that even profound deafness does not automatically exclude high levels of musical achievement.”55
This discourse reinforces the usual antithetical terms of music and deafness to further the universality of its hypothesis: “even profoundly deaf people,” “even profound deafness.” That these writers infer exclusively from Glennie's example with an emphasis on “acclaim” and “high levels of musical achievement” is telling: the percussionist has involuntarily set a daunting precedent. She here serves as a nonpareil symbol of music's universality in a manner akin to the more general appropriation of disabled narratives and images for inspirational fodder in popular culture, or what disability activists call “inspiration porn.”56 Ultimately, Glennie's reception and portrayals of deafness in popular science are bound by a common narrative thread: the expectation that deaf people compensate for hearing loss in extraordinary ways. This expectation reveals more about the stigma associated with disability, perennial fantasies about music's universality, and the mythical status of the brain and the senses than it does about the percussionist's musicianship.
Not surprisingly, the universalizing tone of Glennie's popular reception engenders mistrust in the minds of certain audience members. In particular, some d/Deaf people find Glennie's assertions about listening through touch unrealistic, if not alienating. One user on the popular Alldeaf online discussion forum sums up his impressions: “not exactly a simple declaration re the DEAF and music. Not every DEAF person can ‘feel’ music like a select few eg Evelyn Glennie.”57 He continues, “I have tested the theory of being able to ‘feel music by putting my hand on a loud speaker playing music’—vibrations only! I don't identify as music.”58 Some in the Deaf community even question whether Glennie is deaf, citing her improbable professional musical success and lack of communication supports (e.g., auditory assistive technology, live interpreter) as evidence. In a review of Glennie's performance with the Liverpool Philharmonic Orchestra for the disability arts festival DaDaFest in 2012, one British Sign Language user and member of the UK Deaf community clarified for his Deaf viewers, “There are those in the Deaf Community in the UK querying if Evelyn is really Deaf because she plays music. Perhaps they have never met her, but I saw in my own eyes that she needed communication support to fully understand the questions asked by the members of the audience.”59
Regrettably, the fact that Glennie often passes as hearing is a source of controversy among certain hearing listeners who mistakenly evaluate her spoken voice as a reflection of her degree of hearing loss. Just as journalists marvel at her impeccable speech and flawless lip reading skills, the comments sections of some of her YouTube videos feature accusations that “she seems to overcompensate with her enunciation,” or alternatively that “she articulates too well for a deaf person” and even “plays up her hearing disability for publicity.”60 Responding to an online thread inquiring whether the combination of “deafness and perfect speech” such as Glennie's is truly possible, audiologist Jeffrey Sirianni clarifies that, while the percussionist is definitely not a fraud (as some on the thread had suggested), deaf people in the spotlight “tend to be the exception rather than the rule.” He continues, “I have a problem with the public display of such exceptional cases. IMHO [in my humble opinion], they give parents of hearing-impaired children a false hope.”61 These reactions relate to a paradoxical set of anxieties concerning disability, stigma, and passing more generally: while visibly disabled people are denigrated for their failure to conform to the inconstant physical terms of “normalcy,” invisibly disabled people are often regarded with an air of suspicion for their failure to manifest their disabilities in clear physical terms.62 Indeed, the policing of Glennie's voice in relation to her hearing loss is fundamentally misguided. Even in this new era of deaf education, deaf voices are subject to ongoing scrutiny: speech language pathology favors oral communication (over sign language) and seeks to equip deaf patients with fluid, articulate speech.63 And although there are certain characteristic, highly stigmatized markers of “deaf speech,” such as poorly modulated speech and volume control, these are ultimately unreliable markers of deafness.64 In fact, inasmuch as it affords discretion, mastery of oral speech can provoke further scrutiny, despite the expectation that deaf people should aspire to this able-bodied ideal. This suspicion is notably acute in the absence of a hearing aid or cochlear implant or a set of characteristic physical markers, as Glennie's reception evinces.65
The percussionist's relationship to deafness is ultimately more complex than her reception allows. In particular, her renown as a deaf musician surpasses her interests in deafness and d/Deaf people. While she embraces a non-pathologizing view of hearing loss, she does not self-identify as “deaf,” preferring the term “deafened.” She has on occasion espoused an unsympathetic, even antagonistic view of Deaf culture, one that is arguably reflective of the predominance of oralism in her native Great Britain and its influence on prevailing conceptions of deafness.66 She frequently describes her relationship to deafness through a narrative of resilience and overcoming: “When I lost my hearing I chose to adapt and integrate myself into a mainstream school. From my perspective the choice was either to be pigeonholed as disabled or to find a way [to] open up a new career as the world's first full time solo percussionist.”67 In the face of increasing media attention, her oft-quoted “Hearing Essay” (1993) intervenes in her public narrative “to set the record straight and allow people to enjoy the experience of being entertained by an ever evolving musician rather than some freak or miracle of nature.”68 She elaborates on her misgivings: “I don't know very much about deafness. What's more, I'm not particularly interested. … In this essay I have tried to explain something which I find very difficult to explain. Even so, no one really understands how I do what I do. Please enjoy the music and forget the rest.”69 Glennie has long warned that those inspired by her achievements might have false hope. In an interview of 1994 she described having abandoned previous musical outreach with deaf children because parents often “expected miracles … they felt that if I could play an instrument, all deaf people should be able to play an instrument, and this is a fact. And of course, it can't happen.”70 In Parade magazine she clarified her self-concept: “I don't see myself as a deaf person. 
… Rather, I'm a hearing person who happened to lose her hearing. It occurred gradually, so I was able to adjust to each level. I couldn't make myself into a deaf person and say, ‘Oh, I can't do this’ and ‘I can't do that.’”71 Her insinuation that identifying as “deaf” signals a defeatist mentality is bound to anger some, though in recent years her attitude toward Deaf culture has softened as she has begun to learn sign language. In 2008 she noted, “I've only now thought about what sign language really means, what it is, and what I feel it can bring to my particular situation.”72 This news was applauded by the UK Deaf Community, as spokesperson for the Scottish Council on Deafness Nicola Noon explained: “People felt she had shunned the deaf community, but she will be congratulated for this.”73 Nevertheless, her most recent press materials omit mention of “deafness,” “deaf,” and “disability,” focusing instead on her status as a pioneering percussionist: she is neither a willing nor an altogether welcome ambassador for the d/Deaf. But she ultimately offers a complex picture of deaf identity, one that exposes the tensions between audiological and cultural constructions of deafness, and the uneasy partnership of deafness and disability.
Moreover, the universalizing aspirations of her mainstream reception belie what is in actuality a distinctive set of listening techniques. Glennie had an exceptional musical education by comparison with most d/Deaf children. Because she was already an accomplished young musician with perfect pitch when her hearing began to diminish at age eight, she had a strong musical frame of reference on which she could draw. She switched from piano to percussion in an effort to retain her existing musical skills, since the instruments’ low registral qualities and notably tactile dimensions offered considerable opportunities for cultivating new sensory awareness. Under the tutelage of her percussion instructor, Ron Forbes, Glennie began experimenting: playing barefoot and removing her then newly acquired hearing aids allowed her to more readily sense the vibrations in different parts of her body, forgoing dependence on her ears.74
To be sure, touch is chief among the senses in Glennie's listening paradigm. She famously writes that
Hearing is basically a specialized form of touch. Sound is simply vibrating air which the ear picks up and converts to electrical signals, which are then interpreted by the brain. The sense of hearing is not the only sense that can do this, touch can do this too. If you are standing by the road and a large truck goes by, do you hear or feel the vibration? The answer is both. With very low frequency vibration the ear starts becoming inefficient and the rest of the body's sense of touch starts to take over. For some reason we tend to make a distinction between hearing a sound and feeling a vibration, in reality they are the same thing. … Deafness does not mean that you can't hear, only that there is something wrong with the ears. Even someone who is totally deaf can still hear/feel sounds.75
But for Glennie touch is feasible because of its contextual dependence on the other senses. For instance, she initially had difficulty in tuning the timpani, but eventually came to associate incremental changes in the tautness of the drumhead with individual pitches, and also used her perfect pitch to ascertain the desired note. She explains, “the fact that I can hear the precise pitch of a note in my head and place it exactly in relation to other notes has been a tremendous advantage.”76 Vision also figures prominently in Glennie's listening process: “We can also see items move and vibrate. If I see a drum head or cymbal vibrate or even see the leaves of a tree moving in the wind then subconsciously my brain creates a corresponding sound.”77 Through the synchronization of visual cues with corresponding imagined sounds, the image and its movement thus serve as an index of sorts; the visual cue automatically triggers the “sound.” Finally, even as a profoundly deaf person Glennie still has a certain degree of residual hearing that, she explains, contributes to her perception of sound, a fact that is seldom acknowledged in public accounts of her deafness.78
Thus in Glennie's model “touch” also encompasses vision, movement, imagination, and sometimes even hearing, a multisensory endeavor that Straus dubs “deaf hearing”: “hearing as seeing, hearing as feeling, hearing as movement, hearing as silent, out-of-time contemplation—deaf hearing provides an alternative to normal hearing.”79 In fact, musicians and scholars have long sought to interrogate the primacy of the aural, drawing critical attention to the inescapable materiality of sound and the dynamic role of the senses therein in ways that resonate with Glennie's approach. For instance, sound studies scholar Steven Connor argues that in general “the senses communicate with each other in cooperations and conjugations that are complex, irregular, and multilateral,” what he terms “intersensoriality.”80 Like Glennie, he furthermore contends that hearing is touching: the skin—the primary mechanism of touch—envelops hearing and the other senses. And touch lingers in music: physical postures are imprinted in instruments, and sounds impress upon us as we imagine the physical coactivation of body and instrument: “we take [music] into us, hear it in the mode of producing it, in an instrumental coenesthesia.”81 Similarly, the late experimental composer Pauline Oliveros urges listeners to tune into their somatic experiences of sound as part of a larger exercise in cultivating mindfulness through listening, a meditative practice she calls “Deep Listening.” In one of her Sonic Meditations she instructs the performer to “Take a walk at night. 
Walk so silently that the bottoms of your feet become ears,” instructions that recall Glennie's frequent remarks about the body's capacity to be a resonating chamber.82 And in an effort to break with “music's naturalized cornerstones” and a priori definitions, particularly “the figure of sound” and the tendency to reduce “the thick event of music to a singular sensory mode, aurality,” Eidsheim posits listening (and singing) as “vibrational practice.”83 But for Eidsheim, as for Oliveros and Glennie, sensing sound is not limited to vibration: vibration is rather a conceptual vehicle for understanding music as the transfer of energy across time, space, and bodies, and for the relational and affective dynamics of musical experience. Lest the allure of vibration impose a set of normalizing theoretical constraints, Michele Friedner and Stefan Helmreich caution against idealizing vibration as a common sensory experience, explaining that “vibration is rather always already itself a kind of mediation. It may produce shared experience, but it does not therefore produce identical experience; even within ‘one’ individual, sense ratios and relations may shift and mix synesthetically. Phenomenologies of vibration are not singular.”84
Finally, much like Oliveros's Deep Listening philosophy, Glennie's paradigm is rooted in a conception of “listening” that surpasses sensory perception: its conceptual utility extends to social interactions and affective encounters, facilitating what Glennie calls “social cohesion.” As she claims to “teach the world to listen,” the percussionist stresses that “listening is about more than just hearing; it is about engaging, empowering, inspiring and creating bonds. True listening is a holistic act.”85 If her altruism strikes some as patronizing, her emphasis is nevertheless not on conforming to a set of normalizing material sensibilities, but rather on cultivating an openness to alternatives.
Ultimately, Glennie's discourse is as polarizing as it is instructive. It encapsulates the conceptual challenges that arise when deafness enters the realm of music: deep-seated misconceptions regarding its sensory extremes in relation to the prestige of aurality on the one hand and the increasing romance associated with vibration on the other, as well as a set of universalizing narratives that threaten to constrain its expression. By pushing beyond the romance of her mainstream appeal we begin to understand that Glennie's musicianship is by no means universal: she harnesses a set of idiosyncratic multisensory listening techniques that she has consciously developed over the course of several decades. “Touch” engages a network of coordinated sensory labors, and “listening” is not simply a physical act but an affective endeavor. Glennie's expertise combines existing sensory acuities such as perfect pitch, automatic compensations such as inferring sounds from an object's visible movement, and deliberate adaptations such as sensing pitch through differentiated touch. After many years of dedicated practice these categories give way to intuition, forming a process that is “very difficult to explain,” as Glennie herself affirms.
Listening beyond Sensory Ideals
My investigation extends to three groups of listeners: members of Deaf culture, non-culturally deaf listeners (particularly users of auditory assistive technologies), and an emerging group of musicians and concertgoers with music-induced hearing loss. At first blush these listeners could not be more different, both in terms of how they identify with deafness and disability and in the ways they understand and experience music. Their experiences present certain striking commonalities, however, which often correspond to existing albeit customarily undervalued dimensions of music itself (rather than to some typical experience of deafness). In particular, vision assumes new musical power in these accounts as it relates to the sense of touch; as a naturalized listening strategy inherent in the practice of score reading; or in the way visual-spatial cues and notated symbols figure as musical expression in the absence of aural and tactile stimuli. These three groups of listeners also highlight that musical experience is necessarily physically mediated, whether through technologies or across human bodies. Above all they provide further insight into what it means (and what it will mean) to truly listen beyond the limits of hearing, as “normal hearing” becomes an increasingly unstable audiological and social category.
There are a variety of attitudes toward music in Deaf culture, due in part to the lack of consensus among Deaf people as to how music relates to Deaf identity. Anabel Maler's work reveals that music figured prominently in nineteenth-century North American deaf pedagogy, both in the United States and in parts of Canada. Initially, oralist educators used music as a tool for assimilating deaf students into hearing culture, but in the second half of the century they became increasingly suspicious that deaf music making was mechanical and morally corrupt, turning as a result toward technologies that could facilitate more “normal” ways of musical engagement among their students.86 Indeed, Jonathan Sterne and Mara Mills have detailed the troubling if pivotal role played by deaf people in the development of modern sound reproduction technologies.87 Music was equally contentious among manualist educators, who wanted to use it to nurture non-aural means of listening among deaf students but were understandably anxious about the cultural links between music making and passing.88
Despite the fraught legacy of music in American d/Deaf history, there is a long-standing tradition of music making within the American Deaf community (and in other Deaf communities throughout the West) centered on the practice of song signing. In American Deaf culture, song signing is a form of musico-poetic expression that originates in the community's storytelling traditions; in Deaf storytelling and poetry, storytellers arrange signs aesthetically to follow a sort of “rhythmical cadence.”89 By extension, in signed renditions of musical songs the signer will supply ASL (or another sign language) alongside a recorded or live musical performance to communicate lyrics and musical features such as tempo, rhythm, and register, as Maler demonstrates in her analyses.90 The recent proliferation of signed performances of popular songs on YouTube by both d/Deaf and hearing signers has contributed to the genre's popularity within the international Deaf community. Televised singing competitions such as the 2015 Eurovision finals have featured live song signing, thrusting what was once an obscure cultural practice into the international limelight.91 In the hands of native signers, in particular, song signing performances exceed mere translation where the visual-spatial contours of ASL shed new light on the musical and poetic dimensions of the song, transgressing the conventional structural demarcation of verse and chorus.92 Sign language rappers such as Sean Forbes and Signmark, all-deaf bands such as Beethoven's Nightmare, and other musicians belonging to D-PAN (Deaf Professional Arts Network) also enjoy widespread popularity within the Deaf community. 
Jeannette Jones shows that these musicians imbue their performances with Deaf activism through the artful integration of sign language and music, promoting a distinct culturally Deaf mode of listening in which vibration, visual cues, and imagined hearing coalesce—what Jones terms “hearing Deafly.”93 She explains further that ASL distinguishes between hearing and visual modes of listening by placing the same Bent-3 handshape alongside the ears and eyes respectively (see Figure 1).94 Indeed, rapper Sean Forbes writes, “When I sign rap music, I try to follow the beat with my body. … I try to paint a picture with my hands. You really have to see me to get me.”95
Song signing and visual cues are an important musical device at Deaf raves, clubbing events organized by and for Deaf people at which music is played at notoriously high volumes.96 Musical tracks are typically selected for the prominence of their bass lines, while lighting is designed to showcase onstage performances by song signers, comics, and dancers, and also to ensure that dancers can communicate on the dance floor in sign language. Deaf clubber Ashton Phillip explains furthermore that “it would be hard for deaf people to have a good time without lighting.”97 Deaf DJ Troi “Chinaman” Lee echoes these sentiments: “We express visually and we love feeling the vibrations and vibes of the people.”98 Similarly, the psychedelic jam band the Grateful Dead has a long tradition of accommodating the unique listening preferences of their devoted d/Deaf fans—a special class of Deadheads called “Deafheads”—through live song signing in the famous live concert space known as the Deaf Zone. This is an area several meters from the stage where balloons, cups, streamers, and other handheld props are connected to speakers with strings so that Deafheads can engage with vibrational feedback. “Clear sight lines” are also established to highlight live song signing and sign language interpretation, to present close-up footage of the band to facilitate proper lip reading of song lyrics, and to allow for signing between listeners.99
Even in non-Deaf musical settings where song signing is not part of the performance tradition, sign language offers Deaf concertgoers a communicative advantage over their hearing counterparts. For instance, the long-standing tendency in metal and electronic subcultures toward “deafening volumes” at live shows in combination with a notable shift toward infrasound—that is, low frequency sounds below the threshold of human hearing perceived largely through vibration—has attracted an emerging class of self-identifying “Deaf” metalheads.100 Just as vibrotactile, “heady” listening is a defining element of these live performances, and indeed part of the subculture's ritual, it can render spoken communication ineffective, even nearly impossible. Deaf metalhead Sean Vriezen elaborates:
I have to admit that I enjoy being able to speak freely during a show in such a way that I don't interrupt what I'm listening to. If someone were to talk with me with their voice during a show I would be annoyed that they were trying to talk over the music. Using sign language to communicate allows me to take in everything at the same time; I am able to talk about the music or the band without taking away from the show. …
… [S]igning is great at distances, with loud background noise, concerts, clubs, through windows, underwater. …
… When the ambient noise is as loud as it is, the inability to communicate aurally renders us all “deaf” anyhow.101
Vriezen highlights the potential for music to be disabling among hearing listeners, particularly in those frequent and prolonged moments when it effectively drowns out phonetic speech: hearing loss, in that sense, is a relative condition, just as disability is socially and environmentally bound. Under these circumstances visual cues and Deaf linguistic codes transcend the sensory limitations that immersive sound otherwise imposes. At “earsplitting” volumes, Deaf listeners are at a considerable advantage.
In choreographed dancing, vision is typically a more reliable and consistent modality than touch: vibrational feedback is variable and inconsistent when there is significant movement involved. Directors of the legendary Dance Company at the all-deaf Gallaudet University note that even with a state-of-the-art heavy bass sound system, movement and acoustical properties naturally obstruct the perception of musical vibrations: “Many people have the misconception that deaf people ‘hear’ by feeling vibrations through the floor. How is this possible, especially if a person is moving and jumping so that they do not keep in continuous contact with the floor? What if the floor is not wood, but solid concrete?”102 Gallaudet Company dancers rely primarily on sign counts in order to ascertain rhythmic patterns and master individual dance steps, using residual hearing and underfoot vibrations to a lesser degree.103 Deaf Dancing with the Stars sensation Nyle DiMarco notes that when it comes to dancing he experiences music by watching his dance partner Peta Murgatroyd: “I'm actually very visual. … Peta brings out the performance. She's a performer. I feel like I can see the music and can see how the character of the music actually flows. For me, that's music to my eyes.”104 For DiMarco, like the Gallaudet Company dancers, vibration is a less practical and potentially unsettling option: “One time Peta tried to turn the music up loud enough for me to feel it, but when I felt it and we tried to dance to it, it threw the whole routine off. … I'm used to not being able to hear, so for me it was contradictory to my world.”105 More generally, Deaf sound artist Christine Sun Kim explains that amplified vibrational feedback, while a seemingly useful device for d/Deaf listeners, can be physically and emotionally disorienting. 
Her visual art piece Feedback Aftermath (2012) was inspired by what she suspects was the post-traumatic stress disorder she incurred after prolonged exposure to loud audio feedback in the studio, a “disconcerting” experience that caused her extreme unease. “Most hearing people don't experience that. You have warning signals. If your ears hurt, you leave the room, you stop, you step away,” she explains. “I don't have those signals, so I went past all warnings and experienced feedback to the full degree.”106
In Kim's performance art, visual-spatial cues alone can constitute music. Indeed, as Glennie teaches the world to listen, Kim is deliberately “unlearning sound etiquette”—the seldom acknowledged social conventions governing our human production of and interaction with sound, hearing norms Kim painstakingly learned and internalized over time. She elaborates: “I know exactly how to behave in certain situations, such as being super quiet when someone's asleep in the house, or how you're expected to laugh aloud at stand-up comedy shows. … I'm trying to unlearn what I've been taught by others and trying to find my own definition of both sound and silence.”107 As Kim unlearns sound etiquette she reveals that listening is always a multisensory endeavor, though sound is not a prerequisite for music. Music can be an exclusively visual-spatial experience. For instance, her Face Opera II (2013) features a chorus of Deaf performers who “sing” using a series of coordinated silent ASL facial expressions.108 Kim observes that, in the absence of sonic cues, facial expressions play a defining musical role in operatic singing, a fact that an exclusive focus on music as sound overlooks.109
As an extension of her interest in the visual aspects of music making, Kim's artworks interrogate the primacy of the score, creatively exploring the discrepancies between the material dimensions of music and the visual terms of its representation, as in a recent series of whimsical one-page hand-drawn scores. Without staff, clef, key signature, time signature, or bar lines, Muffled Club Music (2016) comprises three successive slurred groups of single quarter notes more or less equally spaced; it includes movement between low and mid-range quarter notes, and several sudden registral leaps (see Figure 2). Kim tweeted that the score reminded her of the way closed-captioned descriptions of music in films are oftentimes lazily executed, hence the score's programmatic title. As a nondescript television caption, “muffled club music” either assumes that “hearing-impaired” viewers have little conception of music, or takes for granted that they have a general sense of what this type of music might sound like. But in Kim's score the written cue “muffled” has physical implications for the sound's origin, establishing a distance between the listener and the dance floor that musical notation alone fails to convey. “Muffled” is all the more significant because of its relation to stereotypical impressions of hearing loss—the idea that for deaf people sound is at best muffled, muted, stifled, and so forth. The percussive quality of Kim's score, its steady pulse, and the rise and fall of the (bass) line are reflective of the rhythmic quality of electronic dance music; the slurs arguably muffle what is an otherwise detached series of pitched beats. Crucially, the music is muffled not by virtue of Kim's deafness, but because of her physical distance from the music's source. 
Thus she notes that, as a set of visual symbols, musical notation ultimately belies the sonic contours of music, and that written language poses a similar problem when it comes to relaying the visual-spatial dimensions of sign language: “It's impossible to entirely capture a [musical] note on paper, which is very much like ASL. [Music and ASL] both have much more in common than you might think.”110
In her hand-drawn piece How to Measure Loudness (2014) Kim harnesses the inherent limits of dynamic markings to turn “hearing loss” on its head (see Figure 3). The transcript reads as a personalized decibel chart with accompanying example sounds, but to signal degrees of loudness Kim replaces the usual unit of measurement (dB) with a recognizable musical symbol, “f.” Among the many written cues are “fffffffff hot sweaty concert,” “fff subway announcement,” “f silence into speech,” and “mf sleep,” a comical take on the vague example sounds supplied in more conventional decibel charts. If “dB” is the sign for an absolute unit used to measure the intensity of sound in objective terms, “f” is merely a general visual musical cue denoting loudness; it does not correspond to an objective measure. Dynamics and incremental changes in dynamics—for example, the transition from mf to f—are relative, arbitrary, and known primarily through subjective frames of reference: in music, dynamics are specific to the player, her instrument, and the dynamic trajectory of a given score. By extension, Kim signals that we gauge loudness through intimate physical sensations and states, social interactions, and the sounds of technologies and environments.111 Ultimately her examples reverse the usual terms of hearing loss from deficit to gain. That “95 decibels and above”—the transcript's only decibel reference—corresponds to eleven “f”s, an absurdly loud dynamic marking, is of vital importance given that profoundly deaf people are believed not to hear anything below ninety decibels. By equating the lower threshold of her hearing with the upper limits of loudness measured in “f”s, Kim signals just how profound her conception of sound truly is. Finally, the prominence of the voice (i.e., “voice box”) in the transcript calls attention to the status of voice in Deaf culture. 
The apogee of the transcript reads, “voice lost in oblivion,” alluding perhaps to the literal silencing effects of loud sound as the spoken voice is rendered inaudible, echoing Vriezen's observations. But the example also signals the preeminence of the spoken voice in symbolic constructions of subjecthood and audist ideology, an overwhelming clamor that threatens to drown out and silence those who do not speak in normative terms.112 Kim's message is powerfully amplified through the transcript's overt conical shape, an unmistakable visual reference to certain cultural tokens of aural power: the contours of a gramophone horn, a loudspeaker, the shape of the human ear.
Whether through song signing, live performances, choreographed dancing, or contemporary art, members of Deaf culture deepen our awareness of music's ontological contours. And from these varied musical accounts stem several larger points. First, touch often depends on vision to round out musical experience. Whereas visual cues are adaptable and relatively constant, vibration is bound by the material constraints of objects, environments, and amplification technologies, and viable only insofar as the precise musical context allows. At concerts and dance events with onstage song signing, the visual-spatial dynamics of ASL and other nonlinguistic visual cues are significant for practical reasons: they tie vibrational listening to a poetic gloss, endowing otherwise variable sensations with concrete meaning. In this way, song signing models the contextual interdependence of vision and touch in musical experience. In certain musical contexts, visual cues assume considerable authority, whether as the focal point of song signing, as the guiding rhythmic and coordinating strategy in choreographed dance, or as the core of musical expression and experience (as in performance art). Where the musical score is concerned, however, visual cues are a limited representational strategy: notation fails to fully capture the acoustical and spatial parameters of music. At the same time, the relative value of dynamic markings perhaps more readily corresponds to the subjective experience of sound's “loudness” than to objective measurements in decibels. Kim's musical output thus brings new meaning to composition and the practice of score analysis.
Despite these rich musical experiences among Deaf listeners, there are some in the Deaf community who feel ambivalent about music, particularly if it interferes with their cultural values. Members of the Alldeaf forum have debated the merits of music and song signing at length. Some express apprehension over the proliferation of unskilled hearing song signers on YouTube, who, they feel, are appropriating tokens of Deaf culture in order to harness its novel appeal.113 The viral attention given to sign language interpretation in televised singing competitions such as Eurovision in the hearing world has likewise proved controversial among members of the Deaf community, including a number of sign language interpreters. Some worry that the preoccupation with the spectacle of sign language as choreography in addition to the celebrity of certain hearing interpreters eclipses linguistic meaning and the artistic output of native signers.114
Finally, for some Deaf people, music is fundamentally at odds with the primacy of vision in Deaf culture. The famous 1910 proclamation of George Veditz, pioneering leader of the National Association of the Deaf (NAD), that deaf people are “first, last, and all the time the people of the eye” remains a cornerstone of contemporary Deaf identity.115 The comments of Deaf blogger J. Parrish Lewis speak to some of these complexities:
It almost seems dangerous to say that I love music, because not everyone will understand and I will be judged. While the majority of the Deaf Community will say they don't enjoy music at all, there are plenty of us that do love music. Even when we cannot hear it. …
In the Deaf Community, we usually don't talk about it. Usually it's got to be paired with an ASL video signing the song before most will express an appreciation for it, and it's usually for the ASL. This is not wrong, and I don't at all have a problem with anyone appreciating only the ASL half of the song. Everyone's got their likes and dislikes.116
In this sense, music is a distinctive conceptual battleground for contemporary Deaf identity politics. Willing listeners explore new orientations toward Deaf culture as they harness the listening techniques and expressive strategies afforded by their cultural minority standpoint. As song signing enters popular culture, however, certain members of the Deaf community are understandably apprehensive: song signing must be practiced by and for the Deaf and not co-opted for hearing entertainment. For others, the deeply ingrained associations between music and aurality are automatically prohibitive on account of Deaf cultural mores. Finally, Deaf listeners such as Vriezen also highlight the relative terms of hearing loss and its associated disabilities: amplified sound can disable hearing listeners, while Deaf listeners are equipped to communicate effectively above and beyond “deafening” volumes. But for other Deaf listeners, musical interest and enjoyment are not necessarily a reflection of Deaf identity. As Deaf blogger Benjamin Simpson explains, “Like all pleasures in life, some are enjoyed more by some individuals than others. Not all hearing individuals love music and the same applies in the Deaf community.”117
Among the second group of listeners, non-culturally deaf listeners, musical experiences are equally diverse. Like their Deaf counterparts, non-culturally deaf listeners often harness visual cues in compelling ways. Barbara Stenross, author of Missed Connections: Hard of Hearing in a Hearing World, shares the story of her self-identifying hard-of-hearing friend Karen, who admits that while she finds music difficult to appreciate, closed-captioning for televised vocal performances and even the mouthing of song lyrics make for a more meaningful musical experience. Stenross quotes from a conversation with Karen, who describes the intimacies of mouthing: “In high school, I had a girlfriend that did a lot of singing to me. What I mean by that is, she didn't actually sing, she would mouth the words on the radio to me in the car. I'd ride along and she'd mouth the songs for me.”118 For some late-deafened musicians who are literate in musical notation, score reading can trigger memories of timbre, pitch, and the physical sensations associated with playing different instruments. Profoundly deaf musician and hearing aid wearer Paul Whittaker explains that “music means nothing at all to me unless I see a score: I then read that and know in my head exactly what that music ‘sounds’ like.”119 Whereas Whittaker describes this process as one of unconscious adaptation, fellow deaf musician Nigel Osborne explains that score reading for memory retrieval is a technique he painstakingly taught himself: “It took me quite a long time to train myself to do that. What I'm doing is I'm drawing on my memories and knowledge of the sounds and colours of instruments and their different ranges, as well as what the pitches sound like and what durations, how long they last, and I'm putting that all together in my head.”120
Score reading is bound to be a type of “listening” with which many musicologists can identify. The score has long enjoyed aesthetic prominence in our discipline; at its most basic, it is a set of visual codes and instructions for physically realizing an organized set of sounds. To a trained musician, the score can silently convey specific sounds and material sensations. In his elaboration on Edward T. Cone's discussion of score reading, Fred Everett Maus writes, “experienced score-readers do not just look at visual symbols; we use them as a starting point for remembering and imagining sound. … [A] performer has the task of bringing musical events into being, and a score-reader does this too, at least in imagination.”121 Whittaker and Osborne thus draw new attention to what is otherwise a naturalized component of our listening expertise.
For certain non-culturally deaf people, listening is often technologically mediated through auditory assistive technologies—in effect, prostheses. Indeed, hearing aid wearers and cochlear implant recipients contend with a unique set of variables when engaging with music. Hearing aid type (analog versus digital), make, model, and programming can dramatically influence musical perception and enjoyment. Hearing aids and cochlear implants are designed chiefly to facilitate the perception of speech and verbal communication. Since the inception of digital hearing aid technology in the 1990s, new hearing aids have typically used a compression technique (wide dynamic range compression, or WDRC) to boost speech sounds, adjusting the speech signal input range by automatically applying more gain to quieter sounds and less gain to louder sounds. Because music has a significantly larger dynamic and frequency range than speech, digital hearing aids are often ill equipped to process musical input, sometimes causing pitch distortion, noise cancellation, and unpleasant frequency feedback for the wearer.122 These effects are likely to be particularly acute when wearers of digital hearing aids participate in situations of interactive music making such as rehearsals, where different musical frequencies mix sporadically with speech. By contrast, pre-1990 analog-style hearing aids have a wider frequency range and use linear amplification (instead of compression), an approach that many longtime hearing aid wearers believe responds more effectively to the unique acoustical properties of musical signals than that of the newer digital-style hearing aids.123 Whittaker notes that transitioning from his twenty-year-old analog aids to a newer digital model was a physically and socially disorienting experience, since the compression on the new device rendered musical sounds tinny.
He explains, “playing the piano and organ was so unpleasant, aurally, whilst I was simply unable to hear my choir properly and had to rely on them telling me if they were right or not.”124 In many cases, the technological and physiological challenges of managing the sensory experience of music prove cumbersome and overwhelming for the hearing aid wearer. Audiologists Robert Fulford, Jane Ginsborg, and Alinka Greasley write that, in the future, “the challenge for manufacturers and digital signal processing engineers will be to develop technologies that improve music listening experiences whilst retaining and prioritising the amplification of human speech.”125
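The level-dependent gain logic described above can be sketched in a few lines. This is a schematic illustration only, with invented knee, ratio, and gain values rather than any manufacturer's actual WDRC algorithm; it shows why speech's modest dynamic range passes through comfortably while music's wider swings are flattened.

```python
def wdrc_gain_db(input_level_db, knee_db=50.0, ratio=3.0,
                 base_gain_db=30.0, max_output_db=100.0):
    """Schematic wide dynamic range compression (WDRC) gain curve.

    Below the compression knee a fixed gain is applied; above it, each
    additional input decibel yields only 1/ratio output decibels, so
    quieter sounds receive more gain than louder ones.
    """
    if input_level_db <= knee_db:
        gain = base_gain_db
    else:
        gain = base_gain_db - (input_level_db - knee_db) * (1.0 - 1.0 / ratio)
    # Real devices also limit output peaks; cap the result accordingly.
    return min(gain, max_output_db - input_level_db)
```

With these hypothetical settings, a 50-decibel input span (45 to 95 dB, plausible for live music) emerges spanning only about 20 decibels: quiet detail is lifted, but the overall dynamic contour is flattened, consistent with wearers' reports of compressed, "tinny" musical sound.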
Audiologists are likewise engaged in studies to help improve the perception of music for cochlear implant recipients.126 Engineer Les Atlas explains that because “there is no easy way to encode pitch as an electrical stimulation pattern,” current cochlear implant models are poorly equipped to process music.127 One of Stenross's hard-of-hearing informants notes that she was musically active “before going deaf” but that her cochlear implant had drastically altered her perception of music: “even though I have a CI [cochlear implant] and can communicate beautifully, music is still garbage to me.”128 As Mara Mills suggests, cochlear implants necessarily inscribe the audiological abilities of deaf listeners, a characterization that extends to musical enjoyment to a certain degree.129 Cochlear implant recipient Michael Chorost has written extensively about his own musical experiences in relation to improvements to cochlear implant hardware/software design over the last two decades. He explains that ultimately the “variations between user experiences present real perplexities for researchers who want to develop better software. The experience of music is inevitably subjective.”130 For non-culturally deaf people who use auditory technologies, then, not only is “hearing” technologically mediated, but musical experiences rooted in hearing depend on the capacities and limits of the prosthetic device, the compatibility between device and user, and the unique musical preferences of the user. “Hearing” music through hearing aids or a cochlear implant remains a precarious endeavor.
This article's final group of listeners approach deafness in terms distinct from those of other d/Deaf listeners: as formerly “hearing” people they encounter deafness as a result of their voluntary musical activities. Late-deafened musicians and listeners face considerable physical and social obstacles as they come to terms with the everyday experiences of hearing loss as a disability in a culture—music—in which the aural reigns supreme. Indeed, professional musicians and regular concertgoers of all stripes are at high risk of developing different types of hearing impairment, including tinnitus, hyperacusis (an acoustic shock injury that results in an extreme sensitivity to sounds), and diplacusis (the perception of different pitches or timings in each ear).131 Sterne characterizes music-induced hearing damage as an extension of what he calls “audile scarification”—that is, “the participation in the everyday urban life of advanced capitalism.” He elaborates:
[Audile scarification] is both a form of inscription on the body, and a mode of compliance. To participate in a loud music performance, to subject oneself to the roar of an airplane engine or bathroom air dryer, to attend a sporting event. All of these practices ask something of their attendees’ bodies; they mark them. To submit oneself to an event like this is to consent to a certain potential for audile scarification.132
More specifically, George McKay has written about the prevalence of noise-induced hearing loss among heavy metal and heavy rock musicians and concertgoers, as well as among regular earbud users, which he frames as “situations in which popular music can function as a disabling culture.”133 He argues that because sustained volumes of over 120 decibels (in live rock shows) have long been industry standard, hearing loss is inevitable. Further, hearing loss is consistently understood as part of the wear and tear and hypermasculine grit of heavy rock and heavy metal subcultures. Bands such as the Who, Slade, and Kiss framed physical tolerance for their deafening volumes as part of their music's joint pleasure/pain imperative; physical intolerance, by comparison, was assumed to reflect primarily on old age and a general lack of hipness.134 Like Sterne, McKay asserts that this listening is often voluntary, and that it is likewise physically demanding. Perhaps as an outgrowth of this long-standing romanticization, there is increasing fascination in electronic music subcultures with the sensory extremes of hearing loss induced by high volumes. For instance, following a recent music festival performance in Toronto, Stephen O'Malley, frontman of the notoriously loud drone metal band Sunn O))), tweeted to followers, “Deaf becomes you,” not necessarily as an expression of solidarity with potential deaf fans, but as a provocation: deafness is a material condition that hearing listeners can embody through the band's music, both temporarily, at live shows, and permanently, as the long-term progressive effects set in.135
Personal musical narratives on hearing loss can contradict the fantasies of musical subcultures, however. For instance, in 1999 Princeton musicologist Peter Jeffery made headlines when he sued members of the alternative rock band the Smashing Pumpkins, earplug manufacturer North Protects, the state of Connecticut, and the New Haven Coliseum after he developed what he alleged was chronic tinnitus from attending one of the band's live shows with his twelve-year-old son in 1997.136 Willing listeners, by contrast, might repeatedly subject their ears to deafening volumes for the sake of personal enjoyment and/or to achieve a sense of communal belonging, but privately suffer the consequences, framing their hearing loss in less romantic terms than those supplied by the prevailing generic discourse. In the EDM (electronic dance music) scene it is becoming increasingly socially acceptable for concertgoers to wear earplugs at live shows as a way of safeguarding against hearing loss and damage. For instance, in a recent article in Magnetic Magazine promoting special concertgoing ear filters, one writer and EDM enthusiast expressed a desire to protect his or her hearing without compromising musical enjoyment or the scene's penchant for loud volumes:
I expect live music, and DJ shows to be loud, but it's gotten to a point that the ear ringing has become fairly intense after these concerts. I like my hearing as I'm sure you do, and continued exposure to these types of high decibels has a bad ending for all of us who don't protect ourselves; we lose our hearing slowly but surely. It's easy to get caught up in the music and just say to yourself, “next time I'll wear earplugs, this one show won't hurt me …” The question is, how many times have you done that? Noise-induced hearing damage is very real and something you need to pay attention to, especially in EDM culture.137
Music-induced hearing loss is also ubiquitous in the world of classical music. In contrast to the hard-core mentality of rock and heavy metal subcultures, there is little romance associated with hearing loss in classical music aside from Beethoven's case. (Sterne notes furthermore that this “high/low culture binary often works in reverse when it comes to hearing protection.”)138 In April 2016 the well-known British violist Chris Goldscheider went public with his lawsuit against the Royal Opera House. The violist claims that by seating him directly in front of the brass section in a 2012 staging of Wagner's Die Walküre, the orchestra caused “his hearing [to be] irreversibly damaged.” Goldscheider explains that sound frequently reached 137 decibels, producing what the court documents characterize as “an immediate and permanent traumatic threshold shift.”139 Janet Horvath, a former professional cellist with hyperacusis, relates that as rehearsals and performances suddenly became physically intolerable her “own sense of identity crumbled. It was excruciating that what I loved so much could bring me so much pain.”140 Similarly, longtime composer Michael Berkeley writes about coming to terms with what he hoped would be temporary hearing loss:
I cling to the view that my condition will improve. There has been an increase in volume, particularly with speech, but not so much in the hearing of music—which continues to sound ugly and disparate. Catching a piano piece on the radio the other day I asked: “What on earth is this? It sounds like Ligeti crossed with Nancarrow.” It turned out to be Schumann. Were I to be facing a lifetime of this, I would be in despair.141
The enduring stigma associated with hearing aids regrettably outweighs their audiological benefits for some professional musicians, for whom discretion is as much of a priority as sound amplification: “You don't want to turn up to work and find 80 or so musicians, your colleagues in the pit and you turn up with one of these great old NHS [National Health Service] things—you know, there is a stigma attached to that.”142 In this sense, the fiercely competitive dynamics of classical music can arguably perpetuate a culture of shame surrounding hearing loss, making the visible physical disclosure that comes with wearing hearing aids seem disadvantageous. And the cost of smaller, more discreet models can often be prohibitive, particularly where health insurance coverage for hearing aids and cochlear implants is already limited and eligibility conditional.
Resources on music-induced hearing loss and hearing damage for unionized professional musicians and music industry employees, whether in a popular or classical milieu, vary from union to union. The Musicians’ Union (MU) in the United Kingdom actively promotes awareness of hearing loss among its members, and has a robust set of online resources including strategies for safeguarding against hearing damage; literature on hearing self-surveillance, types of hearing loss, and claims to deafness; and comprehensive information on employee rights in relation to noise regulations.143 Crucially, it also holds employers accountable to Sound Advice, a set of music-industry-specific noise compliance guidelines written by a working group that includes members of the BBC Symphony Orchestra and the Royal Opera House.144 Employer compliance measures include assessing risks from noise; taking action to reduce noise exposure that exceeds legal limits; supplying employees with adequate training; and equipping players with musicians’ earplugs when noise levels exceed specific limits and action categories. The BBC's “Musicians’ Guide to Noise and Hearing” (2011), a guide that “aims to facilitate dialogue and empower all musicians and managers” in relation to noise regulations, recommends that employers provide musicians and stage managers with acoustic screens and treatments when necessary, and allow sufficient acoustical rest periods.145 By contrast, the American Federation of Musicians (AFM) of the United States and Canada—a union primarily made up of classical musicians with a large Symphonic Department—offers minimal online resources on hearing loss.146 In the cases of both the MU and the AFM, however, official union policy on hearing loss does not necessarily correspond to local institutional values, as Goldscheider's lawsuit against the Royal Opera House attests. Player status, seniority, and contract type would likewise influence the ways players choose to manage and disclose their hearing loss.
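The exposure limits that Sound Advice and the BBC guide operationalize rest on a standard calculation: a player's daily personal noise exposure normalizes the measured sound level to an eight-hour working day, and the result is compared against the regulations' action values. A minimal sketch follows, using the published action values of the UK Control of Noise at Work Regulations 2005; the rehearsal figures in the example are hypothetical.

```python
import math

# Action values from the UK Control of Noise at Work Regulations 2005,
# the framework behind the Sound Advice guidance discussed above.
LOWER_ACTION_VALUE_DBA = 80.0  # information, training, earplugs on request
UPPER_ACTION_VALUE_DBA = 85.0  # hearing protection and noise control required

def daily_exposure_db(level_dba, hours):
    """Daily personal noise exposure (L_EP,d): the equivalent continuous
    sound level normalized to an 8-hour working day."""
    return level_dba + 10.0 * math.log10(hours / 8.0)

# Hypothetical example: a three-hour orchestral rehearsal averaging 90 dB(A).
exposure = daily_exposure_db(90.0, 3.0)           # about 85.7 dB(A)
over_upper = exposure > UPPER_ACTION_VALUE_DBA    # True: protection required
```

Even a session well short of a full working day can cross the upper action value, which is why the compliance measures listed above tie earplug provision to exposure time as well as to noise levels.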
This emerging discourse on music-induced hearing loss uniquely models the tensions between injury and disability in the context of music performance. Whereas there is a long-established discourse within classical music on the prevention and treatment of repetitive strain injury, literature on music-induced hearing loss remains conspicuously absent. For instance, with the exception of tinnitus, music-induced hearing loss is not among the conditions customarily addressed by the Alexander Technique, perhaps the best-known therapeutic method for musicians.147 This discrepancy reflects the association of repetitive strain injury with recovery, as against the assumption that hearing loss is permanent and progressive. (In actuality repetitive strain injuries are typically chronic and debilitating.) And yet hearing loss does not mean the end of music, as attested by the motto of the Association of Adult Musicians with Hearing Loss: “proving the loss of hearing does not mean the loss of music.”148 More generally, Sterne's “audile scarification” draws a connection between hearing loss incurred through the sonic mechanisms of capitalism and trauma: sonic experiences leave a lasting imprint on the body's physical (and psychological) contours. But this physical imprint also inscribes new possibility, as the accounts of d/Deaf listeners demonstrate. As hearing loss becomes increasingly common among musicians and concertgoers we would do well to accommodate it as a hearing difference and adjust our cultural norms accordingly, as Sterne suggests, rather than treating it as a permanent and irrevocable disability. Above all, listeners with music-induced hearing loss demonstrate that “normal hearing” is a precarious condition in more ways than one. Normal hearing is physically temperamental, in that our ears are tender, sensitive organs. They are at the mercy of our sonic environments, our recreational activities, our physical well-being, and our age. In these ways we are always susceptible to audile scarification: “normal” hearing is thus an unstable audiological as well as social category.
Conclusion: Musicology Gains from Deafness
Current popular discourse on deafness reinforces a long tradition of making assumptions about deaf people: deaf people experience the world as total aural silence and pure visual-tactile sensation; deaf people automatically aspire to hearing norms; and, through their inborn sensory acuities, deaf people compensate for hearing loss in extraordinary ways. Glennie involuntarily serves as an icon onto which these fantasies are projected. This is a symbolism that cheapens her musical achievements, obscures the complexities of her own relationship to deafness, and subjects her to ongoing scrutiny and mistrust. The universalizing tone of her mission “to teach the world to listen” attaches to her deafness in ways that exceed her own commitments to deafness, making her unique experiences susceptible to generalization. But venerating Glennie as the paragon of deaf musicality is problematic, because such a paragon was never viable in the first place.
Glennie and other d/Deaf listeners reveal first and foremost that sensory perception is more complex and less extreme than popular conceptions allow. The senses intermingle and vary within individual sensory experiences, and d/Deaf people conceptualize these experiences in myriad ways, rendering straightforward sensory hierarchies untenable. In particular, there is a discrepancy between the allure of vibration and the realities of vibrational listening. The feasibility of touch/vibration as a listening strategy depends on a host of logistical variables, such as the material properties of a given acoustical space, instrumental register, the degree and method of amplification, and music's precise expressive function. For instance, vibration does not readily serve the unique demands of choreographed dancing, since it facilitates neither a consistent perception of musical pulse nor an internalization of rhythmic patterns when movement is involved, whereas in recreational settings such as Deaf raves or heavy metal shows vibration is central to the vibe of the event. But listeners also have different physical and psychological vibrational thresholds: prolonged exposure to amplified music, frequency feedback, or even being positioned directly in front of the brass section in an orchestra can trigger unease and physical disorientation in some listeners. In the end, vibration on its own is an inconstant sensation. It requires consistent mediation in order to be perceptible, being best transmitted through physical objects such as subwoofer speakers (a rather conventional technology) or through creative handheld props such as those used by Deafheads. Finally, d/Deaf people rarely privilege vibration over other sensory modalities, while, for some, vibration on its own does not qualify as music.
Vision is a highly versatile listening strategy. Visual cues can contextualize and augment tactile sensations, giving them concrete meaning as in Glennie's paradigm. For culturally Deaf listeners, the visual-spatial parameters of ASL in particular provide an unstable physical sensation (vibration) with linguistic frames of reference. Deaf visual listening practices also exquisitely model a process of embodied mediation, lending new significance to musical collaboration: song signers use their bodies to add an enriching linguistic and cultural gloss that registers in the minds and bodies of Deaf insider audience members. And the visual aspect of sign language transcends the “deafening” magnitude of loud music, as Deaf concertgoers continue to communicate above and beyond the threshold. For some d/Deaf people, vision is altogether a more reliable listening technique than vibration: strategies such as mouthing and lip-reading song lyrics, closed-captioning, and even the practice of song signing itself can exist independently of vibrational (or aural) feedback. In fact, when visual cues are involved, vibrational feedback is sometimes incidental.
The importance of bringing these historically marginalized perspectives to light notwithstanding, hearing is integral to many deaf people's experiences of music: for some, hearing remains the most efficient and familiar way to engage with music, even as hearing loss becomes ever more common and renders the category of “normal” hearing unstable. Late-deafened musicians understandably strive for continuity with previous musical experiences centered chiefly on hearing. For many professional musicians, hearing loss and hearing damage can prove physically and socially disabling, significantly detracting from musical enjoyment and often cutting to the core of their identities. Under these circumstances, adaptation of any kind, whether through the use of assistive technologies or conscious sensory compensation, seems a daunting task, especially when resources and professional incentives are limited. Cochlear implant recipients and hearing aid wearers likewise aspire to hearing norms in their perception of music; the built-in constraints of existing technologies delimit this experience. Since current digital hearing aid models remain by and large ill equipped to amplify the unique acoustical properties of musical signals alongside speech, there is widespread demand for devices that facilitate more robust musical hearing. Finally, the fact that some culturally Deaf people reject music on the grounds of its fundamental conflict with the visual premise of Deaf identity reflects the deep-seated cultural linkage of music with aurality. Crucially, hearing is not itself hegemonic; it is rather the cultural values ascribed to hearing—in this case the assumed interdependence of music and hearing—that overlook and devalue listeners who do not have access to normative frameworks.
Deaf culture does not espouse a single view of music. In the Deaf community, music can provide meaningful creative expression, sensory pleasure, and cultural fulfillment. But it can also threaten the semantic value of sign language and potentially threaten the visual orientation of Deaf culture, a valid stance that no amount of “touching the sound” can undo. Indeed, inasmuch as this article champions a more pluralistic understanding of music and deafness, I also stress that music need not be universally appealing. Deaf accounts of music as unglamorous, banal, and in particular unpleasant undermine such romantic aspirations. Ultimately, music's appeal is automatically contingent neither on hearing ability nor on the availability of listening paradigms.
Musicology gains from deafness in fundamental ways. First, d/Deaf musicians and listeners enrich our methods. As music increasingly circulates and proliferates online, so too do listeners: the Internet is a living digital archive of musical experience. But this is not simply “reception study” or a polling of contrasting musical opinion; it is a testament to the power of online media in marginalized communities, and to ways in which their perspectives will shape our scholarship in the future. Personal blogs, YouTube videos, interviews, and public discussion forums constitute valuable source material that contains expert testimony: these listeners reveal that the senses operate in myriad ways, that deafness is not reducible to a single listening paradigm, and that music is more than sound. This is musicological text at its finest.
As d/Deaf listeners resist theoretical abstraction, they get to the ontological heart of music. Scholars have long problematized music scholarship's recourse to aesthetic autonomy. In his landmark work on the meanings of performing and listening, Christopher Small argues that “neither the idea that musical meaning resides uniquely in musical objects nor any of its corollaries bears much relation to music as it is actually practiced throughout the human race.”149 Music is an activity grounded in the social. “Musicking” is a “human encounter,”150 or, as Georgina Born explains, music is “immanently social,” such that musicology must be relational:
the conceptual gains of the “impossible totality” project outweigh the risks of hegemonic intellection; unless we cast our nets wide and speak our analytic minds, as it were, there is no chance for others (and Others) to answer back. …
… [T]he development of a relational musicology depends upon a break with dominant conceptions not only of what counts as music to be studied, but how it should be studied.151
Deafness only deepens musicology's sense of what music is—its social, relational, and material contours. Music does not simply exceed the limits of aurality; it exceeds the acoustical parameters of sound itself. “Sound” can be a primarily visual-spatial experience as we watch objects and bodies vibrate and move as music passes through them. In certain radical instances, visual cues and silent coordinated gesture are wholly constitutive of musical expression, as in the case of the silent facial singing in Christine Sun Kim's Face Opera II. And deafness also gets at what is already there—the inherent musicality of sign language, the significance of the visual in establishing sight lines at concert venues, the expressive dimension of the face in singing, and the analytic primacy of the score, an inescapably visual medium. Deafness tells us that the score can serve as a useful index: listeners read and subsequently imagine previously internalized pitches and timbres, a familiar process for musicians and scholars of music. But the score's symbolic dimensions are also limited, for better or worse. Whereas notation cannot fully account for the materiality of music, or specify a physical orientation between listener and musical source, Kim highlights that the arbitrary nature of dynamic markings better reflects our subjective perceptions of loudness than the absolute measures of a decibel chart.
More generally, deafness highlights the contextual interdependence of the senses as they govern musical experiences: vision, touch, and hearing are merely idealized types; rarely do they operate in isolation. The senses are enmeshed in a material constellation of synchronized and successive activations. Ultimately, deafness demonstrates that listening encompasses a full spectrum of sensory experiences, musical contexts, individual preferences, cultural practices, and social experiences—what amounts to an ever-evolving set of listening states.
Most importantly, d/Deaf listeners reveal that the value and prestige associated with naturalized understandings of musical skill and expertise are maintained through arbitrary authority, particularly with respect to listening. Even as postmodernist musicians, composers, and scholars interrogate the aesthetic autonomy of music and its corollaries, critical, disciplined listening is a mainstay of musicology. It is what sets music scholars apart. Music scholars continue to distinguish between passive and active listening modes, the former an unconscious, uncritical recreational form of listening, the latter a conscious, critical, and thereby more meaningful mode that music scholars cultivate through years of training. Indeed, in 2004 Andrew Dell'Antonio wrote that structural listening—a term first proposed by Rose Subotnik to critique the formalism that undergirds disciplinary listening practices—endures as “a disciplinary commonplace in the academic study of Western art music, and a pedagogical staple of undergraduate education in music history and theory.”152 Structural listening privileges aesthetic autonomy: listeners yield to the abstract power of the music, making objective judgments about its formal parameters and internal logic with moralizing effect. Structural listening is also rooted in the aural. But structural listening is a mastery worth dismantling, as Dell'Antonio and his colleagues made clear. Similarly, in 2011 Joseph Straus asserted that music theory enforces “prodigious hearing”: “for the most part, implicit listeners in traditional music theory are prodigious figures, with extensive training and vast knowledge of the musical literature. … The implied listeners in traditional music theory inhabit prodigiously capable bodies.”153 (This may be compared with Straus's description of “disablist hearing” quoted above.)154 Whether as structural listening, prodigious hearing, or what Maus called “the disciplined subject of musical analysis,”155 these are the conventional terms of our listening expertise.
In Small's formulation, musicking resists the ascribing of greater value to active than to passive listening modes by rendering the distinctions irrelevant; all music involves action, therefore all listening is active.156 While wholeheartedly agreeing with Small, I would stress that d/Deaf listeners have always been active listeners in the original sense: not on account of music's involving action or because deafness somehow automatically bestows heightened sensory acuities, but as a consequence of their inferior social status in a predominantly hearing world. They have always listened more carefully in order to master the social terms of the hearing world, though they rarely defer to its authority. Their propensity for active listening extends to their musical experiences.
I return to my initial assertion that musicology has yet to fully reckon with d/Deaf listeners, who can be expert listeners in the truest sense. They describe listening to music as a process involving conscious, painstaking labor, ongoing physical and technological adaptations, unconscious inborn sensory acuities, and intuitive strategies nurtured through cultural practice. And these approaches are not mutually exclusive. Furthermore, there is constant slippage between unconscious skill and conscious practice as listening habits are mastered and naturalized over time. Is this process really so different from the way we cultivate disciplined listening as musicologists? The fundamental difference lies in the value we ascribe to our listening strategies relative to those of other listeners, d/Deaf or otherwise. I am not suggesting that deafness reveals musical “expertise” to be merely relative or subjective. Rather, deafness calls us to a pluralistic understanding of what listening expertise entails.
Our discipline is at a moment of reckoning. The American Musicological Society becomes ever more inclusive through its ongoing efforts in outreach, funding opportunities, and examination of disciplinary shortcomings. Our scholarship is increasingly diverse: gender, race, sexuality, and now disability are significant parts of our critical purview. At the same time, we know all too well that lingering racial injustices, institutional prejudices, gendered biases, and microaggressions remain, described by the Society's former president Ellen T. Harris as painful “accounts of marginalization.”157 Marginalization is part of musicology's inheritance, which is rooted in cultural imperialism. This is the same cultural imperialism that has allowed white privilege to go unchecked and has determined, to borrow Born's pithy words, “what counts as music to be addressed, what's in and what's out,” who does and does not qualify as a listener, and what does and does not qualify as musical expertise.158 It is not that d/Deaf listeners are necessarily vulnerable; they are supremely marginalized. This was apparent in recent debates in the Society's blog, Musicology Now, in which deafness and blindness were deployed as metaphors for ignorance of and indifference toward musicology's implicit racial biases.159 All manner of musical experiences belong to the full spectrum of listening, and therefore to our scholarship. Deaf people have a stake in musicology. Not because they tell us what we want to hear, affirm deeply cherished ideals, or share a universal love of music; but because they challenge us to listen anew, beyond symbolic constructions, universalizing discourses, naturalized sounds, and handed-down sensory hierarchies. Deafness and d/Deaf people belong in musicology, and we would do well to take our cues from their expertise.