Spatial hearing loss


Specialty: Audiology

Spatial hearing loss is a form of hearing impairment marked by an inability to use the spatial cues that indicate where a sound originates in space. The resulting poor sound localization in turn affects the ability to understand speech in the presence of background noise.[1]

People with spatial hearing loss have difficulty processing speech that arrives from one direction while simultaneously filtering out 'noise' arriving from other directions. Research has shown spatial hearing loss to be a leading cause of central auditory processing disorder (CAPD) in children. Children with spatial hearing loss commonly present with difficulties understanding speech in the classroom.[1] Spatial hearing loss is found in most people over 70 years of age, and can sometimes be independent of other types of age-related hearing loss.[2] As with presbycusis, spatial hearing ability varies with age: through childhood and into adulthood it improves (a spatial hearing gain, making it easier to hear speech in noise), and from middle age onward it declines (a spatial hearing loss, making it harder again to hear speech in noise).

Localization mechanism

Sound streams arriving from the left or right (the horizontal plane) are localised primarily by the small time differences between the same sound arriving at the two ears. A sound straight in front of the head is heard at the same time by both ears. A sound to the side of the head is heard approximately 0.0005 seconds later by the ear furthest away. A sound halfway to one side is heard approximately 0.0003 seconds later. This is the interaural time difference (ITD) cue, and it is measured by signal processing in the two central auditory pathways that begin after the cochlea and pass through the brainstem and midbrain.[3] Some of those with spatial hearing loss are unable to process ITD (low-frequency) cues.
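The time differences above can be approximated with Woodworth's classic spherical-head model, ITD ≈ (a/c)(θ + sin θ). A minimal sketch follows; the head radius and speed of sound are assumed typical values, so the results (around 0.0004 s at 45° and 0.00066 s at 90°) are in the same range as, but not identical to, the figures quoted above:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (Woodworth spherical-head model).

    azimuth_deg: source angle from straight ahead (0 = front, 90 = directly
    to one side). Returns the extra travel time to the far ear, in seconds.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source straight ahead (0 degrees) produces no time difference;
# the ITD grows monotonically as the source moves toward one side.
```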

Sound streams arriving from below the head, above the head, and behind the head (the vertical plane) are localised, again by signal processing in the central auditory pathways. The cues this time, however, are the notches and peaks added to the sound arriving at the ears by the complex shape of the pinna. Different notches and peaks are added to sounds coming from below than to sounds coming from above or from behind. The most significant notches are added to sounds in the 4 kHz to 10 kHz range.[4] Some of those with spatial hearing loss are unable to process pinna-related (high-frequency) cues.

By the time sound stream representations reach the end of the auditory pathways, brainstem inhibition processing ensures that the right pathway is solely responsible for left-ear sounds and the left pathway solely responsible for right-ear sounds.[5] It is then the responsibility of the auditory cortex (AC) of the right hemisphere (on its own) to map the whole auditory scene. Information about the right auditory hemifield joins the information about the left hemifield once it has passed through the corpus callosum (CC), the white matter tract that connects homologous regions of the left and right hemispheres.[6] Some of those with spatial hearing loss are unable to integrate the auditory representations of the left and right hemifields, and consequently cannot maintain any representation of auditory space.

An auditory space representation enables attention (conscious, top-down driven) to be given to a single auditory stream. A gain mechanism can be employed, involving enhancement of the attended speech stream and suppression of any other speech streams and any noise streams.[7] An inhibition mechanism can also be employed, involving variable suppression of the outputs of the two cochleae.[8] Some of those with spatial hearing loss are unable to suppress unwanted cochlear output.

Individuals with spatial hearing loss are not able to accurately perceive the directions from which different sound streams arrive, and their hearing is no longer three-dimensional (3D). Sound streams from the rear may appear to come from the front instead; sound streams from the left or right may likewise appear to come from the front. The gain mechanism cannot be used to enhance the speech stream of interest over all other sound streams. When listening to speech in background noise, those with spatial hearing loss typically need the target speech to be raised by more than 10 dB compared with those with no spatial hearing loss.[9]

Spatial hearing ability normally begins to develop in early childhood, and continues to develop into early adulthood. After the age of 50 years spatial hearing ability begins to decline.[10] Both peripheral hearing problems and central auditory pathway problems can interfere with early development. In some individuals, for a range of different reasons, maturation of binaural (two-ear) spatial hearing ability may simply never happen. For example, prolonged episodes of ear infections such as “glue ear” are likely to significantly hinder its development.[11]

Corpus callosum

Many neuroscience studies have facilitated the development and refinement of a speech processing model. This model shows cooperation between the two hemispheres of the brain, with asymmetric inter-hemispheric and intrahemispheric connectivity consistent with the left hemisphere specialization for phonological processing.[12] The right hemisphere is more specialized for sound localization,[13] while auditory space representation in the brain requires the integration of information from both hemispheres.[14]

The corpus callosum (CC) is the major route of communication between the two hemispheres. At maturity it is a large mass of white matter and consists of bundles of fibres linking the white matter of the two cerebral hemispheres. Its caudal and splenium portions contain fibres that originate from the primary and secondary auditory cortices, and from other auditory-responsive areas.[15] Transcallosal interhemispheric transfer of auditory information plays a significant role in spatial hearing functions that depend on binaural cues.[16] Various studies have shown that despite normal audiograms, children with known auditory interhemispheric transfer deficits have particular difficulty localizing sound and understanding speech in noise.[17]

The CC of the human brain is relatively slow to mature, with its size continuing to increase until the fourth decade of life, after which it slowly begins to shrink.[18] LiSN-S SRT scores show that the ability to understand speech in noisy environments develops with age, becomes adult-like by 18 years, and starts to decline between 40 and 50 years of age.[19]

CC density (and myelination) increases during childhood and into early adulthood, peaking and then decreasing during the fourth decade. The spatial hearing advantage (dB) follows the same course: it continues to increase through childhood and into adulthood, then begins to decrease during the fourth decade.

Roles of the SOC and the MOC

The medial olivocochlear bundle (MOC) is part of a collection of brainstem nuclei known as the superior olivary complex (SOC). The MOC innervates the outer hair cells of the cochlea and its activity is able to reduce basilar-membrane responses to sound by reducing the gain of cochlear amplification.[20]

In a quiet environment, when speech from a single talker is being listened to, the MOC efferent pathways are essentially inactive. In this case the single speech stream enters both ears and its representation ascends the two auditory pathways.[5] The stream arrives at both the right and left auditory cortices for eventual speech processing by the left hemisphere.

In a noisy environment the MOC efferent pathways are required to be active in two distinct ways. The first is an automatic response to the multiple sound streams arriving at the two ears, while the second is a top-down, corticofugal, attention-driven response. The purpose of both is to enhance the signal-to-noise ratio between the speech stream being listened to and all other sound streams.[21]

The automatic response involves the MOC efferents inhibiting the output of the cochlea of the left ear. The output of the right ear is therefore dominant, and only the right-hemispace streams (with their direct connection to the speech processing areas of the left hemisphere) travel up the auditory pathway.[22] In children, the underdeveloped corpus callosum (CC) is in any case unable to transfer auditory streams arriving (from the left ear) at the right hemisphere across to the left hemisphere.[23]

In adults with a mature CC, an attention-driven (conscious) decision to attend to one particular sound stream is the trigger for further MOC activity.[24] The 3D spatial representation of the multiple streams of the noisy environment (a function of the right hemisphere) enables a choice of the ear to be attended to. As a consequence, instruction may be given to the MOC efferents to inhibit the output of the right cochlea rather than the left.[8] If the speech stream being attended to is from the left hemispace, it will arrive at the right hemisphere and access speech processing via the CC.

In a noisy environment, the automatic response of the MOC efferents is to inhibit the left-ear cochlea, favouring the sounds arriving at the right ear: the right-ear advantage (REA). An attention-driven optional response instead has the MOC efferents inhibit the right-ear cochlea, favouring the sounds arriving at the left ear.

Diagnosis

Spatial hearing loss can be diagnosed using the Listening in Spatialized Noise – Sentences test (LiSN-S),[25] which was designed to assess the ability of children with central auditory processing disorder (CAPD) to understand speech in background noise. The LiSN-S allows audiologists to measure how well a person uses spatial (and pitch) information to understand speech in noise. Inability to use spatial information has been found to be a leading cause of CAPD in children.[1]

Test participants repeat a series of target sentences which are presented simultaneously with competing speech. The listener's speech reception threshold (SRT) for target sentences is calculated using an adaptive procedure. The targets are perceived as coming from in front of the listener, whereas the distracters vary according to where they are perceived spatially (either directly in front of, or to either side of, the listener). The vocal identity of the distracters also varies (either the same as, or different from, the speaker of the target sentences).[25]

Performance on the LiSN-S is evaluated by comparing listeners' performance across four listening conditions, generating two SRT measures and three "advantage" measures. The advantage measures represent the benefit in dB gained when talker cues, spatial cues, or both are available to the listener. The use of advantage measures minimizes the influence of higher-order skills on test performance.[1] This serves to control for the inevitable differences that exist between individuals in functions such as language or memory.
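Each advantage measure is a simple difference between the SRTs of two conditions. A sketch of the arithmetic, using invented SRT values (not normative LiSN-S data) purely for illustration:

```python
def advantage_db(srt_baseline_db, srt_with_cue_db):
    """Benefit (in dB) from an added cue. A lower (more negative) SRT in the
    cue condition means the listener tolerated more noise, so the advantage
    is the baseline SRT minus the cue-condition SRT."""
    return srt_baseline_db - srt_with_cue_db

# Hypothetical SRTs for the four listening conditions (dB):
# (distracter voice, distracter position)
srt = {
    ("same_voice", "0_deg"): 2.0,    # no talker or spatial cue (baseline)
    ("diff_voice", "0_deg"): -4.0,   # talker cue only
    ("same_voice", "90_deg"): -8.0,  # spatial cue only
    ("diff_voice", "90_deg"): -12.0, # both cues
}

talker_advantage = advantage_db(srt[("same_voice", "0_deg")], srt[("diff_voice", "0_deg")])
spatial_advantage = advantage_db(srt[("same_voice", "0_deg")], srt[("same_voice", "90_deg")])
total_advantage = advantage_db(srt[("same_voice", "0_deg")], srt[("diff_voice", "90_deg")])
# With these invented numbers: 6 dB talker, 10 dB spatial, 14 dB total advantage.
```

A markedly reduced spatial advantage, with normal SRTs in the baseline condition, is the pattern that points to spatial processing difficulty rather than a general hearing or language problem.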

Dichotic listening tests can be used to measure the efficacy of the attentional control of cochlear inhibition and the inter-hemispheric transfer of auditory information. Dichotic listening performance typically increases (and the right-ear advantage decreases) with the development of the corpus callosum (CC), peaking before the fourth decade. From middle age onward the auditory system ages, the CC reduces in size, and dichotic listening performance worsens, primarily in the left ear.[26] Dichotic listening tests typically involve two different auditory stimuli (usually speech) presented simultaneously, one to each ear, using a set of headphones. Participants are asked to attend to one or (in a divided-attention test) both of the messages.[27]
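Dichotic listening results are often summarized as a laterality index comparing the correct reports from each ear, with a positive value indicating a right-ear advantage. A sketch using the commonly applied (R − L)/(R + L) formula; the scores in the comment are hypothetical:

```python
def laterality_index(right_correct, left_correct):
    """Percent laterality for a dichotic listening test:
    positive = right-ear advantage, negative = left-ear advantage."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct responses to compare")
    return 100.0 * (right_correct - left_correct) / total

# Hypothetical scores: 24 right-ear and 16 left-ear correct reports give an
# index of +20, a right-ear advantage; equal scores give an index of 0.
```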

The activity of the medial olivocochlear bundle (MOC) and its inhibition of cochlear gain can be measured using a distortion product otoacoustic emission (DPOAE) recording method. This involves the contralateral presentation of broadband noise and the measurement of both DPOAE amplitudes and the latency of onset of DPOAE suppression. DPOAE suppression is significantly affected by age and becomes difficult to detect by approximately 50 years of age.[28]

The spatial hearing advantage (dB) slowly increases through childhood and into adulthood. The left-ear disadvantage slowly decreases through childhood and into adulthood, though the right-ear advantage persists as children move into early adulthood; by early adulthood the left-ear disadvantage is negligible. The amplitude of contralateral DPOAE suppression decreases with ageing, and the right-ear advantage re-establishes itself from middle to old age, primarily because left-ear performance falls faster.

Research

Research has shown that PC-based spatial hearing training software can help some of the children identified as failing to develop their spatial hearing skills (perhaps because of frequent bouts of otitis media with effusion).[29] Further research is needed to discover whether a similar approach would help those over 60 to recover their lost spatial hearing. One such study showed that dichotic test scores for the left ear improved with daily training.[30] Related research into the plasticity of white matter (see Lövdén et al., for example)[31] suggests some recovery may be possible.

Music training leads to superior understanding of speech in noise across age groups, and musical experience protects against age-related degradation in neural timing.[32] Unlike speech (fast temporal information), music (pitch information) is primarily processed by areas of the brain in the right hemisphere.[33] Given that the right-ear advantage (REA) for speech appears to be present from birth,[22] it would follow that a left-ear advantage for music is also present from birth, and that MOC efferent inhibition (of the right ear) plays a similar role in creating this advantage. Whether greater exposure to music increases conscious control of cochlear gain and inhibition remains an open question; further research is needed to explore the apparent ability of music to promote an enhanced capability for speech-in-noise recognition.

Bilateral digital hearing aids do not preserve localization cues (see, for example, Van den Bogaert et al., 2006).[34] This means that audiologists fitting hearing aids to patients with a mild to moderate age-related loss risk negatively impacting their spatial hearing capability. For patients who feel that their lack of understanding of speech in background noise is their primary hearing difficulty, hearing aids may simply make the problem worse: their spatial hearing gain will be reduced by around 10 dB. Although further research is needed, a growing number of studies have shown that open-fit hearing aids are better able to preserve localisation cues (see, for example, Alworth 2011).[35]


References

  1. ^ a b c d Cameron S and Dillon H; The Listening in Spatialized Noise – Sentences Test: Comparison to prototype LISN test and results from children with either a suspected (central) auditory processing disorder or a confirmed language disorder; Journal of the American Academy of Audiology 19(5), 2008
  2. ^ Frisina D and Frisina R; Speech recognition in noise and presbycusis: relations to possible neural mechanisms; Hearing Research 106(1-2), 1997
  3. ^ Dobreva M, O’Neill W and Paige G; Influence of Aging on Human Sound Localization; Journal of Neurophysiology 105, 2011
  4. ^ Besta V, Carlile S, Jin C and Van Schaik A; The role of high frequencies in speech localization; Journal of the Acoustical Society of America 118(1), 2005
  5. ^ a b Della Penna S, Brancucci A, Babiloni C, Franciotti R, Pizzella V, Rossi D, Torquati K, Rossini PM, Romani GL; Lateralization of Dichotic Speech Stimuli is Based on Specific Auditory Pathway Interactions; Cerebral Cortex 17(10), 2007.
  6. ^ At A, Spierer L, Clarke S; The role of the right parietal cortex in sound localization: a chronometric single pulse transcranial-magnetic stimulation study; Neuropsychologia 49(9), 2011
  7. ^ Kerlin J, Shahin A and Miller L; Attentional Gain Control of Ongoing Cortical Speech Representations in a “Cocktail Party”; Journal of Neuroscience 30(2), 2010
  8. ^ a b Srinivasan S, Keil A, Stratis K, Osborne A, Cerwonka C, Wong J, Rieger B, Polcz V, Smith D; Interaural attention modulates outer hair cell function; Eur J Neurosci. 40(12), 2014
  9. ^ Glyde H, Hickson L, Cameron S, Dillon H; Problems hearing in noise in older adults: a review of spatial processing disorder; Trends in Amplification 15(3), 2011
  10. ^ Cameron S, Glyde H and Dillon H; Listening in Spatialized Noise - Sentences Test (LiSN-S): Normative and Retest Reliability Data for Adolescents and Adults up to 60 Years of Age; Journal of the American Academy of Audiology 22, 2011
  11. ^ Farah R, Schmithorst V, Keith R, Holland S; Altered white matter microstructure underlies listening difficulties in children suspected of auditory processing disorders; Brain and Behavior 4(4), 2014
  12. ^ Bitan et al.; Bidirectional connectivity between hemispheres occurs at multiple levels in language processing, but depends on sex; Journal of Neuroscience 30(35), 2010
  13. ^ Spierer et al.; Hemispheric competence for auditory spatial representation; Brain 132, 2009
  14. ^ Grothe et al.; Mechanisms of Sound Localization in Mammals; Physiol Rev 90, 2010
  15. ^ Lebel C, Caverhill-Godkewitsch S, Beaulieu C; Age-related regional variations of the Corpus Callosum identified by diffusion tensor tractography; Neuroimage 52(1), 2010
  16. ^ Hausmann M, Corballis M, Fabri M, Paggi A, Lewald J; Sound lateralization in subjects with callosotomy, callosal agenesis, or hemispherectomy; Brain Res Cogn Brain Res 25(2), 2005
  17. ^ Bamiou D et al.; Auditory interhemispheric transfer deficits, hearing difficulties, and brain magnetic resonance imaging abnormalities in children with congenital aniridia due to PAX6 mutations; Arch Pediatr Adolesc Med 161(5), 2007.
  18. ^ Sala S, Agosta F, Pagani E, Copetti M, Comi G, Filippi M; Microstructural changes and atrophy in brain white matter tracts with aging; Neurobiology of Aging 33(3), 2012
  19. ^ Glyde H, Cameron S, Dillon H, Hickson L, Seeto M; The effects of hearing impairment and aging on spatial processing; Ear & Hearing 34(1), 2013
  20. ^ Cooper N, Guinan J; Efferent-Mediated Control of Basilar Membrane Motion; J. Physiol. 576.1, 2006
  21. ^ Smith D and Keil, A; The biological role of the medial olivocochlear efferents in hearing; Front. Syst. Neurosci. 25, 2015
  22. ^ a b Bidelman G and Bhagat S; Right-ear advantage drives the link between olivocochlear efferent 'antimasking' and speech-in-noise listening benefits; NeuroReport 26(8), 2015
  23. ^ Kimura D; From ear to brain; Brain Cogn. 76(2), 2011
  24. ^ Lehmann A, Schonwiesner M; Selective Attention Modulates Human Auditory Brainstem Responses: Relative Contributions of Frequency and Spatial Cues; PLoS ONE 9(1), 2014
  25. ^ a b "LiSN-S, Cameron & Dillon, 2009". Nal.gov.au. 2011-05-02. Retrieved 2011-07-02.
  26. ^ Lavie L, Banai K, Attias J, Karni A; How difficult is difficult? Speech perception in noise in the elderly hearing impaired; Jnl Basic Clin Physiol Pharmacol 25(3), 2014
  27. ^ Musiek F and Weihing J; Perspectives on dichotic listening and the corpus callosum; Brain Cogn. 76(2), 2011
  28. ^ Konomi U, Kanotra S, James A, Harrison R; Age related changes to the dynamics of contralateral DPOAE suppression in human subjects; Journal of Otolaryngology–Head and Neck Surgery 43(15), 2014
  29. ^ Cameron S, Dillon H; Development and Evaluation of the LiSN & Learn Auditory Training Software for Deficit-Specific Remediation of Binaural Processing Deficits in Children: Preliminary Findings; Jnl Am Acad Audiol 22(10), 2011
  30. ^ Bless J, Westerhausen R, Kompus K, Gudmundsen M, Hugdahl K; Self-supervised, mobile-application based cognitive training of auditory attention: a behavioural and fMRI evaluation; Internet Interventions 1(3), 2014
  31. ^ Lövdén et al.; Experience-dependent plasticity of white-matter microstructure extends into old age; Neuropsychologia 48(13), 2010
  32. ^ Parbery-Clark et al.; Musical experience offsets age-related delays in neural timing; Neurobiol. Aging 33(7), 2012
  33. ^ Tervaniemi M, Hugdahl K; Lateralization of auditory-cortex functions; Brain Research Reviews 43, 2003
  34. ^ Van den Bogaert et al.; Horizontal localization with bilateral hearing aids: Without is better than with; J. Acoust. Soc. Am. 119(1), 2006.
  35. ^ Alworth L.; Effect of Occlusion, Directionality and Age on Horizontal Localization; Doctoral Dissertation, 2011
