Vertical sound localization


Animals with the ability to localize sound have a clear evolutionary advantage. The mammalian brain achieves this by combining the phase, frequency, amplitude and direction information contained within sound signals. This information, combined with cues from other sensory modalities, facilitates localization of the source. While auditory localization in the horizontal plane (azimuth) requires binaural cues, namely interaural time differences (ITDs) and interaural level differences (ILDs), localizing elevation (zenith) can be accomplished with a single ear. Because both ears lie at the same height on the interaural axis, a change in elevation by itself produces no interaural time or intensity differences between the left and right ears. Here we discuss some of nature's evolutionary solutions to the problem of vertical localization, ranging from the simple, extra-cortical technique of tilting one's head to the neural correlates of spectral analysis within the brain.

How sound reaches the brain

Sound is the perceptual result of mechanical vibrations traveling through a medium such as air or water. Through the mechanisms of compression and rarefaction, sound waves travel through the air, reflect off the pinna and concha of the external ear, and enter the ear canal. The sound waves vibrate the tympanic membrane (ear drum), setting the three bones of the middle ear into motion; these transmit the energy through the oval window and into the cochlea, where it is transduced into neural signals by the hair cells of the organ of Corti. The hair cells synapse onto spiral ganglion fibers, which travel through the cochlear nerve into the brain.

It has been shown that human subjects can monaurally localize high-frequency sound but not low-frequency sound; binaural localization, however, is possible at lower frequencies. This is likely because the pinna is small enough to interact appreciably only with sound waves of high frequency (short wavelength).[1] People also appear to localize the elevation of sounds accurately only when the sounds are complex and include frequencies above 7,000 Hz, and only when a pinna is present.[2]

The cone of confusion

Most mammals are adept at resolving the location of a sound source using interaural time differences and interaural level differences. However, no such time or level differences exist for sounds originating anywhere along the circumference of a circular slice through a cone whose axis lies along the line between the two ears.

Consequently, sound waves originating at any point on such a circle have ambiguous perceptual coordinates. That is to say, the listener is incapable of determining whether the sound originated from the back, front, top, bottom or anywhere else along the circumference at the base of the cone at a given distance from the ear. These ambiguities matter vanishingly little for sound sources very close to or very far from the subject, but it is the intermediate distances that are most important in terms of fitness.
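
The geometry can be made concrete with a short numerical sketch. The model below is deliberately minimal: the ears are treated as two points on the interaural axis, the head itself is ignored, and the ear spacing, source distance, and cone angle are illustrative values rather than measurements. Every source on the same circle of the cone produces exactly the same interaural time difference (ITD).

```python
# Minimal sketch of the cone of confusion with point ears at (+/-A, 0, 0).
# All values here are illustrative assumptions, not measured quantities.
import numpy as np

A = 0.09                 # half the ear separation, metres (assumed)
C = 343.0                # speed of sound in air, m/s
R = 2.0                  # source distance from the head centre, metres (assumed)
PHI = np.radians(60.0)   # angle between the source direction and the interaural (x) axis

def itd(source):
    """ITD in seconds for a point source and two point ears on the x-axis."""
    ear_left, ear_right = np.array([-A, 0, 0]), np.array([A, 0, 0])
    return (np.linalg.norm(source - ear_left) - np.linalg.norm(source - ear_right)) / C

# Sweep the source around the cone: psi = 0 is straight ahead, 90 deg is overhead,
# 180 deg is behind, 270 deg is below the listener.
for psi_deg in (0, 90, 180, 270):
    psi = np.radians(psi_deg)
    source = R * np.array([np.cos(PHI),
                           np.sin(PHI) * np.cos(psi),
                           np.sin(PHI) * np.sin(psi)])
    print(f"psi = {psi_deg:3d} deg  ->  ITD = {itd(source)*1e6:.1f} microseconds")
# All four ITDs are identical, so ITD alone cannot distinguish these source positions.
```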

These ambiguities can be removed by tilting the head, which introduces a shift in both the amplitude and phase of the sound waves arriving at each ear. Tilting rotates the interaural axis so that it acquires a vertical component, converting differences in elevation into interaural differences and thereby leveraging the mechanisms of localization in the horizontal plane. Moreover, even with no alteration in the angle of the interaural axis (i.e. without tilting one's head), the hearing system can capitalize on interference patterns generated by the pinnae, the torso, and even the temporary re-purposing of a hand as an extension of the pinna (e.g., cupping one's hand around the ear).
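
A variation of the same sketch shows why a head tilt helps: rolling the head about the front-back axis rotates the interaural axis, so two sources that previously shared a cone of confusion (one above, one below) now project differently onto that axis and yield different interaural time differences. As before, the geometry is idealized and the numbers are only illustrative.

```python
# Hypothetical sketch: a head roll about the front-back (y) axis separates
# "above" from "below" sources that share a cone of confusion.
import numpy as np

A, C, R = 0.09, 343.0, 2.0     # half ear spacing (m), speed of sound (m/s), source distance (m)
PHI = np.radians(60.0)         # cone angle from the interaural (x) axis

def itd(source, roll_deg=0.0):
    """ITD (s) for point ears on the x-axis after rolling the head by roll_deg."""
    b = np.radians(roll_deg)
    # Rolling the head by +b is equivalent to rotating the source by -b in head coordinates.
    rot = np.array([[ np.cos(b), 0.0, np.sin(b)],
                    [ 0.0,       1.0, 0.0      ],
                    [-np.sin(b), 0.0, np.cos(b)]])
    s = rot @ source
    ear_l, ear_r = np.array([-A, 0, 0]), np.array([A, 0, 0])
    return (np.linalg.norm(s - ear_l) - np.linalg.norm(s - ear_r)) / C

above = R * np.array([np.cos(PHI), 0.0,  np.sin(PHI)])   # same cone, source above
below = R * np.array([np.cos(PHI), 0.0, -np.sin(PHI)])   # same cone, source below

for roll in (0.0, 20.0):
    print(f"roll {roll:4.1f} deg:  ITD above = {itd(above, roll)*1e6:7.1f} us,"
          f"  ITD below = {itd(below, roll)*1e6:7.1f} us")
# With no tilt the two ITDs are equal (ambiguous); after a 20 degree tilt they
# differ, so the binaural system can now tell "above" from "below".
```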

As with other sensory stimuli, perceptual disambiguation is also accomplished through the integration of multiple sensory inputs, especially visual cues. Once a sound has been localized to a circle at some perceived distance, visual cues serve to fix its position on that circle. Moreover, prior knowledge of the location of the sound-generating agent assists in resolving its current location.

Role of the pinna

[Image: Gray904.png]

The external ear is a highly variable structure that is used to localize sound vertically. It contains distinct areas (the pinna and concha) whose shapes are unique to the individual. Of the complex sound that enters the ear, some frequencies are amplified by the pinna while others are attenuated.[3] This creates spectral notches in the signal that reaches the brain, which are then used to distinguish between different angles of incidence in the vertical plane. "The extracted notch frequencies are related to the physical dimensions and shape of the pinna".[4] Spectral notches are also created by the head and the rest of the body, but this article will focus on the pinna.[5]
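
As a rough illustration of the idea (not a model of any real pinna), the sketch below treats the pinna as a notch filter whose centre frequency shifts with source elevation; the notch imprinted on broadband noise can then be read back from the spectrum of the filtered signal. The mapping from elevation to notch frequency is invented for the example.

```python
# Toy sketch: an elevation-dependent "pinna" notch carved into white noise,
# then recovered from the spectrum.  The mapping and all numbers are assumptions.
import numpy as np
from scipy import signal

FS = 44100                          # sample rate, Hz
rng = np.random.default_rng(0)
noise = rng.standard_normal(FS)     # 1 s of broadband (white) noise

def toy_pinna_notch_hz(elevation_deg):
    """Hypothetical mapping: notch moves from ~6 kHz to ~10 kHz as the source rises."""
    return 6000.0 + 4000.0 * (elevation_deg + 45.0) / 90.0   # for -45..+45 degrees

for elev in (-45, 0, 45):
    b, a = signal.iirnotch(toy_pinna_notch_hz(elev), Q=8.0, fs=FS)
    filtered = signal.lfilter(b, a, noise)
    freqs, psd = signal.welch(filtered, fs=FS, nperseg=4096)
    band = (freqs > 4000) & (freqs < 12000)
    est = freqs[band][np.argmin(psd[band])]   # deepest dip in the 4-12 kHz band
    print(f"elevation {elev:+3d} deg:  notch placed at {toy_pinna_notch_hz(elev):.0f} Hz,"
          f"  recovered at ~{est:.0f} Hz")
# The auditory system is thought to perform an analogous spectral analysis,
# comparing the received notch pattern with patterns learned for its own pinnae.
```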

Because the inner ear is organized tonotopically rather than spatially, sound localization relies on the neural processing of these spectral notches (implicit acoustic cues).[6] These cues appear to be calibrated in an ongoing process against other sensorimotor systems so that the spectral notches convey useful localization information. Vertical sound localization is thus a case of spectral pattern-recognition. The visual system appears to be crucial in cue calibration by providing accurate spatial feedback. Because the auditory system is calibrated to interpret spectral patterns produced by the individual's own pinnae, the system must re-learn how to recognize the new spectral patterns for localization when the pinnae are altered.[6]
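
The phrase "spectral pattern-recognition" can be pictured as template matching: one learned spectral pattern is stored per elevation, and an incoming sound is assigned to the elevation whose pattern it matches best. The sketch below uses made-up Gaussian-notch templates and a simple least-squares match purely to illustrate the principle; it is not the brain's actual algorithm.

```python
# Minimal template-matching sketch for elevation from spectral shape.
# Templates, noise level, and the elevation-to-notch mapping are all hypothetical.
import numpy as np

freqs = np.linspace(4000, 12000, 81)            # analysis band, Hz

def notch_template(centre_hz, width_hz=800.0):
    """Spectral magnitude (dB) with a single Gaussian-shaped notch."""
    return -20.0 * np.exp(-0.5 * ((freqs - centre_hz) / width_hz) ** 2)

# Stored templates: one learned spectral pattern per elevation (toy mapping).
elevations = np.arange(-40, 50, 10)
templates = {e: notch_template(6000 + 4000 * (e + 45) / 90) for e in elevations}

def localize(observed_db):
    """Return the elevation whose stored pattern best matches the observation."""
    return min(templates, key=lambda e: np.sum((observed_db - templates[e]) ** 2))

# An incoming sound whose notch sits where the +20 degree template predicts,
# plus a little measurement noise:
rng = np.random.default_rng(1)
observed = notch_template(6000 + 4000 * (20 + 45) / 90) + rng.normal(0, 1.0, freqs.size)
print("estimated elevation:", localize(observed), "degrees")   # expected: 20
```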

Modification of the pinna

In a study in which subjects wore pinna molds that dramatically altered their spectral cues, the subjects still received consistent spectral information that allowed them to recalibrate their spectral pattern-recognition system.[6] The peaks and notches of the incoming sound fall in different frequency bands for each sound elevation, both before and after the modification. Baseline behavior was determined from responses to white-noise stimuli presented at different elevations.[6]

Immediately after inserting the molds, subjects perceived all sounds as coming from roughly the same elevation (approximately ear level), regardless of the actual elevation.[6] Over time, localization abilities returned to normal, presumably due to recalibration of the spectral cues. After removal of the molds, the subjects were able to perform normally without a learning period, suggesting that the old and new spectral pattern-recognition representations were maintained separately.

The importance of the pinnae in vertical sound localization has also been shown in pinna-occlusion experiments[7] and in narrow-band sound localization studies.[6][8]

Head-related transfer function

The head-related transfer function (HRTF) describes how a given sound wave input is filtered by the diffraction and reflection properties of the head, pinna, and torso before the sound reaches the inner ear, where it is transduced into an electrical signal that the brain can interpret. This pre-filtering helps determine the source's elevation. In other words, the HRTF describes how a sound from a specific location arrives at the ear drum. The spectral notches (modifications of the complex sound wave) encode the source location and can be captured via an impulse response that relates the source location to the ear location. This impulse response is termed the head-related impulse response (HRIR); the HRTF is the Fourier transform of the HRIR.[9]
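
The relationship can be sketched in a few lines: build an impulse response, take its Fourier transform to obtain the corresponding transfer function, and convolve a dry signal with the impulse response to obtain the signal that would reach the ear drum. Real applications would use measured HRIRs; the two-tap impulse response below is only a made-up stand-in that produces a comb-like spectral notch.

```python
# Sketch of the HRIR/HRTF relationship with a made-up two-tap impulse response.
import numpy as np

FS = 44100                              # sample rate, Hz
# Stand-in HRIR: a direct path plus one delayed reflection (~113 microseconds later).
hrir = np.zeros(256)
hrir[0], hrir[5] = 1.0, 0.6

# The HRTF is the Fourier transform of the HRIR.
hrtf = np.fft.rfft(hrir)
freqs = np.fft.rfftfreq(hrir.size, d=1.0 / FS)
mag_db = 20 * np.log10(np.abs(hrtf) + 1e-12)

# The delayed reflection creates a comb-like dip; for this toy HRIR the first
# dip sits near FS / (2 * delay) = 4410 Hz.
band = (freqs > 3000) & (freqs < 10000)
print(f"deepest dip between 3 and 10 kHz: ~{freqs[band][np.argmin(mag_db[band])]:.0f} Hz")

# "Rendering" a source at the corresponding (hypothetical) location: convolving a
# dry signal with the HRIR imprints the same spectral features on it.
rng = np.random.default_rng(0)
dry = rng.standard_normal(FS)           # 1 s of broadband noise
at_eardrum = np.convolve(dry, hrir)     # approximation of the signal at the ear drum
```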

Brain structures involved in vertical sound localization

Auditory nerve fibers carry sound information, including the frequency and intensity of spectral notches, from the inner ear to the central nervous system.[10] All of these fibers synapse in the cochlear nucleus, which is divided into dorsal and ventral regions.

Dorsal cochlear nucleus

The dorsal cochlear nucleus (DCN) contains type IV cells that are particularly sensitive to noises containing spectral notches, implicating them in the processing of vertical sound localization.[11] In addition, lesion studies in cats have shown that the DCN is crucial for vertical sound localization.[12][13]

Inferior colliculus

The DCN projects to the central nucleus of the inferior colliculus (IC), a site of great convergence in the central nervous system.[14] The IC also receives information about the horizontal direction of a sound source, making it the first place in the brain that has enough information to localize a sound source in both azimuth and elevation.[15]

References

  1. Robert A. Butler and Richard A. Humanski. Localization of sound in the vertical plane with and without high-frequency spectral cues. Perception & Psychophysics, 51(2), 182–186 (1992). http://www.springerlink.com/content/630l383367067tu0/fulltext.pdf
  2. Suzanne K. Roffler and Robert A. Butler. Factors that influence the localization of sound in the vertical plane. J. Acoust. Soc. Am., 43(6), 1255–1259 (1968). http://asadl.org/jasa/resource/1/jasman/v43/i6/p1255_s1
  3. Paul M. Hofman, Jos G. A. Van Riswick and A. John Van Opstal. Relearning sound localization with new ears. Nature Neuroscience, 1(5) (September 1998). http://www.mbfys.ru.nl/~johnvo/papers/nn98.pdf
  4. Vikas C. Raykar et al. Extracting the pinna spectral notches in the HRTF. J. Acoust. Soc. Am., 118(1), 364–374 (July 2005). http://www.umiacs.umd.edu/labs/cvl/pirl/vikas/Current_research/pinna_spectral_notches_web/pinna_spectral_notches.html
  5. Benedikt Grothe, Michael Pecka, and David McAlpine. Mechanisms of sound localization in mammals. Physiol. Rev., 90, 983–1012 (2010). doi:10.1152/physrev.00026.2009. http://www.cogsci.ucsd.edu/~ajyu/Teaching/Cogs160_sp12/Papers/grothe10.pdf
  6. Paul M. Hofman, Jos G. A. Van Riswick and A. John Van Opstal. Relearning sound localization with new ears. Nature Neuroscience, 1(5) (September 1998). http://www.mbfys.ru.nl/~johnvo/papers/nn98.pdf
  7. Oldfield, S. R. & Parker, S. P. Acuity of sound localization: a topography of auditory space. II. Pinna cues absent. Perception, 13, 601–617 (1984).
  8. Middlebrooks, J. C. Narrow-band sound localization related to external ear acoustics. J. Acoust. Soc. Am., 61, 2607–2624 (1992).
  9. Head-related transfer function. Wikipedia. http://en.wikipedia.org/wiki/Head-related_transfer_function
  10. Poon, Paul W. F. and Brugge, John F. Sensitivity of auditory nerve fibers to spectral notches. Journal of Neurophysiology, 70(2) (August 1993).
  11. Spirou, G. A. and Young, E. D. Organization of dorsal cochlear nucleus type IV unit response maps and their relationship to activation by bandlimited noise. American Physiological Society (1991).
  12. Sutherland, D. P., Masterton, R. B. and Glendenning, K. K. Role of acoustic striae in hearing: reflexive responses to elevated sound-sources. Behavioural Brain Research, 97(1–2), 1–12 (1 December 1998). ISSN 0166-4328. doi:10.1016/S0166-4328(98)00008-4. http://www.sciencedirect.com/science/article/pii/S0166432898000084
  13. May, Bradford J. Role of the dorsal cochlear nucleus in the sound localization behavior of cats. Hearing Research, 148(1–2), 74–87 (October 2000). ISSN 0378-5955. doi:10.1016/S0378-5955(00)00142-8. http://www.sciencedirect.com/science/article/pii/S0378595500001428
  14. Davis, Kevin A., Ramachandran, Ramnarayan and May, Bradford J. Auditory processing of spectral cues for sound localization in the inferior colliculus. Journal of the Association for Research in Otolaryngology (2002).
  15. Slee, Sean J. and Young, Eric D. Information conveyed by inferior colliculus neurons about stimuli with aligned and misaligned sound localization cues. Journal of Neurophysiology (2011). doi:10.1152/jn.00384.2011