This article is within the scope of WikiProject Biophysics, a collaborative effort to improve the coverage of Biophysics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Material from the associated project or article page was split to Sound localization in owls. The page history of the associated project or article page now serves as the attribution history for part of the contents of that page.
What we really need now is some citations. Major disadvantage of doing everything off the cuff... Please help. --Chinasaur 10:23, 27 Sep 2004 (UTC)
Interesting article, good work. Why do you feel you need citations in the text? Linking to other WP articles is preferred to external references, if that is what you wanted to see. -- Solitude 13:16, Sep 29, 2004 (UTC)
I think it could definitely use a references section, for those looking for some more technical reading if nothing more. It's also good to have the most important articles/books in a subject listed.
Distance localization and frequency attenuation
As a live sound technician, what we are always told is that it's the midrange which is attenuated with distance, more than the treble. I wonder if someone can find a source for this. 220.127.116.11 02:08, 24 March 2007 (UTC)
Sorry I'm seeing your question months later... I'm a live sound tech but I've never heard that midrange attenuates more than treble with distance in air. Higher frequencies are attenuated in air more than low freqs. Humidity plays a big part in sound absorption in air but it doesn't change the fact that highs are absorbed more than lows. Here's a chart and here's an online discussion by sound guys about the phenomenon (more images, too). Distance makes sound localization more difficult due to two factors: a) high frequencies are attenuated more, and b) intensity differences between the two ears approach zero. Binksternet 16:57, 3 December 2007 (UTC)
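The two effects above can be sketched numerically. The toy model below (Python) combines inverse-square spreading loss, which is the same at all frequencies, with frequency-dependent air absorption; the absorption coefficients are illustrative round numbers of roughly the right order of magnitude for moderate temperature and humidity, not values from a measurement table or standard:

```python
import math

# Illustrative air-absorption coefficients in dB per metre (assumed round
# numbers, NOT values from a standard table): highs are absorbed far more.
AIR_ABSORPTION_DB_PER_M = {100: 0.0005, 1000: 0.005, 10000: 0.1}

def level_drop_db(freq_hz, distance_m, ref_m=1.0):
    """Total level drop at distance_m relative to ref_m, in dB."""
    spreading = 20 * math.log10(distance_m / ref_m)  # same for all frequencies
    absorption = AIR_ABSORPTION_DB_PER_M[freq_hz] * (distance_m - ref_m)
    return spreading + absorption

for f in (100, 1000, 10000):
    print(f, "Hz:", round(level_drop_db(f, 100.0), 1), "dB")
```

At 100 m the spreading loss (40 dB) dominates for the low and mid bands, but the 10 kHz band loses roughly an extra 10 dB to absorption alone, which is the mechanism behind "highs are absorbed more than lows" in the comment above.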
Binaural cues: claim of nanosecond time resolution
The "Binaural cues" section includes a statement that Ormia ochracea, with its unique mechanically connected opposite ears, achieves a resolution of nanosecond time differences. However, the references given don't appear to support such a claim. The references discuss the way a time difference of a few microseconds at the insect's ears is amplified to tens of microseconds in mechanical response, and to hundreds of microseconds in neural firing activity (which is then, just, sufficient for the brain to interpret). But nanosecond time differences are not discussed.
If anyone knows a reference demonstrating sub-microsecond discrimination - which could acceptably take the adjective "nanosecond" (although "sub-microsecond" is surely better for hundreds of nanoseconds!) - can they please add such a reference here?
If no such reference can be found, I think we should use "microsecond". - Or, perhaps, a phrase like "of a few microseconds", to clarify we do at least mean that and not merely sub-millisecond (hundreds of microseconds). Iain David Stewart (talk) 01:21, 24 April 2008 (UTC)
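For scale, the raw interaural time difference implied by the insect's anatomy can be checked with a back-of-envelope calculation. Taking an ear spacing of about 0.5 mm (an assumed round figure for Ormia ochracea, for illustration only) and sound arriving along the interaural axis:

```python
# Back-of-envelope ITD for sound arriving along the axis through both ears.
# The 0.5 mm ear spacing is an assumed illustrative figure, not a cited one.
SPEED_OF_SOUND_M_PER_S = 343.0
ear_spacing_m = 0.5e-3

itd_s = ear_spacing_m / SPEED_OF_SOUND_M_PER_S
print(f"{itd_s * 1e6:.2f} microseconds")  # ~1.46 us
```

That lands squarely in the "a few microseconds" regime the cited references describe, which is consistent with "microsecond" rather than "nanosecond" wording in the article.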
"As the frequency drops below 80 Hz it becomes difficult or impossible to use either time difference or level difference to determine a sound's lateral source, because the phase difference between the ears becomes too small for a directional evaluation." This uncited claim is inconsistent with recent findings suggesting that humans can localize sounds in the horizontal plane down to at least 25 Hz.
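The size of the effect the quoted sentence relies on is easy to quantify. Taking a maximum human interaural time difference of roughly 650 µs (an assumed typical figure for illustration), the interaural phase difference of a pure tone shrinks linearly with frequency:

```python
MAX_ITD_S = 650e-6  # assumed maximum human interaural time difference

def interaural_phase_deg(freq_hz, itd_s=MAX_ITD_S):
    """Interaural phase difference, in degrees, for a pure tone."""
    return 360.0 * freq_hz * itd_s

for f in (25, 80, 500):
    print(f, "Hz:", round(interaural_phase_deg(f), 1), "degrees")
```

Under these assumptions the phase difference is about 19 degrees at 80 Hz and only about 6 degrees at 25 Hz, so the cue does become small at low frequencies; whether "small" means "unusable" is exactly what the claim asserts without a citation and the recent findings appear to contradict.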