Sound localization received a peer review by Wikipedia editors, which is now archived. It may contain ideas you can use to improve this article.
A fact from Sound localization appeared on Wikipedia's Main Page in the Did you know? column on 29 September 2004. The text of the entry was as follows: "Did you know
The content of Vertical sound localization was merged into Sound localization. That page now redirects here. For the contribution history and old versions of the redirected page, please see its page history; for the discussion at that location, see its talk page. (2014-07-27)
What we really need now is some citations. Major disadvantage of doing everything off the cuff... Please help. --Chinasaur 10:23, 27 Sep 2004 (UTC)
- Interesting article, good work. Why do you feel you need citations in the text? Linking to other WP articles is preferred to external references, if that is what you wanted to see. -- Solitude 13:16, Sep 29, 2004 (UTC)
- I think it could definitely use a references section, for those looking for some more technical reading if nothing more. It's also good to have the most important articles/books in a subject listed.
Distance localization and frequency attenuation
As a live sound technician, I'm always told that it's the midrange that is attenuated with distance, more than the treble. I wonder if someone can find a source for this. 18.104.22.168 02:08, 24 March 2007 (UTC)
- Sorry I'm seeing your question months later... I'm a live sound tech but I've never heard that midrange attenuates more than treble with distance in air. Higher frequencies are attenuated in air more than low freqs. Humidity plays a big part in sound absorption in air but it doesn't change the fact that highs are absorbed more than lows. Here's a chart and here's an online discussion by sound guys about the phenomenon (more images, too.) Distance makes sound localization more difficult due to two factors: a) high frequencies are less strong and b) intensity differences between the two ears approach zero. Binksternet 16:57, 3 December 2007 (UTC)
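- To make point b) concrete, here's a minimal sketch (my own illustrative numbers, not from any source above): the interaural level difference you get from inverse-square spreading alone shrinks toward zero with distance, even before head shadowing is considered. The 0.09 m half-head-width is a round assumed figure.

```python
import math

def ild_db(distance_m, head_offset_m=0.09):
    """Interaural level difference (dB) from inverse-square spreading alone,
    for a source directly to one side, ignoring head shadowing.
    head_offset_m is half the ear-to-ear distance (illustrative value)."""
    near = distance_m - head_offset_m  # path to the nearer ear
    far = distance_m + head_offset_m   # path to the farther ear
    return 20 * math.log10(far / near)

for d in (1, 5, 50):
    print(f"{d:>3} m: {ild_db(d):.3f} dB")
```

At 1 m this gives roughly 1.6 dB, but by 50 m it has collapsed to about 0.03 dB, which is why distant sources are hard to lateralize by level alone.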
Binaural cues: claim of nanosecond time resolution
The "Binaural cues" section includes a statement that Ormia ochracea, with its unique mechanically connected opposite ears, achieves a resolution of nanosecond time differences. However, the references given don't appear to support such a claim. The references discuss the way a time difference of a few microseconds at the insect's ears is amplified to tens of microseconds in mechanical response, and to hundreds of microseconds in neural firing activity (which is then, just, sufficient for the brain to interpret). But nanosecond time differences are not discussed.
If anyone knows a reference demonstrating sub-microsecond discrimination - which could acceptably take the adjective "nanosecond" (although "sub-microsecond" is surely better for hundreds of nanoseconds!) - can they please add such a reference here?
If no such reference can be found, I think we should use "microsecond". - Or, perhaps, a phrase like "of a few microseconds", to clarify we do at least mean that and not merely sub-millisecond (hundreds of microseconds). Iain David Stewart (talk) 01:21, 24 April 2008 (UTC)
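- For reference, the basic path-difference model (separation divided by the speed of sound) shows why "a few microseconds" is the right order of magnitude for the fly while human ITDs are hundreds of microseconds. A quick sketch; the 0.2 m human and 0.5 mm fly ear separations are round illustrative figures, not measurements from the cited papers:

```python
def max_itd_us(ear_separation_m, speed_of_sound=343.0):
    """Maximum interaural time difference (microseconds) for a source
    at 90 degrees off-axis, using the simple path-difference model d/c."""
    return ear_separation_m / speed_of_sound * 1e6

print(f"human (~0.2 m separation):         {max_itd_us(0.2):.0f} us")
print(f"Ormia ochracea (~0.5 mm separation): {max_itd_us(0.0005):.2f} us")
```

That puts the fly's maximal ITD near 1.5 microseconds, consistent with "of a few microseconds" rather than "nanosecond" resolution.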
Evaluation for low frequencies
"As the frequency drops below 80 Hz it becomes difficult or impossible to use either time difference or level difference to determine a sound's lateral source, because the phase difference between the ears becomes too small for a directional evaluation." This uncited claim is inconsistent with recent findings that suggest humans can localize sounds on the horizontal plane all the way down to at least 25HZ.
- "Localization and Image Size Effects for Low Frequency Sound". Audio Engineering Society Convention 118. May 2005. Retrieved 26 September 2013.