This article is of interest to the following WikiProjects:
- 1 DAT
- 2 Content from 'Field of Psychoacoustics'
- 3 Background
- 4 2^(12/12)
- 5 Psychoacoustics
- 6 Magnitude resolution
- 7 Telephone bandwidth
- 8 Branchlist template added -see talk
- 9 Updated references to link to paper.
- 10 Perception of Low Frequencies
- 11 Pre-echo
- 12 Beat frequency
- 13 Are you being deliberately obtuse?
- 14 Lack of audio samples
- 15 Images
- 16 perceptual coding
- 17 frequency resolution
- 18 Distance perception, the maturity of sound
- 19 Psychoacoustics
- 20 I can't believe it's not butter! Huh?
DAT
It'd be nice to see a blurb about how this applies to digital audio technology, specifically in how the MP3 format utilizes psychoacoustics to remove aspects of sound that are not crucial to the perception of the sound. Just a blurb, though. I'm curious, and I think it'd make the article seem more relevant to a lot of people.
Content from 'Field of Psychoacoustics'
I append, from page redirected here. Charles Matthews 09:49, 6 May 2004 (UTC)
I removed the following because it is gibberish: "Sound is a continuous analog signal which, assuming infinitely small air molecules, can theoretically contain an infinite amount of information, thus being an infinite number of frequencies, each containing both magnitude and phase information." If you would like to include words to the effect of the deleted sentence, please reword them from airy-fairy to something that actually has meaning, humanities majors need not apply. As far as I am aware, molecules are finite physical particles with finite quantifiable properties. Theoretically Wikipedia and this article do not in fact exist and the keyboard is typing me, and to an extent Newton would have agreed, but it's rather pointless to sully an otherwise useful article with humanities nonsense. Perhaps if it really needs to be stated one could add a footnote with the referencing article - way down the bottom, it's simply too irrelevant and self-indulgent to place in the introduction.
This whole section is really confusing and difficult to follow. I would like to see if someone could try to help me make it somewhat easier to understand for readers who are relatively new to this subject. In all honesty this article would probably scare new readers about this subject and kind of put a technically incomprehensible name on it.
- Can't help with the gibberish above. Deleting the italicized part of the following:
- Also, the psycho-acoustic phantom effect is distinct from the physiology-acoustic phantom effect. It is the estimation of masking threshold level.
- Needs more explanation and backup before this casual reader can make sense of it. __Just plain Bill (talk) 02:23, 13 August 2008 (UTC)
This page should have a link to psychophysics, since psychoacoustics is a branch of psychophysics (the same goes for vision, olfactory, tactile, etc.).
One odd psycho acoustic phenomenon is that of time delay. Play the same music from two opposite speakers. You can move left to right until you hear both speakers at equal volume, and it seems to be coming from the center. Now delay the signal from one speaker by, say, 30 msec. Not only does the delayed music disappear, but you need to move at least 3/4 of the distance toward the delayed speaker to hear it at all! What you do hear in the center location is a sense of space, the "acoustics" of a large room or hall.
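To put a rough number on the delay demonstration above, here is a minimal sketch of the extra acoustic path length an electrical delay corresponds to (it assumes sound travels at roughly 343 m/s in room-temperature air; the function name is illustrative):

```python
# Rough numbers for the precedence (Haas) effect described above.
SPEED_OF_SOUND = 343.0  # m/s in room-temperature air (assumed)

def delay_to_path_difference(delay_ms):
    """Extra acoustic path length (metres) equivalent to a delay in milliseconds."""
    return SPEED_OF_SOUND * delay_ms / 1000.0

print(delay_to_path_difference(30))  # 10.29 m: a 30 ms delay is as if the
                                     # delayed speaker were ~10 m further away
```

In other words, the delayed speaker behaves like a distant reflection, which is consistent with the "acoustics of a large room" impression described above.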
I was considering adding the second half of this: "These nerve pulses then travel to the brain where they are perceived, 'and occasionally become distorted due to an individual's preconceptions or psychological state.'" While this is an obvious truth, it made me wonder if there is a known disorder where the body converts sound waves into action potentials inaccurately (without hearing loss). This section seems like an important place to address these variables. Perhaps someone more educated can touch on this.
The comment above is logical if it is accurate, which it seems to be. I feel adding it to the end of the current intro as: "It can be further categorized as a branch of Psychophysics." works well for linking purposes. While it could easily be merged with the previous sentence as: "More specifically, it is the branch of Psychophysics..."--this produces the need to click on Psychophysics before one can understand that description. It could also be added at the bottom where it would be easily lost in the sea of "See also"...
After a few confused minutes thinking about the first sentence, I realized it did, in fact, make sense. The subject is illuminated further in the following sentences. The problem was the ending of the first sentence and the beginning of the second. The period is replaced by a semicolon, with the hope of keeping the momentum. We then spur them on with "..in other words.." (This is all an attempt to prevent the few confused minutes alone with the first sentence.)
2^(12/12)
For the following:
When the fundamental frequency of a note (or tone) is multiplied by 2^(1/12), the result is the frequency of the next higher semitone. Going 12 notes higher (an octave) is the same as multiplying the frequency by 2^(12/12), which is the same as doubling the frequency.
Someone changed the 2^(12/12) to 2^(1/12). This is incorrect as (2^(1/12))^12 is 2^(12/12) which is 2^1, which would be the same as doubling.
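The arithmetic in the disputed passage is easy to check directly; a small sketch, using A4 = 440 Hz as a reference pitch:

```python
A4 = 440.0  # Hz, concert pitch (assumed reference)

def semitone_up(freq, n=1):
    """Frequency n equal-tempered semitones above freq."""
    return freq * 2 ** (n / 12)

print(round(semitone_up(A4), 2))  # 466.16 Hz: one semitone up (A#4)
print(semitone_up(A4, 12))        # 880.0 Hz: twelve semitones, 2^(12/12) = 2,
                                  # which doubles the frequency (an octave)
```

So one semitone is a factor of 2^(1/12), and twelve of them compound to 2^(12/12) = 2, exactly as the original text said.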
Psychoacoustics can be defined simply as the psychological study of hearing. The aim of psychoacoustic research is to find out how hearing works. In other words, the aim is to discover how sounds entering the ear are processed by the ear and the brain in order to give the listener useful information about the world outside.
Psychoacoustics is not concerned with how sounds produce a particular emotional or cognitive response. We leave these aspects to the cognitive psychologists and stick to the basics. Having said that, psychoacoustics is a very broad area, and while there is a large overlap with physiology at one end, at the other end we sometimes appeal to mainstream psychology in order to account for our more complex experimental results.
Note: This definition has been taken from this page: http://privatewww.essex.ac.uk/~cplack/Psycho.html For more information, see that page.
Related subjects: Another interesting page where this term is well explained is Chris Plack's ear page: http://privatewww.essex.ac.uk/~cplack/welcome.html
The article also implies that psychoacoustics is completely (or at least mostly) concerned with sound reproduction. Clearly incorrect--see any textbook on the subject, such as the one by Yost, or Green's older book, or even Moore's. 188.8.131.52 (talk) 22:36, 7 September 2009 (UTC)
Magnitude resolution
Frequency resolution is about 2 Hz in the mid range? Around what frequency? How about magnitude resolution? Someone said somewhere we can only hear 3 dB of difference, but I know from experience that it is better than that. Anyone have some better numbers? - Omegatron 19:57, May 20, 2004 (UTC)
- I can hear a 1 dB difference, just about, by listening hard. But 3 dB is what most people can easily notice. Try it with Cool Edit or some other sound package on your computer. Maybe people can hear less than a 1 dB difference with training.--Light current 05:54, 16 March 2006 (UTC)
- I came to this article specifically looking for magnitude resolution information, and was disappointed to find none.
- Light current can hear 1 dB difference in some unspecified sound, which is better than no information at all, I guess.
- In JPEG compression, higher frequencies are quantized more coarsely than lower frequencies, with the finest resolution reserved for the "DC" term -- there are more "perceivable brightness levels" at a given low frequency than a given high frequency.
- Is the same true for sound?
- If I play a series of quarter-notes on middle C on the piano, alternating between one loudness level and one just slightly louder, what difference in loudness is the "barely perceivable difference in loudness"?
- How many such differences add up to the full range from "not quite audible" to "painfully loud"? I.e., how coarsely can I quantize the "middle C" frequency in a MP3 compressor without anyone noticing?
- What frequency of pure sine wave tone has the most "perceivable loudness levels"?
- Forgive me for raising so many questions.
- --184.108.40.206 (talk) 15:57, 28 August 2009 (UTC)
- Many of your questions are so specific that the article doesn't cover them. The middle C quarter notes question is one you'll have to perform as an experiment. One of your questions is answered here: At Equal-loudness contour you can see that a tone of between 3000 and 4000 Hz will be perceived by humans as the same level even when it is lower in power. Binksternet (talk) 16:38, 28 August 2009 (UTC)
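For anyone chasing the magnitude-resolution numbers discussed in this thread, here is a rough sketch of what a 1 dB or 3 dB just-noticeable difference means as an amplitude ratio. The 120 dB threshold-to-pain range used at the end is a commonly quoted approximation, not a figure from this page:

```python
def db_to_amplitude_ratio(db):
    """Convert a level difference in dB to an amplitude ratio (20*log10 convention)."""
    return 10 ** (db / 20)

print(round(db_to_amplitude_ratio(1), 3))  # 1.122: 1 dB is about a 12% amplitude change
print(round(db_to_amplitude_ratio(3), 3))  # 1.413: 3 dB is about a 41% amplitude change

# If the JND were a uniform 1 dB, an (assumed) 120 dB range from "not quite
# audible" to "painfully loud" would contain on the order of 120 steps.
print(120 / 1)
```

This is only arithmetic on the figures mentioned above; the real JND varies with frequency, level, and listener.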
"Should this have said 50Hz to 3500Hz? 500 seems incredibly high; it means the A above middle C is too low a note to be transmitted by a telephone, since that A is 440 Hz."
400 to 3400, says HowStuffWorks and one other random site. Seems fine to me. - Omegatron 21:54, Jun 17, 2004 (UTC)
- While the fundamental frequency of middle C (or the A above it) might be too low for transmission, its harmonics wouldn't be -- which is presumably the reason why it is mentioned along with auditory illusions such as phantom fundamentals. A2's fundamental at 440 Hz might not be reproduced at the receiver, but the fundamental would nevertheless be sensed by the listener. -- Tlotoxl 01:43, 18 Jun 2004 (UTC)
- Put that in the article! :-) - Omegatron
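The harmonics point can be illustrated numerically; a small sketch using A3 (220 Hz, safely below a 400 Hz lower cutoff) and the 400-3400 Hz passband mentioned above (the function name is illustrative):

```python
def harmonics_in_band(f0, low, high, n_max=16):
    """Harmonics of f0 (Hz) that fall inside the passband [low, high]."""
    return [n * f0 for n in range(1, n_max + 1) if low <= n * f0 <= high]

audible = harmonics_in_band(220, 400, 3400)
print(audible)  # [440, 660, 880, ..., 3300]: the 220 Hz fundamental is cut,
                # but the surviving harmonics are spaced 220 Hz apart, so the
                # listener still infers a 220 Hz pitch (missing fundamental)
```

This is exactly why telephone speech sounds recognizably pitched even when a talker's fundamental lies below the passband.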
A similar phenomenon occurs in music. I know of it only through rock music and the use of distortion (because the guitar effect known as "distortion" alters the harmonic series), however, I have heard of it mentioned in orchestra music. The phenomenon occurs like this: when a power chord is inverted (that is, the fifth is placed below the root) and played with a large amount of distortion, a "phantom root" is perceived, an octave below the root which is actually played. I have also heard of this "phantom octave" being heard after hearing The President's Own band playing in a highly acoustically tuned room. Immediately after the band cut off, the room itself was vibrating at an incredibly low pitch, according to a first-hand account. Scheater5
- It wouldn't be the room - it would be your ears/brain. --Light current 04:43, 12 March 2006 (UTC)
Perhaps, but I see no reason why the room couldn't vibrate sympathetically. In addition, a room vibrating could be "felt," allowing it to be sensed at a much lower frequency than the human ear could detect. Scheater5 01:44, 13 March 2006 (UTC)
- Correct if room is big enough and the fundamental was being generated, but phantom fundamentals are all in the mind--Light current 01:46, 13 March 2006 (UTC)
That may have been true for the incident with the President's Own. However, the unplayed fundamental with the guitar power chords is actually produced. It's a phenomenon of the harmonic series, highlighted by the high levels of gain associated with rock guitar. Scheater5 21:24, 15 March 2006 (UTC)
500 seems incredibly high; it means the A above middle C is too low a note to be transmitted by a telephone, since that A is 440 Hz
The lower cutoff frequency of a bandpass filter being at 500 Hz does not mean that frequencies lower than that won't make it through; it only means that they will be attenuated at least 3 dB below similar levels in the passband. Filters with a steep rolloff are tricky and expensive to build. Someone who actually knows something about telephony might want to speak to that... __Just plain Bill (talk) 02:17, 13 August 2008 (UTC)
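To put numbers on the rolloff point above: a filter attenuates out-of-band frequencies gradually rather than removing them outright. A sketch using a single-pole high-pass response as an illustrative assumption (real telephone channels roll off more steeply):

```python
import math

def first_order_hp_gain_db(f, fc):
    """Gain in dB of a single-pole high-pass filter with cutoff frequency fc."""
    ratio = f / fc
    return 20 * math.log10(ratio / math.sqrt(1 + ratio ** 2))

print(round(first_order_hp_gain_db(500, 500), 2))  # -3.01 dB at the cutoff itself
print(round(first_order_hp_gain_db(440, 500), 2))  # -3.6 dB at 440 Hz:
                                                   # attenuated, not eliminated
```

So even with a 500 Hz lower cutoff, a 440 Hz tone would merely be a few dB quieter, not absent, in this simple model.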
Branchlist template added -see talk
I've added a Wikipedia:Branchlist template using the Wikipedia:Root page concept. This page is a good demonstration of the need for such a concept, as it duplicates the content of Equal-loudness contour to some extent and does not indicate the presence of detailed pages on Fletcher-Munson curves or Robinson-Dadson curves. I hope you will agree it makes it easier to navigate around and get the whole picture. --Lindosland 11:19, 6 April 2006 (UTC)
Updated references to link to paper.
From a Google search, I found a PDF of the "Reproducing low-pitched signals through small loudspeakers" paper by Aarts, Larsen and Schobben, so I linked it from the reference that was already on this page. It's not the same publication, but it seems likely to be the same paper, or a revised version thereof. Should the reference be changed to cite the actual publication that I linked to? -- Deven 15:34, 16 August 2006 (UTC)
Perception of Low Frequencies
It's absolutely untrue that the lower limit of hearing is 20 Hz. You can easily hear down to single-digit frequencies, assuming sufficient SPL. This should be corrected... I'll find citable references.
Pitch is essentially Gestalt perception of oscillation or pulsation, and this is one thing that necessarily delimits the low end of our hearing. Why do we hear pitch at all? Because--in the case of pulsation--we are still processing one pulse while we are registering the next: they blur. There is a certain threshold below which they don't blur, although I suppose theoretically they still might for someone with a very slow rate of neural processing: Thirty-second notes at q=120 are equivalent to a pulse wave of 16cps. TheScotch 10:28, 10 December 2006 (UTC)
Can someone knowledgeable on the subject turn the above article into something that is clearer to the non-specialist. Jackiespeel 16:59, 23 February 2007 (UTC)
Beat frequency
Quote: "Another side effect of the ear’s non linear logarithmic response is that sounds which appear on the ear drum in close spectral proximity produce phantom beat notes"
Isn't this just the "beat frequency" that one studies in physics, which is independent of how the human ear works? elpincha 17:24, 26 March 2007 (UTC)
Yes it is. It's caused by destructive interference. This isn't a psychoacoustic effect at all, no matter how trippy it sounds. —Preceding unsigned comment added by 220.127.116.11 (talk) 23:21, 25 January 2008 (UTC)
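The beat-note arithmetic is easy to verify; a minimal sketch using two tones a few hertz apart:

```python
import math

f1, f2 = 440.0, 443.0  # two tones in close spectral proximity
print(abs(f1 - f2))    # 3.0: the summed waveform's envelope beats 3 times per second

# The trig identity behind it:
# sin(2*pi*f1*t) + sin(2*pi*f2*t) = 2*cos(pi*(f1-f2)*t) * sin(pi*(f1+f2)*t)
t = 0.123  # any instant will do
lhs = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
rhs = 2 * math.cos(math.pi * (f1 - f2) * t) * math.sin(math.pi * (f1 + f2) * t)
print(abs(lhs - rhs) < 1e-9)  # True: the sum really is a tone at the mean
                              # frequency with a slow amplitude envelope
```

As the comments above note, this superposition effect exists in the air, independent of the ear; whether the ear additionally generates distortion-product beats is a separate (genuinely psychoacoustic) question.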
Are you being deliberately obtuse?
I know this is a scientific subject, but it seems that the writer of the article is being deliberately obtuse. I quote:
"The ear for example, takes a spectral decomposition of sound as part of the process of turning sound into neural stimulus, so certain time domain effects are inaudible."
Now, I'm grappling with what that sentence means. I think I know what you are trying to convey, but your method of speech is stupefying. I am not a scientist, by the way, just someone interested in the subject of sound, so when presented with a sentence like the one above, I think to myself: "Is this guy taking the piss?" I used to write this badly at A-level when I wanted to impress the teacher I wanted to shag. It looks nice, all very academic, but it conveys its meaning poorly.
A tip from a writer: When conveying a difficult idea, be explicit - avoid pompous, vague or technical statements unless you have no choice.
If you really must couch an explanation in such technical jargon, add a second clarifying sentence for us mere mortals who do not understand the scientific jargon. Start a second sentence with 'In other words,' and repeat what you are trying to convey in very simple, easy language. Example:
"In other words, some frequencies are lost when interpreting the physical sound we hear."
I'm not entirely sure if that was what you were trying to say, because your original sentence was so palpably obtuse. But I'm sure you get the gist: if you absolutely, positively have to use technical language, ensure that you follow it up with a layman's explanation as best you can. It doesn't matter if you can't convey what you mean absolutely accurately, because the layman can then go back and read your original technical sentence and try to piece together what you are saying given the context of the easier sentence.
For God's sake, once in a while go back and read what you are writing. This whole article is riddled with raving gibberish and obfuscation. Absolutely unreadable!
- Much of what is obtuse in this article appears to have been added by a one-day editor, User:Tlynch, starting with this edit. Feel free to rewrite the article for simplicity and clarity. It's long been needed. Binksternet (talk) 15:35, 3 August 2008 (UTC)
- That sentence is actually pretty clear to someone technically minded, but you guys have got the bit right, about needing to be clear to a lay reader as well. What it says is, because sound excites an array of frequency-sensitive hair cells, the resulting nervous signals amount to a time-varying spectrum, or frequency-domain signal. The brain never gets a copy of the WAV file, so to speak; the biomechanics of the inner ear have already done some compression. What's lost, Dpolwarth, is not frequencies, but timing information.
- Actually, the sentence he's complaining about is neither clear nor correct to an expert on hearing (me). Rather, it embodies the common misunderstanding that hearing is fundamentally a frequency-domain process, which obscures the truth. That's why I tagged it here, with the intention of revisiting it. It should be written to agree with a good source. Maybe I'll get around to working on it... Dicklyon (talk) 03:07, 16 August 2008 (UTC)
I guess we're on the edges of our seats til then. I don't see where it says that hearing is fundamentally a frequency domain process, even though my paraphrasing pushed it in that direction. Can't deny that there is freq domain processing going on, along with a lot of other stuff. Particulars, please... what truths are obscured? Point me at online refs, or dump some core here, and I'll be happy to do the work of smiting words around to fit in the article. __Just plain Bill (talk) 03:22, 16 August 2008 (UTC)
- I tweaked some of it to agree better with sources. Here is a paper (by me) that might enlighten you further. Dicklyon (talk) 03:49, 16 August 2008 (UTC)
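One way to see the "timing information is lost" point debated in this thread (offered as a simplified illustration only, not as a model of the cochlea): a magnitude-only spectrum cannot distinguish a signal from a time-shifted copy of itself.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real-valued sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [0.0, 1.0, 0.0, -1.0, 0.5, 0.0, -0.5, 0.0]
y = x[-3:] + x[:-3]  # the same samples, circularly shifted in time

mag_x = [abs(c) for c in dft(x)]
mag_y = [abs(c) for c in dft(y)]
print(all(abs(a - b) < 1e-9 for a, b in zip(mag_x, mag_y)))  # True: identical
# magnitude spectra, different waveforms -- magnitudes alone drop the timing
```

The real auditory system does preserve a great deal of timing information (phase locking, etc.), which is part of Dicklyon's objection to the frequency-domain framing.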
Lack of audio samples
Images
In case no one has noticed, the images on this page seem to be broken (yes, I've tried it on multiple computers and this has not occurred in any other articles). I assume they need to be replaced or fixed. -18.104.22.168 (talk) 23:47, 22 June 2009 (UTC)
perceptual coding
The lossy compression article implies that "perceptual coding" can be used for both images and sound. The perceptual coding link currently redirects to psychoacoustics, which only talks about human hearing, audio masking effects etc.
- Surprisingly, there's not much at vision or image compression and no article on visual psychophysics or such. Dicklyon (talk) 23:55, 15 October 2009 (UTC)
frequency resolution
I challenge the claim that "Frequency resolution of the ear is 0.36 Hz within the octave of 1,000–2,000 Hz". I've been studying this field for a few years now, and I have never met a person (including highly trained musicians) that can reliably tell the difference between 1000 Hz and 1001 Hz. Actually, in the general population the JND (difference needed to discriminate correctly 80% of the time) is 1%-15%, while with some training, it can get as low as 0.3% (3 Hz around 1000 Hz). I looked at the reference in Google Books, but since some pages are missing, I can't really tell how Harry F. Olson measured it, or even how "frequency resolution" is defined. I'll look it up next time I'm in the library. I would very much appreciate it if anyone can explain the difference. (I also have a website where you can measure your own frequency discrimination threshold, so if you don't believe me, go ahead and check). OfriRaviv (talk) 06:56, 30 December 2009 (UTC)
- You might want to look for a library copy of the Olson book to see how he achieved results with ten times more resolution than yours. Or, if you feel that Olson's results are dated or somehow faulty, you should publish. Once you publish then your results can be contrasted with Olson's. Binksternet (talk) 16:57, 30 December 2009 (UTC)
- It looks like a misinterpretation and miscalculation of the caption on p.249 of Olson, which says 280 steps from 1000 to 2000 Hz. Calculate: (2000-1000)/280 = 3.6 Hz, 10X bigger than the 0.36 in our article. I think it would be safe to say "on the order of 3 to 5 Hz" in that region; or quote the source. 22.214.171.124 (talk) 17:49, 30 December 2009 (UTC)
- I was hoping I'm on to something more important than a typo ;) no really - I also spotted that 280 steps in the text, which agree with my results. but one of the graphs suggest 0.36 (or at least less than 1). I will find the book in the library and report back... OfriRaviv (talk) 08:18, 31 December 2009 (UTC)
- OK. Checked the book. Indeed, it was only a misinterpretation. The results Olson reports are in accordance with what I know. I personally think that a claim like 3.6Hz is inaccurate (or maybe too accurate) since there is a large variability between different people, and different methods of measuring. so +1 for 126.96.36.199's suggested wording. —Preceding unsigned comment added by OfriRaviv (talk • contribs) 20:20, 3 January 2010 (UTC)
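For reference, the arithmetic behind the correction settled above (the percentage reading at the end is only a guess at how the stray 0.36 figure might have arisen):

```python
steps = 280                  # Olson's caption: 280 discriminable steps
low, high = 1000.0, 2000.0   # over the octave from 1000 to 2000 Hz

linear_step = (high - low) / steps
print(round(linear_step, 2))  # 3.57 Hz -- i.e. "on the order of 3 to 5 Hz",
                              # not 0.36 Hz as the article claimed

relative_jnd = 100 * linear_step / low
print(round(relative_jnd, 2))  # 0.36 (percent) near 1000 Hz -- possibly the
                               # source of the misplaced "0.36" figure
```

The 0.36% relative value also matches OfriRaviv's "as low as 0.3% with training" figure reasonably well.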
regarding minimum pitch discernment, and just to throw another wrench in the mix...
- Frequency resolution of the ear is reported by Olson to be 3.6Hz within the octave of 1,000–2,000Hz. Nevertheless, in double-blind A/B tests conducted in recording studios in Arkansas on March 1, 2011, a trained professional musician demonstrated the definite ability to reliably discern pitch differences generated by a calibrated sine wave source as small as one Hertz at one kilohertz. (.1% variance resolution) Tests were conducted in an instantaneous A then B fashion at moderate SPL, using Genelec 1032 loudspeakers as the audio source. The listening distance was five feet. The room's C weighted NC is 12. Frequencies tested ranged from one kilohertz to five kilohertz in five-hundred-Hertz increments. The oscillator sources were Digirack digital oscillators, set to sine wave reproduction, and capable of calibrated resolution changes as small as 1 Hertz. The bit depth was 24 bits and the session sample rate was 44.1kHz.
In the tests, the minimum ability to discern pitch change for this individual appeared to be uniformly and firmly fixed at a resolution ratio of .1% of the tested frequency. (i.e. listener achieved 100% acuity for resolution changes of 1Hz at 1kHz, 100% for 3 Hz at 3kHz, 100% for 5 Hz at 5kHz, etc.) In contrast, the listener was uniformly unable to detect pitch variances smaller than the tenth of a percent previously noted. (e.g. 3 Hz changes at 4kHz were 100% undetectable, etc.) The sharpness (no pun intended) of this divide in the data was in and of itself remarkable.
This research suggests that the 3.6Hz minimum resolution claimed by Olson in 1967 may be an average figure gathered from a larger population group, or a summation of Olson's overall findings for the frequencies in question, or it may reflect limitations in Olson's test equipment, or it may simply not express the specific learned capacities and/or gifts of certain individuals. Notwithstanding Olson's somewhat dated findings, these recent experimental data could merely reveal a heightened ability to discern minimum pitch variance unique to this individual's genetic disposition. It may be similar to extreme amplitude acuity demonstrated by individuals who use hearing to compensate for other sensory deficiencies. (e.g. the "blind piano tuner" example) It may be something else again. However, in at least one instance, it is demonstrable that there exists an otherwise unremarkable person who can reliably detect very subtle pitch changes under carefully controlled conditions. (something on the order of .1% of pitch)
More testing is needed. Given the vastly increased access to very accurately calibrated oscillators, high quality loudspeakers, and controlled listening environments in 2011 vs. 1967, it would be interesting to see if others who have access to a similar level of technical expertise and facility can reproduce similar results from musicians to whom they have access. I'm sure Olson would be proud. — Preceding unsigned comment added by CoolBlueGlow (talk • contribs)
- Great stuff. Get it published and we can use it in the article. Binksternet (talk) 21:37, 2 March 2011 (UTC)
Distance perception, the maturity of sound
Why is listening to large stereo speakers playing quality music at low volume completely better than clipping on some headphones? Why is watching a movie at a theater better than just sitting too close to your television set? No loudspeaker or other artificial sound creation device will ever even hold a candle to original sounds. Think about a car going by your barn on the road located a couple hundred feet away. Think of all the reflections, the air, the attenuation that sound has gone through by the time it reaches you. That sound is mature. It sounds absolutely nothing like the exaggerated and simplified directionality of any surround sound pan. And more channels is far from the answer. And searching every third-world country for a more exotic driver material is neither. Another one I always think of especially when it happens is the amazing clap of a piece of wood making square contact with a wood floor. That crack cannot be made by a diaphragm of soft stuff, regardless of how flat the response may be or what dB it is capable of; in the event it tries to create the complex harmonics of such a sound it will not perform to its "specifications," which in my opinion is a boondoggle term. Put simply it will drop intricate and non-intricate layers of that sound for the intricate and non-intricate layers it prefers. If one was to dedicate their life career into making an absolutely realistic sonic environment they would be a mad scientist scrounging desperately for one more liberal science grant to keep their futile experimentations going. You must not argue that your MartinLogan electrostatics are any better than my old Shure VocalMasters with one speaker broken because this simply cannot be possible. No equation or parameter will quantify the results of a sense. Daniel Christensen (talk) 07:36, 29 June 2011 (UTC)
I put the beat note stuff in this article some years ago to point out that digital sampling theory leaves out frequency components. After a D/A there will be a hard low pass filter to remove artifacts, yet two high frequency waves in music (like the wood slapping example above) will create audible beat notes in the ear. This has to do with psychoacoustics because these beat notes are expected and add to the warmth and reality of music and sound.
I'm glad the logarithmic compression for the phone stuff is still there. Yes, the ear is sensitive to multiplicative changes in amplitude. That is why we use the dB scale when measuring loudness. It follows that a soft sound that can be heard in a library will be impossible to hear against a loud background. The soft sound adds to the loud background incrementally, not multiplicatively. 10 compared to 1 is a big number. 10 compared to 10,000 is lost in the noise. I'm not sure the explanation here brings that out.
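The 10-versus-10,000 example above converts to decibels like this (amplitudes in arbitrary units, as in the comment):

```python
import math

def db(ratio):
    """Amplitude ratio expressed in decibels (20*log10 convention)."""
    return 20 * math.log10(ratio)

soft = 10.0  # the soft sound's amplitude, arbitrary units
quiet_background, loud_background = 1.0, 10_000.0

# Level change the soft sound makes against each background:
print(round(db((quiet_background + soft) / quiet_background), 1))  # 20.8 dB: obvious
print(round(db((loud_background + soft) / loud_background), 4))    # 0.0087 dB: lost
```

The same added amplitude produces a level step thousands of times smaller against the loud background, which is the masking point being made.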
But something the older article had that is now missing is the effect of tuning certain things out. For example when operators of control panels get used to alarms they start ignoring them. This is called 'alarm fatigue'. I've experienced it. You get one situation where a faulty sensor is always alarming, and then you learn to just ignore it and miss an important alarm. The same happens to people with their morning alarm clocks - just "tune it out". In both present and past people have tuned out a number of artifacts, including phonograph hiss, tape hiss, clicks and pops, etc. You might wonder how us old folks ever enjoyed those old radio broadcasts or early phonograph records. That is because we heard and remembered the music. MP3s sound just awful, the phasing and many things are just messed up. Young people are accustomed to it, and play them without even noticing.
Perceptual cues also come into play. In Austin Texas there are speakers under the seats at the university symphony hall. They augment the symphony play (those microphones hanging in the front are not just for recording). Yet when you talk to people they will swear that the music only came from the orchestra. The visual cues override both the directional and hiss coming from those speakers.
Who in their dreams or even when they hum a song by memory remembers the acoustic artifacts?
I can't believe it's not butter! Huh?
I last read this article about two years ago. I don't recognize it. There's nothing of the alluring reading topics and material and information that was there and really worthwhile. What happened to the section on physiological responses to the sound of another's voice? Isn't that why it's called psycho+acoustics? Please return my favorite article to its highly insightful condition. If I learn how to navigate Wikipedia editing, I'll set myself the task of locating its highly original state of greatness. However, if you can help, please, help the article. It has been depreciated. Nicole Mahramus (talk) 14:17, 22 July 2013 (UTC)
- Here's the difference between the article almost three years ago and the article now. Many minor changes and one paragraph added, but nothing major appears to have been removed. You can look further back in the history and see what may have been there earlier. Rivertorch (talk) 19:51, 22 July 2013 (UTC)
- I performed much the same investigation earlier today, and I came up with very little change. I went back even further, looking for the "insightful" version, but I did not find it. Instead, the article kept getting more conjectural and less firmly based in published sources. One excellent change since the old days is the removal of the uncited assertion that Dr. Amar G. Bose invented psychoacoustics. Bose was born in 1929. In 1948 when Bose was 19, respected audio engineer Dr. Harold Burris-Meyer said, "The term 'psycho-acoustics' is new. It is now applied to a whole research department at Harvard, under S. S. Stevens, the Psycho-Acoustic Laboratories. A study is being made of the effect of auditory stimuli on the human organism, on the functions of the human organism, not so much on the emotions." (National Music Council Bulletin, Volume VIII, Number 3, May 1948, "Acoustics and Music", page 1.) During this time, young Bose was concerned primarily with getting good grades on his electrical engineering tests at MIT. He was not interested in acoustics or sound reproduction, and he did not go to Harvard. Binksternet (talk) 22:16, 22 July 2013 (UTC)