Talk:Audio compression (data)
The contents of this page were merged into Data compression and it now redirects there. For the contribution history and old versions of the merged article please see its history.
Lossless compression technologies
There's an interesting aside here relating to MIDI and the pianola. Both are extremely effective lossless compression technologies, but they solve the problem in quite different ways. User:Rjstott
I have moved the page Audio compression to Audio data compression because I believe it more accurately describes the topic of discussion and leaves the original page for a discussion of audio compression in recording. -- Jul 7, 2003 Ap
Lossy conversion to frequency-domain
Under the section titled "Lossless compression", it is said "...audio waveforms, which are generally difficult to simplify without a (necessarily lossy) conversion to frequency information...". I dispute that conversion to frequency-domain is necessarily lossy. 18.104.22.168 13:06, 15 January 2007 (UTC) Andrew Steer (www.techmind.org)
"In general, latency must be 15 ms or lower for transparent interactivity." This is nonsense. Transatlantic telephone calls always have a latency significantly greater than this, thanks to the distances involved: a 15,000,000 m round trip / 300,000,000 metres per second = 50 ms. With modern telephone systems it is difficult, if not impossible, to notice any latency. I have chopped out the above sentence. -- Psychofox 16:45, August 23, 2005
And I think that the sentence should be added back. Wieslaw W's work at McGill shows that very short latency is utterly essential for good "tightness" of a band. A conversation is a completely different ballgame, and even then, latency is shown to cause user confusion. Btw, telephone signals do not propagate at anything close to the speed of light: in cables they travel at about c/2, and there is lots of buffering, etc., in the middle that adds even more latency. Then there are geosynchronous satellite delays for satellite-carried calls... Woodinville (talk) 22:36, 4 February 2008 (UTC)
- I agree. The latency that occurs over transatlantic telephone calls (or with on-location news reporters on TV) is very noticeable, and therefore not transparent. Just because people can work around it if they have to doesn't mean it's not a source of annoyance. SharkD (talk) 07:18, 8 June 2009 (UTC)
LZW compression of WAV files
I don't know where user Xhamlliku went, but I know of no reason LZW compression wouldn't work with a .wav file. Charlie 11:52, 26 October 2005 (UTC)
Actually, LZW won't work well with a .wav file unless you compress truly enormous volumes of data. LZW regards the sequence 8 4 2 1 as a different sequence from 24 12 6 3. Any kind of linear predictor regards them as a single initial value with the same predictor coefficients. LZW doesn't consider that kind of source model in the short term, while either LPC or high-resolution frequency analysis will do very well with such a sequence. Since nearly all audio consists of autoregressive sequences like this, that is why LZW won't work so well on reasonably sized files. Woodinville (talk) 22:34, 4 February 2008 (UTC)
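The contrast above can be sketched in Python. This is an illustrative toy, not any real codec: a minimal LZW coder emits one code per sample for either short decaying sequence (no shared dictionary entries to exploit), while a simple halving predictor (an assumed model for these particular sequences) reduces both to all-zero residuals, which are trivially compressible.

```python
def lzw_codes(data: bytes) -> list[int]:
    """Toy LZW coder: return the sequence of codes emitted for `data`."""
    table = {bytes([i]): i for i in range(256)}  # initial single-byte dictionary
    next_code = 256
    out, w = [], b""
    for byte in data:
        wb = w + bytes([byte])
        if wb in table:
            w = wb                      # extend the current match
        else:
            out.append(table[w])        # emit code for longest known string
            table[wb] = next_code       # learn the new string
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def halving_residuals(x: list[int]) -> list[int]:
    """Residuals of the predictor x[n] ~= x[n-1] / 2 (fits these examples)."""
    return [x[i] - x[i - 1] // 2 for i in range(1, len(x))]

a, b = [8, 4, 2, 1], [24, 12, 6, 3]

# LZW finds no repeated substrings here, so it emits one code per sample:
print(lzw_codes(bytes(a)), lzw_codes(bytes(b)))  # [8, 4, 2, 1] [24, 12, 6, 3]

# The predictor captures both sequences with the same model:
print(halving_residuals(a), halving_residuals(b))  # [0, 0, 0] [0, 0, 0]
```

On sequences this short LZW achieves no compression at all, whereas the predictor leaves only an initial value plus zeros, which is the intuition behind LPC-based lossless audio coders.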
Compression ratios are similar to what?
"The primary users of lossless compression have been audio engineers, audiophiles and those consumers who want to preserve an exact copy of their audio files, in contrast to the irreversible changes from lossy compression techniques such as Vorbis and MP3. Compression ratios are similar to those for lossless data compression (around 50-60% of original size)."
I'm wondering if the second sentence could perhaps be more precisely worded? If the compression ratios are similar for lossy and lossless compression, that would seem to remove the primary incentive to go with lossy compression, which is to save space. My observation is that there is a fairly significant difference in storage space requirements between WM9 lossless compression and the highest quality WMA lossy settings when ripping CDs, which implies the compression ratios aren't all that similar. JohnMajerus 05:51, 6 November 2007 (UTC)
How about something along these lines:
"Lossless audio compression is used by those who want an exact binary copy of an uncompressed digital audio recording in less file space. Lossy audio compression is used by those who want a recording that occupies even less file space (at the expense of irreversible changes to sound quality, which may or may not be noticed by the user), and/or the ability to play back a recording on a consumer device (in-car audio player, mobile phone, etc.) that only supports certain lossy audio compression formats (e.g. one that cannot play .wav files or audio CDs, but can play MP3, WMA, etc.). Lossless audio compression ratios are similar to those of general lossless data compression (around 50-60% of the original size). Dedicated lossless audio compression has an advantage over general-purpose lossless data compression in that it can be played and/or mixed directly in some audio players and software applications, and in that it achieves smaller files (roughly a further 30% reduction) than general-purpose lossless data compression alone can offer."
Can anyone simplify this article for ordinary readers by explaining which formats are most popular (MP3/4, WMA, etc.) and their estimated market share, when a given format is at its optimum (see note below) fixed bit rate (64 kbps, etc.), and how much power each format consumes on playback (when recorded at its optimum)? I've been told that audio CDs can be encoded in MP3 at 256 kbps with the latest LAME engine with no perceivable psychoacoustic difference; is this true, and if it is, then why are there so many new formats? What would be helpful is a comparison chart detailing the pros and cons of the most popular formats, including some of the lossless ones.
Other important points that the article hasn't addressed are that it's almost impossible to make an exact binary copy of an audio CD (I personally feel that CD ripping is best described as a mystical artform), why lossless compression isn't a wise resort, as it needs to be decompressed first before being playable again (especially if only about a 55% decrease in file size is attained), and why so many portable devices won't play .wav files, but will play MP3s and WMAs.
Note: what I mean here by optimum is the best compromise between data size and sound quality in competitive comparison with other popular formats. E.g. I've been led to believe that WMA at 64 kbps sounds almost as good as a compact cassette recording, but increasing the rate to 128 kbps, a 100% increase in file size, will not give a 100% increase in sound quality (more like a 25% increase), so it could be argued that WMA only has an edge over other formats at a rate of 64 kbps.
Does (very simple) synthesized music compress better than recordings of live performances? I ask because I know that computer rendered images sometimes compress better than photographs (at least, when using the PNG lossless format). I'm also not referring to music data formats, such as MIDI, which could I suppose be compared to vector-based image formats, which have a minimal file size. SharkD (talk) 07:10, 8 June 2009 (UTC)
There's a word-for-word copy of the lead at . The date on this is 2009. The article history here shows that the text was in place here at Wikipedia in 2008. --Kvng (talk) 13:29, 12 April 2011 (UTC)