
Quality parameter[edit]

The article says: "FLAC allows for a Rice parameter between 0 and 16." I don't know what this means -- and it will be very obscure and unhelpful to the average reader.
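For context: a Rice code with parameter k writes each value as a unary-coded quotient plus k literal remainder bits, so the parameter sets the split between the two parts. The following is a toy Python illustration of that idea only, not FLAC's exact bitstream (which, among other things, maps signed residuals to unsigned values first):

```python
def rice_encode(n: int, k: int) -> str:
    """Toy Rice code for a non-negative integer n with parameter k:
    the quotient n >> k in unary ('1' bits plus a terminating '0'),
    followed by the low k bits of n."""
    q = n >> k
    bits = "1" * q + "0"                                   # unary quotient
    if k:
        bits += format(n & ((1 << k) - 1), "b").zfill(k)   # k-bit remainder
    return bits

# A small k suits small values, a large k suits large ones:
residuals = [3, 0, 5, 2, 1]
cost = lambda k: sum(len(rice_encode(n, k)) for n in residuals)
print(cost(1), cost(4))  # 14 25 -- k=1 is cheaper for these small values
```

This is why the encoder picks the parameter per block: it chooses whichever k codes that block's residuals in the fewest bits.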

It is important to offer readers a practical understanding of the quality parameter in common usage:

"The quality parameter for FLAC refers to the quality of compression, not of the audio. The audio stays lossless, but a higher quality setting yields better compression; encoding simply takes more time.
See docs
Free Lossless Audio Codec (FLAC): FLAC is a popular lossless, freely available open source encoder. [2] Quality Settings: 0 - 8. Sets the quality of compression (and not sound, which is lossless), 8 meaning most compressed/time/effort."

These resources are relevant and helpful:

What differences among CUETools outputs? libFlake/libFLAC/flake/FLACCL, Moderation—derailed into a verbose discussion about compression levels
" Compression levels
libFLAC has compression levels 0..8, where 0 is the fastest and 8 provides the best compression ratio. libFlake and FlaCuda are tuned differently, so libFlake -5 might in fact compress better than libFLAC -8. They also support additional compression levels 9-11, however their use is not recommended, because those levels produce so-called non-subset files, which might not be supported by certain implementations (e.g., hardware decoders).
FLAC specifies a subset of itself as the Subset format. The purpose of this is to ensure that any streams encoded according to the Subset are truly "streamable", meaning that a decoder that cannot seek within the stream can still pick up in the middle of the stream and start decoding. It also makes hardware decoder implementations more practical by limiting the encoding parameters such that decoder buffer sizes and other resource requirements can be easily determined. flac generates Subset streams by default unless the "--lax" command-line option is used."

In sum, readers need to understand that the quality setting affects how long the compression takes, generally makes only a relatively small difference in compressed file size, has very little impact on decoding time, and does not affect the "lossless" aspect at all -- unless taken to such an extreme that incompatibilities might arise. - (talk) 15:16, 12 February 2013 (UTC)
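The "more effort, same audio" point can be illustrated with a deliberately simplified model in which a higher level merely tries more of FLAC's fixed polynomial predictors and keeps the cheapest one. (The predictor coefficients below are FLAC's actual fixed predictors; real libFLAC levels additionally tune LPC order, block size, and other parameters, so this is only a sketch of the trade-off.)

```python
# Fixed polynomial predictor coefficients for orders 0..4, as in FLAC.
FIXED = [[], [1], [2, -1], [3, -3, 1], [4, -6, 4, -1]]

def residuals(samples, order):
    """Prediction errors left over after subtracting the fixed
    predictor of the given order (warm-up samples skipped)."""
    coeffs = FIXED[order]
    return [s - sum(c * samples[i - j - 1] for j, c in enumerate(coeffs))
            for i, s in enumerate(samples) if i >= order]

def best_order(samples, level):
    # Higher "level" => more orders tried => more encode time, but the
    # chosen predictor can only get equal or better, never worse.
    tried = range(min(level, 4) + 1)
    return min(tried, key=lambda o: sum(abs(r) for r in residuals(samples, o)))
```

For a parabola-shaped signal, for example, a low level settles for the order-1 predictor while a high level finds the order-3 predictor whose residuals are exactly zero; either way the residuals reconstruct the input exactly, which is the sense in which the level never touches losslessness.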

Inconsistency in compression rate[edit]

The article twice mentions a typical rate of 50–60%, and later it says "FLAC achieves compression rates of 30–50% for most music." anoko_moonlight (talk) 04:28, 18 March 2013 (UTC)

Do you mean it has to say 40–50% to be correct? C'mon, ±10%. --Kays (talk) 22:28, 15 October 2013 (UTC)
What this community member meant, or so I believe, was that it should state one approximate rate which is agreed upon and not two which partially contradict each other. --lmaxmai (talk) 00:15, 1 August 2016 (UTC)
After FLAC compression, the file size is typically reduced to 50-60% of the original size, which is the same as saying that it is reduced by 40-50%. Although previous versions may have been confusing, I think the current text in the article states this accurately without any contradiction. LiberatorG (talk) 15:24, 1 August 2016 (UTC)
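A trivially small worked example (the file sizes here are made up) shows the two phrasings agree:

```python
# Hypothetical sizes only, to show "reduced TO" vs. "reduced BY" match.
original_mb = 100
compressed_mb = 55                            # reduced TO 55% of the original...
reduction_mb = original_mb - compressed_mb
print(reduction_mb)  # 45 ...which is the same as reduced BY 45%
```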

Reference to WavPack removed, since it uses only public domain technologies[edit]

From the Wavpack website:

WavPack employs only well known, public domain techniques (i.e., linear prediction with LMS adaptation, Elias and Golomb codes) in its implementation. Methods and algorithms that have ever been patented (e.g., arithmetic coding, LZW compression) are specifically avoided. This ensures that WavPack encoders and decoders will remain open and royalty-free.
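For readers unfamiliar with the terms in that quote: "linear prediction with LMS adaptation" means the predictor's weights are nudged after every sample, and because the decoder can replay the exact same nudges, the round trip stays bit-exact. Below is a toy integer sign-sign LMS variant, invented here for illustration; WavPack's actual filters, weight scaling, and update rules differ.

```python
class SignSignLMS:
    """Toy integer sign-sign LMS predictor. All arithmetic is integer,
    so encoder and decoder stay in bit-exact lockstep."""
    def __init__(self, order=4):
        self.w = [0] * order      # weights, in units of 1/64
        self.hist = [0] * order   # recent samples, newest first

    def predict(self):
        return sum(w * h for w, h in zip(self.w, self.hist)) >> 6

    def update(self, sample, err):
        sgn = lambda v: (v > 0) - (v < 0)
        # Nudge each weight by +-1 based only on the signs of the error
        # and the corresponding history sample.
        self.w = [w + sgn(err) * sgn(h) for w, h in zip(self.w, self.hist)]
        self.hist = [sample] + self.hist[:-1]

def encode(samples):
    f, res = SignSignLMS(), []
    for s in samples:
        e = s - f.predict()       # transmit only the prediction error
        res.append(e)
        f.update(s, e)
    return res

def decode(res):
    f, out = SignSignLMS(), []
    for e in res:
        s = f.predict() + e       # replay the identical predictions
        out.append(s)
        f.update(s, e)
    return out
```

Since both sides update from the same reconstructed samples and errors, `decode(encode(x)) == x` for any integer input, which is the "lossless" guarantee in miniature.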

Additionally, I have confirmed with one of the developers that this status is current.

--Alexanderino (talk) 01:05, 1 November 2014 (UTC)

How does it work?[edit]

There is no indication in the article how the codec achieves its lossless compression. That's a major omission, I'd say. (talk) 21:14, 21 October 2017 (UTC)
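For whoever expands the article: the FLAC format documentation describes a two-stage scheme, inter-sample prediction followed by Rice coding of the residuals. A heavily simplified sketch of those two stages (no blocking, channel decorrelation, or framing; predictor fixed at order 2):

```python
def predict_stage(samples, order=2):
    # FLAC's fixed order-2 predictor: pred = 2*x[i-1] - x[i-2].
    # The first `order` samples pass through verbatim as warm-up.
    return samples[:order] + [s - (2 * samples[i - 1] - samples[i - 2])
                              for i, s in enumerate(samples) if i >= order]

def unpredict_stage(data, order=2):
    out = list(data[:order])
    for e in data[order:]:
        out.append(e + 2 * out[-1] - out[-2])   # exact inverse of the above
    return out

def rice_bits(residual, k):
    # Map the signed residual to an unsigned value, then count the
    # unary-quotient bits plus k remainder bits.
    u = residual << 1 if residual >= 0 else (-residual << 1) - 1
    return (u >> k) + 1 + k

# Smooth signals leave tiny residuals, which Rice-code far more
# cheaply than the raw samples would:
ramp = list(range(100))
assert unpredict_stage(predict_stage(ramp)) == ramp   # lossless round trip
print(sum(rice_bits(r, 0) for r in predict_stage(ramp)),
      sum(rice_bits(s, 4) for s in ramp))             # 102 1076
```

So the compression comes from prediction making most residuals near zero, and nothing is discarded because the decoder inverts the prediction exactly.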