Talk:Lempel–Ziv–Oberhumer

"Requires no memory for decompression.", I think this could do with justification/expansion or correct. I haven't come across that many algorithms that require zero memory to operate! Perhaps this should talk about in-place decompression (decompress straight to the final buffer) if that is what it means? Sladen 20:56, 6 November 2006 (UTC)

Yes, I'm familiar with the algorithm and what that meant to say was that it requires no additional memory for compression. I'll fix the article. --Trixter 15:54, 7 November 2006 (UTC)
"Requires no memory for decompression." means that there is no need to allocate any buffers or whatever. It is enough to have source data and destination to place them. All other calculations could fit into CPU registers which are often not part of memory or address space. Hence, Oberhumer's claim is formally valid. There is decompressors who decompress it entirely in CPU registers, using no memory except source buffer holding compressed data and destination buffer holding uncompressed block. They can even partially overlap IIRC so compressed block + extra space transformed to uncompressed block at the end of operation. —Preceding unsigned comment added by 91.77.159.176 (talk) 01:40, 4 May 2009 (UTC)

Maybe there could be a more detailed description of the algorithm on the page? 24.6.254.250 05:00, 29 May 2007 (UTC)

Removed "Algorithm is thread safe" because algorithms always are. The thread safety of the reference implementation is already mentioned in the first sentence jan —Preceding unsigned comment added by 82.83.238.109 (talk) 15:15, 16 October 2007 (UTC)

On what is the claim "On modern architectures, decompression is very fast; in non-trivial cases able to exceed the speed of a straight memory-to-memory copy due to the reduced memory-reads." based? Author's own benchmarks at http://www.oberhumer.com/opensource/lzo/lzodoc.php don't support that speed claim. Skarkkai (talk) 12:23, 5 July 2009 (UTC)

The LZO code itself is a nightmare to read, so if an expert could explain the details of the algorithm in more depth, it would be greatly appreciated, especially details on how LZO acceptably handles non-compressible data. An explanation of why LZO is so much faster in benchmarks than most other *NIX file compressors would be nice as well; the most I can discern through strace and the like is that LZO (as used in lzop) is faster because it works with data in fixed-size chunks, drastically reducing system call counts and eliminating some code for handling special situations. I have nothing more than a run of strace to base this on, though, and would love a clear explanation. 75.138.198.88 (talk) 06:11, 25 February 2012 (UTC)
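
For what it is worth, here is a rough, self-contained C sketch of the block-wise processing described above (this is not lzop's actual container format; the 256 KiB block size, the ad-hoc two-word header, and the stored-block fallback are assumptions made purely for illustration). It also shows one straightforward way to handle non-compressible data acceptably: if a compressed block is not smaller than the input, the original block is stored verbatim.

 #include <stdio.h>
 #include <stdlib.h>
 #include <lzo/lzo1x.h>
 
 #define BLOCK_SIZE (256 * 1024)   /* assumed block size, for illustration only */
 
 int main(void)
 {
     unsigned char *in  = malloc(BLOCK_SIZE);
     unsigned char *out = malloc(BLOCK_SIZE + BLOCK_SIZE / 16 + 64 + 3);
     lzo_voidp wrkmem   = malloc(LZO1X_1_MEM_COMPRESS);
     size_t n;
 
     if (in == NULL || out == NULL || wrkmem == NULL || lzo_init() != LZO_E_OK)
         return 1;
 
     /* One fread/fwrite pair per block keeps the system-call count low. */
     while ((n = fread(in, 1, BLOCK_SIZE, stdin)) > 0) {
         lzo_uint out_len;
         unsigned long hdr[2];
 
         lzo1x_1_compress(in, (lzo_uint) n, out, &out_len, wrkmem);
 
         if (out_len < (lzo_uint) n) {
             /* Block shrank: emit the sizes, then the compressed payload. */
             hdr[0] = (unsigned long) n;
             hdr[1] = (unsigned long) out_len;
             fwrite(hdr, sizeof(hdr), 1, stdout);
             fwrite(out, 1, out_len, stdout);
         } else {
             /* Incompressible block: store it verbatim instead. */
             hdr[0] = (unsigned long) n;
             hdr[1] = (unsigned long) n;
             fwrite(hdr, sizeof(hdr), 1, stdout);
             fwrite(in, 1, n, stdout);
         }
     }
 
     free(in); free(out); free(wrkmem);
     return 0;
 }

A matching decompressor would read each header and then either run lzo1x_decompress on the payload or copy it through unchanged, so the worst case for incompressible data is a few bytes of per-block overhead rather than expansion of the data itself.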

Raising Lazarus 20-year-old bug

Might be of interest: [1]. 76.10.128.192 (talk) 17:36, 27 June 2014 (UTC)