
Talk:Lempel–Ziv–Oberhumer


"Requires no memory for decompression.", I think this could do with justification/expansion or correct. I haven't come across that many algorithms that require zero memory to operate! Perhaps this should talk about in-place decompression (decompress straight to the final buffer) if that is what it means? Sladen 20:56, 6 November 2006 (UTC)[reply]

Yes, I'm familiar with the algorithm, and what that meant to say was that it requires no additional memory for decompression. I'll fix the article. --Trixter 15:54, 7 November 2006 (UTC)
"Requires no memory for decompression." means that there is no need to allocate any buffers or whatever. It is enough to have source data and destination to place them. All other calculations could fit into CPU registers which are often not part of memory or address space. Hence, Oberhumer's claim is formally valid. There is decompressors who decompress it entirely in CPU registers, using no memory except source buffer holding compressed data and destination buffer holding uncompressed block. They can even partially overlap IIRC so compressed block + extra space transformed to uncompressed block at the end of operation. —Preceding unsigned comment added by 91.77.159.176 (talk) 01:40, 4 May 2009 (UTC)[reply]

Maybe there could be a more detailed description of the algorithm on the page? 24.6.254.250 05:00, 29 May 2007 (UTC)

Removed "Algorithm is thread safe" because algorithms always are. The thread safety of the reference implementation is already mentioned in the first sentence jan —Preceding unsigned comment added by 82.83.238.109 (talk) 15:15, 16 October 2007 (UTC)[reply]

What is the claim "On modern architectures, decompression is very fast; in non-trivial cases able to exceed the speed of a straight memory-to-memory copy due to the reduced memory-reads" based on? The author's own benchmarks at http://www.oberhumer.com/opensource/lzo/lzodoc.php don't support that speed claim. Skarkkai (talk) 12:23, 5 July 2009 (UTC)

The LZO code itself is a nightmare to read, so if an expert could explain the details of the algorithm in more depth, it would be greatly appreciated, especially how LZO acceptably handles non-compressible data. An explanation of why LZO is so much faster in benchmarks than most other *NIX file compressors would be nice as well; the most I can discern through strace and the like is that LZO (as used in lzop) is faster because it works with data in fixed-size chunks, drastically reducing system-call counts and eliminating some code for handling special situations. I have nothing more than a run of strace to base this on, though, and would love a clear explanation. 75.138.198.88 (talk) 06:11, 25 February 2012 (UTC)
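On the chunking observation above: the I/O pattern alone already accounts for much of what strace shows. Here is a hedged sketch of that pattern in C; the 256 KiB block size and the compress_block() stand-in are assumptions for illustration, not lzop's actual code. Each block costs one read() and one write(), so a large file needs thousands of system calls rather than millions:

    /* Sketch of the fixed-size-chunk I/O pattern described above.  The
     * block size and compress_block() (here just a copy, so the sketch
     * is self-contained) are illustrative, not lzop's actual code. */
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define BLOCK_SIZE (256 * 1024)

    /* Stand-in for the real compressor; returns bytes placed in out. */
    static size_t compress_block(const unsigned char *in, size_t in_len,
                                 unsigned char *out)
    {
        memcpy(out, in, in_len);
        return in_len;
    }

    static void compress_stream(int in_fd, int out_fd)
    {
        unsigned char *in  = malloc(BLOCK_SIZE);
        unsigned char *out = malloc(BLOCK_SIZE + BLOCK_SIZE / 16);
        ssize_t n;

        /* One read() and one write() per block: a 1 GiB input costs
         * roughly 8192 system calls in total. */
        while ((n = read(in_fd, in, BLOCK_SIZE)) > 0) {
            size_t c = compress_block(in, (size_t)n, out);
            write(out_fd, out, c);
        }
        free(in);
        free(out);
    }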

I agree. I'd like someone who understands how LZO works under the hood to explain it. What does LZO do differently from LZ77, and how does it do it? 97.82.138.203 (talk) 20:03, 18 November 2014 (UTC)
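A partial answer, hedged since the LZO source itself remains the authority: LZO is squarely in the LZ77 family, and what it changes is engineering for speed. The compressor finds matches with a single hash-table probe per position rather than a deeper search (the 64 KB of working memory mentioned in Oberhumer's docs is essentially this table), and the output is a stream of byte-aligned tokens rather than bit-packed codes, so the decompressor reduces to a simple copy loop like the sketch earlier on this page. A sketch of the one-probe match finder, with details (hash function, table size) chosen for illustration rather than taken from LZO's code:

    /* Sketch of a one-probe, hash-based match search as used by fast
     * LZ77 variants; illustrative, not LZO's exact code.  The table
     * maps a hash of the next 4 input bytes to the most recent position
     * where such bytes were seen: one lookup per step, no chain walking,
     * which is where the speed (and the modest ratio) comes from. */
    #include <stdint.h>
    #include <string.h>

    #define HASH_BITS 13
    #define HASH_SIZE (1u << HASH_BITS)

    static uint32_t hash4(const unsigned char *p)  /* needs 4 readable bytes */
    {
        uint32_t v;
        memcpy(&v, p, 4);
        return (v * 2654435761u) >> (32 - HASH_BITS);
    }

    /* Returns the previous candidate position for the bytes at ip and
     * records ip as the newest occurrence of that hash.  The caller must
     * verify the candidate's bytes actually match, since different byte
     * sequences can share a bucket. */
    static const unsigned char *
    find_match(const unsigned char *table[HASH_SIZE], const unsigned char *ip)
    {
        uint32_t h = hash4(ip);
        const unsigned char *candidate = table[h];
        table[h] = ip;
        return candidate;   /* may be NULL if the bucket is empty */
    }

When no acceptable match is found, the compressor simply extends the current literal run, which is also why incompressible data passes through with little overhead.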

Raising Lazarus 20-year-old bug


Might be of interest: [1]. 76.10.128.192 (talk) 17:36, 27 June 2014 (UTC)

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Lempel–Ziv–Oberhumer. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 20:18, 13 May 2017 (UTC)