Data compression has been listed as a level-4 vital article in Technology. If you can improve it, please do. This article has been rated as Start-Class.
WikiProject Computer science (Rated Start-class, Top-importance)
WikiProject Computing (Rated Start-class)
The content of Audio compression (data) was merged into Data compression. That page now redirects here. For the contribution history and old versions of the redirected page, please see its history; for the discussion at that location, see its talk page.
The content of Video compression was merged into Data compression. That page now redirects here. For the contribution history and old versions of the redirected page, please see its history; for the discussion at that location, see its talk page.
|This talk page is automatically archived by MiszaBot I. Any threads with no replies in 360 days may be automatically moved. Sections without timestamps are not archived.|
Use of AST for data compression
(also posted at Talk:Time stretch dispersive Fourier transform)
... [A] UCLA group, led by Bahram Jalali, holder of the Northrop Grumman Opto-Electronic Chair in Electrical Engineering, and including postdoctoral researcher Mohammad Asghari, created an entirely new method of data compression. The technique reshapes the signal carrying the data in a fashion that resembles the graphic art technique known as anamorphism, which has been used since the 1500s to create optical illusions in art and, later, film. The Jalali group discovered that it is possible to achieve data compression by stretching and warping the data in a specific fashion prescribed by a newly developed mathematical function. The technology, dubbed "anamorphic stretch transform," or AST, operates both in analog and digital domains. In analog applications, AST makes it possible not only to capture and digitize signals that are faster than the speed of the sensor and the digitizer, but also to minimize the volume of data generated in the process. AST can also compress digital records -- for example, medical data so it can be transmitted over the Internet for a tele-consultation. The transformation causes the signal to be reshaped in such a way that "sharp" features -- its most defining characteristics -- are stretched more than the data's "coarse" features.
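To make the "sharp features get more resolution than coarse features" idea concrete, here is a toy sketch. This is not Jalali's actual anamorphic stretch transform; it is plain gradient-weighted nonuniform resampling, and all function names are hypothetical, but it illustrates the same principle of spending samples where the signal varies fastest.

```python
import numpy as np

# Illustrative analogy only -- NOT the actual AST. We resample a signal on a
# nonuniform grid whose density is proportional to the local slope, so
# rapidly varying ("sharp") regions receive more of the output samples.

def warp_resample(signal, n_out):
    """Resample `signal` with sample density weighted by gradient magnitude."""
    x = np.linspace(0.0, 1.0, len(signal))
    # Density ~ |slope|, with a small floor so flat regions are not dropped.
    density = np.abs(np.gradient(signal, x)) + 1e-3
    cdf = np.cumsum(density)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])  # normalize to [0, 1]
    # Inverting the CDF turns uniform steps into a warped grid that is
    # dense where the density (slope) is high.
    u = np.linspace(0.0, 1.0, n_out)
    x_warped = np.interp(u, cdf, x)
    return x_warped, np.interp(x_warped, x, signal)

# A signal with one sharp edge at t = 0.5 and long flat tails.
t = np.linspace(0.0, 1.0, 1000)
sig = np.tanh(50.0 * (t - 0.5))
xs, ys = warp_resample(sig, 64)
# Most of the 64 output sample positions cluster near the edge, so the
# defining feature is kept at high resolution while flat regions are coarse.
near_edge = int(np.sum(np.abs(xs - 0.5) < 0.1))
```

Uniform sampling would put only about 20% of the samples within that window; the warped grid concentrates nearly all of them there.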
"Compression is driven by low bandwidth paths between high bandwidth parts of the world" Section
This section seems very out of place. The ideas it discusses are interesting, but poorly worded and seemingly speculative, with no citations. It should be reviewed. — Preceding unsigned comment added by 220.127.116.11 (talk) 14:45, 24 January 2014 (UTC)
missing answers to common questions
People may come here searching for: what is/was a packer, what is/was a cruncher? How do they correlate to modern compression types, etc.? Terms used in the eighties and nineties for compression should, I think, have some mention in here. — Preceding unsigned comment added by 18.104.22.168 (talk) 11:40, 2 March 2014 (UTC)
Compression of Random data
https://sites.google.com/site/rubikcompression/strictly-long and more explanation on that site's home page. Anyone up for an independent code-up? Given the "logic" of the usual response, an independent coding is the most obvious way to eliminate accusations of trickery. — Preceding unsigned comment added by 22.214.171.124 (talk) 00:23, 31 March 2014 (UTC)
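The "usual response" alluded to above is presumably the standard counting (pigeonhole) argument: no lossless scheme can shrink every input, because there are strictly fewer short strings than long ones. A minimal check of that arithmetic (not an implementation of the linked scheme):

```python
# Pigeonhole argument: there are 2**n bit strings of length n, but only
# 2**n - 1 bit strings of length strictly less than n (sum of 2**k, k < n).
# So no injective (i.e. losslessly decodable) code can map every n-bit
# input to a shorter output -- at least one input must grow or stay put.
n = 16
inputs = 2 ** n
shorter_outputs = sum(2 ** k for k in range(n))  # lengths 0 .. n-1
assert shorter_outputs == inputs - 1  # always exactly one short
```

This is why any claim of compressing arbitrary (random) data is met with skepticism regardless of the mechanism proposed.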
Lossless compression reduces bits by identifying and eliminating statistical redundancy... Lossy compression reduces bits by identifying unnecessary information and removing it.
What is the difference between identifying and eliminating statistical redundancy and identifying unnecessary information and removing it?
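One way to see the difference is that the first operation is reversible and the second is not. A toy sketch (illustrative only, not the article's wording): run-length encoding removes statistical redundancy and the original is recoverable exactly, while quantization discards information deemed unnecessary and the original is gone for good.

```python
# Lossless: repeated symbols are statistical redundancy; encoding them as
# (symbol, count) pairs shrinks the data but remains exactly invertible.

def rle_encode(s):
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1
        else:
            out.append([ch, 1])
    return out

def rle_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

data = "aaaabbbcca"
encoded = rle_encode(data)
assert rle_decode(encoded) == data  # lossless: original recovered exactly

# Lossy: rounding samples throws away fine detail the encoder judges
# unnecessary; no decoder can restore the discarded digits.
samples = [0.12, 0.18, 0.33, 0.81]
quantized = [round(x, 1) for x in samples]
assert quantized != samples  # information is permanently lost
```

So both phrasings describe "removing bits", but only the lossy case removes information itself rather than just a redundant representation of it.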
Looks like there was once a message here, but it got filtered through tapioca and turned into goo.