Let's say we have a binary string of N bits in total, of which p% are 1s and (100−p)% are 0s. For an avalanche-style compression of the data, we might XOR it bit by bit with an N-bit string that has 1.2·p% ones, etc., obtaining an N-bit result with 0.75·p% ones. The same approach could also be applied to optimizing the performance of multiplications or other functions using a small base of pre-resolved cases. 18.104.22.168 (talk) 20:31, 7 November 2012 (UTC)

(Take it as an IQ test if you wish, Russian-problem style.) A counting system using the digits 0, 1, 3, 7, 15 might work with a counting base b ≤ ...3, which is to say we get up to 2× data compression compared to natural binary encoding with digits 0 and 1 and counting base b = 2. Deserialization starts from the MSB. 22.214.171.124 (talk) 14:00, 23 February 2013 (UTC)
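A quick way to sanity-check the XOR proposal above, assuming the data bits and the mask bits are statistically independent: XORing a string with ones-fraction p against a mask with ones-fraction q yields ones with probability p(1−q) + q(1−p). A minimal Python sketch (the densities p = 0.3 and q = 1.2·p are just illustrative numbers taken from the comment):

    import random

    def xor_ones_fraction(p, q, n=1_000_000, seed=0):
        """Empirical fraction of 1s after XORing two independent
        random bit strings with ones-fractions p and q."""
        rng = random.Random(seed)
        ones = sum((rng.random() < p) ^ (rng.random() < q) for _ in range(n))
        return ones / n

    p, q = 0.30, 1.2 * 0.30           # q = 1.2*p, the mask density proposed above
    print(xor_ones_fraction(p, q))    # ~0.444 empirically
    print(p*(1 - q) + q*(1 - p))      # 0.444 exactly: the analytic value

For p < 0.5 the expression p(1−q) + q(1−p) is minimized at q = 0, so an independent XOR mask cannot reduce the density of ones below p; reaching the 0.75·p figure would require a mask correlated with the data, which would itself have to be stored or agreed upon.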
This makes no sense. Asymptotically, if the fraction of "1"s is p (not p%), we could use an entropy-based compression method with N(−p log₂ p − (1−p) log₂(1−p)) bits. — Arthur Rubin (talk)
Thank you very much for this formula; it really helps to make some a priori evaluation of other creative (purely creative) ideas in data compression. I understand that once p and (1−p) are not equal, there is a chance to obtain a compression gain, which is given numerically by this formula. Thank you once more! Florin 126.96.36.199 (talk) 11:58, 14 April 2013 (UTC)
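For readers who want to evaluate the bound, here is a minimal Python sketch of the entropy formula above (the function name is illustrative):

    import math

    def entropy_bits(p):
        """Binary entropy H(p) in bits per symbol; H(0) = H(1) = 0."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    # Example: 1,000,000 i.i.d. bits with a 10% ones-fraction can be
    # compressed to no fewer than about 469,000 bits asymptotically.
    N, p = 1_000_000, 0.10
    print(N * entropy_bits(p))    # ~468996

Note that for p = 0.5 the entropy is 1 bit per bit, so no compression is possible; the gain grows as p moves away from 0.5 in either direction.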
It seems strange that the usage section has sub-sections for audio and video but none for general file compression, for things such as archiving or data transport. — Preceding unsigned comment added by 188.8.131.52 (talk) 11:23, 4 September 2013 (UTC)
... [A] UCLA group, led by Bahram Jalali, holder of the Northrop Grumman Opto-Electronic Chair in Electrical Engineering, and including postdoctoral researcher Mohammad Asghari, created an entirely new method of data compression. The technique reshapes the signal carrying the data in a fashion that resembles the graphic art technique known as anamorphism, which has been used since the 1500s to create optical illusions in art and, later, film. The Jalali group discovered that it is possible to achieve data compression by stretching and warping the data in a specific fashion prescribed by a newly developed mathematical function. The technology, dubbed "anamorphic stretch transform," or AST, operates both in analog and digital domains. In analog applications, AST makes it possible not only to capture and digitize signals that are faster than the speed of the sensor and the digitizer, but also to minimize the volume of data generated in the process. AST can also compress digital records -- for example, medical data, so it can be transmitted over the Internet for a tele-consultation. The transformation causes the signal to be reshaped in such a way that "sharp" features -- its most defining characteristics -- are stretched more than the data's "coarse" features.
"Compression is driven by low bandwidth paths between high bandwidth parts of the world" Section
This section seems very out of place. The ideas it discusses are interesting, but poorly worded and seemingly speculative, with no citations. It should be reviewed. — Preceding unsigned comment added by 184.108.40.206 (talk) 14:45, 24 January 2014 (UTC)
People may come here searching for: what is/was a packer, what is/was a cruncher? How do they relate to modern compression types, etc.? Terms used in the eighties and nineties for compression should, I think, get some mention here. — Preceding unsigned comment added by 220.127.116.11 (talk) 11:40, 2 March 2014 (UTC)