Capacity optimization

From Wikipedia, the free encyclopedia

Capacity optimization is a general term for technologies used to improve storage utilization by shrinking stored data. The primary technologies used for capacity optimization are deduplication and data compression. These are delivered as software or hardware solutions, either integrated with existing storage systems or as standalone products. Deduplication algorithms look for redundancy in sequences of bytes across comparison windows. Typically using cryptographic hash functions as identifiers of unique sequences, each sequence is compared against the history of previously seen sequences; where a match is found, the first stored copy of the sequence is referenced rather than stored again. Different solutions use different window sizes for this comparison, from 4 KB blocks up to whole-file comparison, the latter known as single-instance storage (SIS).
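The hash-based deduplication described above can be illustrated with a minimal sketch in Python. This is not any particular product's implementation; it assumes fixed-size 4 KB blocks and uses SHA-256 digests as the identifiers of unique sequences:

```python
import hashlib

def deduplicate(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks, storing each unique block once.

    Returns (store, refs): store maps a SHA-256 digest to the block's
    bytes; refs is the ordered list of digests needed to rebuild data.
    """
    store = {}
    refs = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block   # first occurrence: store the block
        refs.append(digest)         # repeats cost only a reference
    return store, refs

def reconstruct(store, refs):
    """Rebuild the original byte stream from the references."""
    return b"".join(store[d] for d in refs)

# Highly redundant input: a naive copy would double the stored bytes,
# but deduplication stores each repeated block only once.
original = (b"A" * 8192 + b"unique tail") * 2
store, refs = deduplicate(original)
assert reconstruct(store, refs) == original
assert sum(len(b) for b in store.values()) < len(original)
```

Real systems differ mainly in how they choose the comparison windows (fixed-size blocks as here, content-defined variable-size chunks, or whole files for SIS) and in how the digest index is persisted and scaled.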

Capacity optimization generally refers to the use of this kind of technology in a storage system. An example of such a system is the Venti file system[1] in the Plan 9 open-source OS. There are also implementations in networking (especially wide area networking), where they are sometimes called bandwidth optimization or WAN optimization technologies.[2][3]

Commercial implementations of capacity optimization are most often found in backup/recovery storage, where the day-to-day storage of successive backup versions creates an opportunity for large space reductions using this approach. The term was first used widely in 2005.[4]

References

Capacity optimization through sensing threshold adaptation for cognitive radio networks (http://www.springerlink.com/content/fx023575w0836l04/)