Depending on its design goals, a good checksum algorithm will usually output a significantly different value, even for small changes made to the input.
This is especially true of cryptographic hash functions, which may be used to detect many data corruption errors and verify overall data integrity; if the computed checksum for the current data input matches the stored value of a previously computed checksum, there is a very high probability the data has not been accidentally altered or corrupted.
Checksum functions are related to hash functions, fingerprints, randomization functions, and cryptographic hash functions. However, each of those concepts has different applications and therefore different design goals.
By themselves, checksums are often used to verify data integrity but should not be relied upon to verify data authenticity. However, they are used as cryptographic primitives in larger authentication algorithms. For cryptographic systems with these two specific design goals, see HMAC.
Check digits and parity bits are special cases of checksums, appropriate for small blocks of data (such as Social Security numbers, bank account numbers, computer words, single bytes, etc.). Some error-correcting codes are based on special checksums which not only detect common errors but also allow the original data to be recovered in certain cases.
Parity byte or parity word
The simplest checksum algorithm is the so-called longitudinal parity check, which breaks the data into "words" with a fixed number n of bits, and then computes the exclusive or of all those words. The result is appended to the message as an extra word. To check the integrity of a message, the receiver computes the exclusive or (XOR) of all its words, including the checksum; if the result is not a word with n zeros, the receiver knows a transmission error occurred.
With this checksum, any transmission error which flips a single bit of the message, or an odd number of bits, will be detected as an incorrect checksum. However, an error which affects two bits will not be detected if those bits lie at the same position in two distinct words. If the affected bits are independently chosen at random, the probability of a two-bit error being undetected is 1/n.
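The longitudinal parity check described above can be sketched in a few lines of Python, here using 8-bit words; the helper name `xor_checksum` is illustrative, not from any standard library:

```python
def xor_checksum(data: bytes) -> int:
    """Longitudinal parity check: XOR of all 8-bit words."""
    check = 0
    for word in data:
        check ^= word
    return check

message = b"hello"
checksum = xor_checksum(message)

# The receiver XORs every word including the appended checksum;
# a result of zero means no error was detected.
assert xor_checksum(message + bytes([checksum])) == 0
```

Flipping any single bit of `message` changes exactly one bit of the XOR result, so the final check no longer comes out to zero.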
A variant of the previous algorithm is to add all the "words" as unsigned binary numbers, discarding any overflow bits, and append the two's complement of the total as the checksum. To validate a message, the receiver adds all the words in the same manner, including the checksum; if the result is not a word full of zeros, an error must have occurred. This variant too detects any single-bit error. A modular sum of this kind is used, for example, in SAE J1708.
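The modular-sum variant can be sketched as follows, again with 8-bit words; the function name `sum_checksum` is an illustrative choice, not a standard API:

```python
def sum_checksum(data: bytes) -> int:
    """Add all bytes modulo 256 and return the two's complement
    of the total, to be appended as the checksum word."""
    total = sum(data) & 0xFF      # discard overflow bits
    return (-total) & 0xFF        # two's complement of the sum

message = b"hello"
checksum = sum_checksum(message)

# The receiver adds every word including the checksum;
# a zero total (mod 256) means no error was detected.
assert (sum(message) + checksum) & 0xFF == 0
```

Because the checksum is the additive inverse of the data sum, the receiver's total always wraps around to zero for an uncorrupted message.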
The simple checksums described above fail to detect some common errors which affect many bits at once, such as changing the order of data words, or inserting or deleting words with all bits set to zero. The checksum algorithms most used in practice, such as Fletcher's checksum, Adler-32, and cyclic redundancy checks (CRCs), address these weaknesses by considering not only the value of each word but also its position in the sequence. This feature generally increases the cost of computing the checksum.
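As an example of a position-sensitive checksum, Fletcher-16 keeps two running sums: the first is a plain sum of the data words, while the second accumulates the first and therefore weights each word by its position. A minimal sketch:

```python
def fletcher16(data: bytes) -> int:
    """Fletcher-16: a position-sensitive checksum built from
    two running sums, each taken modulo 255."""
    sum1 = sum2 = 0
    for byte in data:
        sum1 = (sum1 + byte) % 255   # simple sum of the data words
        sum2 = (sum2 + sum1) % 255   # weights each word by its position
    return (sum2 << 8) | sum1

# Unlike a plain sum or XOR, reordering the words changes the checksum.
assert fletcher16(b"ab") != fletcher16(b"ba")
```

Swapping two words leaves `sum1` unchanged but alters `sum2`, which is why Fletcher's checksum catches the reordering errors that the simple checksums above miss.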
A message that is m bits long, together with its n-bit checksum, can be viewed as a corner of an (m + n)-dimensional hypercube; of the 2^(m+n) corners, only the 2^m with a correct checksum represent valid messages. A single-bit transmission error then corresponds to a displacement from a valid corner (the correct message and checksum) to one of the m + n adjacent corners. An error which affects k bits moves the message to a corner which is k steps removed from its correct corner. The goal of a good checksum algorithm is to spread the valid corners as far from each other as possible, so as to increase the likelihood that "typical" transmission errors will end up in an invalid corner.
Checksum tools
- CHK Checksum Utility, an advanced checksum tool with CRC32, ED2K (eMule/eDonkey2000), MD5, SHA1, SHA1-Base32, SHA256, SHA384, SHA512 and WHIRLPOOL support
- MD5 & SHA Checksum Utility, a standalone freeware tool which can generate and verify MD5, SHA-1, and SHA-256 hashes for a file.
- Advanced Hash Calculator, hash calculator software for multiple files for Windows which calculates CRC-32, MD2, MD4, MD5, SHA-1, SHA-256, SHA-384 and SHA-512 checksums.
- Bitser, a free Microsoft Windows application which calculates MD5, SHA-1 and SHA-256 sums for any given input file.
- checksum, a fast file, folder and drive hashing application for Windows.
- MD5 File Hasher (Digital Tronic), a Windows tool offering hash sum verification, monitoring, scheduled checks, and detailed reporting.
- cksum, a Unix command which generates both a 32-bit CRC and a byte count for any given input file.
- File Checksum Integrity Verifier (FCIV), a command-prompt utility from Microsoft which computes and verifies MD5 or SHA-1 cryptographic hash values of files.
- Hash Validation Tool (hash), a command-prompt utility which will generate/validate several types of hash values for multiple files.
- Jacksum, a Java API, usable through both a GUI and a CLI, which incorporates many checksum implementations and can be extended with additional algorithms as needed.
- RHash, an open-source CLI tool and C library which incorporates a large number of checksum implementations.
- jdigest, a Java GUI tool which generates and checks MD5 and SHA sums
- jcksum, a Java library, which can be used by developers in Java applications to calculate checksums using different algorithms.
- md5sum, a Unix command which generates an MD5 sum
- sha1sum, another Unix command which generates a SHA-1 sum
- Parchive, cross-platform software which is capable of both verifying checksums and repairing errors when they are found.
- sum, a Unix command (also ported to Win32) which generates order-independent sums; it supports two algorithms, the SYSV checksum algorithm and the BSD checksum algorithm (the default).
See also
- List of hash functions
- Luhn algorithm
- Parity bit
- Rolling checksum
- Verhoeff algorithm
- ZFS — a file system which performs automatic file integrity checking using checksums