fdupes is a program written by Adrian Lopez that scans directories for duplicate files, with options to list or delete them, or to replace duplicates with hardlinks to a single copy. It first compares file sizes and MD5 signatures, then performs a byte-by-byte check for verification.
fdupes is written in C and is released under the MIT License.
Similar programs 
Other programs that can find duplicates and run under *nix:
- dupedit - Compares many files at once without checksumming; avoids comparing files against themselves when multiple paths point to the same file.
- dupmerge - Runs on various platforms (Win32/64 with Cygwin, *nix, Linux, etc.)
- dupseek - Perl, with an algorithm optimized to reduce reads.
- fdf - Perl/C-based; runs on most platforms (Win32, *nix and probably others). Uses MD5, SHA1 and other checksum algorithms.
- freedup - POSIX-compliant C; runs across platforms (Windows with Cygwin, Linux, AIX, etc.)
- freedups - Shell script.
- fslint - Has a command-line interface and a GUI.
- liten - Pure Python command-line deduplication tool and library, using MD5 checksums and a novel byte-comparison algorithm. (Linux, Mac OS X, *nix, Windows)
- liten2 - A rewrite of the original liten, still a command-line tool but with a faster interactive mode, using SHA-1 checksums. (Linux, Mac OS X, *nix)
- rdfind - One of the few that rank duplicates by the order of the input parameters (directories to scan), so as not to delete files in "original/well-known" sources when multiple directories are given. Uses MD5 or SHA1.
- rmlint - Fast finder with a command-line interface and many options to find other lint too. (Uses MD5)
- ua - Unix/Linux command-line tool designed to work with find (and the like).
- findrepe - Free Java-based command-line tool designed for efficient searching of duplicate files; it can search within zips and jars. (GNU/Linux, Mac OS X, *nix, Windows)
- fdupe - A small script written in Perl that does its job quickly and efficiently.
- ssdeep - Identifies almost-identical files using context-triggered piecewise hashing.