The megabyte is a multiple of the unit byte for digital information. Its recommended unit symbol is MB, but sometimes MByte is used. The unit prefix mega is a multiplier of 1000000 (10^6) in the International System of Units (SI). Therefore one megabyte is one million bytes of information. This definition has been incorporated into the International System of Quantities.
However, in the computer and information technology fields, several other definitions are used, which arose for historical reasons of convenience. A common usage has been to designate one megabyte as 1048576 bytes (2^20 B), a measurement that conveniently expresses the binary multiples inherent in digital computer memory architectures. Most standards bodies, however, have deprecated this usage in favor of a set of binary prefixes, under which this quantity is designated by the unit mebibyte (MiB). A less common usage defined the megabyte as 1000×1024 (1024000) bytes.
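The gap between these conventions is easy to quantify. The following Python sketch is illustrative only (the constant names are arbitrary and not taken from any standard); it expresses the same byte count under each of the definitions mentioned above:

```python
# Illustrative constants for the three "megabyte" conventions described above.
DECIMAL_MB = 1000 ** 2   # 1000000 bytes: the SI definition
BINARY_MB = 1024 ** 2    # 1048576 bytes: the binary definition (mebibyte, MiB)
MIXED_MB = 1000 * 1024   # 1024000 bytes: the less common mixed definition

size = 50_000_000        # an arbitrary example value: 50 million bytes

print(size / DECIMAL_MB)  # 50.0        under the SI definition
print(size / BINARY_MB)   # ~47.684     under the binary definition (MiB)
print(size / MIXED_MB)    # 48.828125   under the mixed definition
```

Because 2^20 / 10^6 = 1.048576, the two main conventions differ by roughly 4.9% at the mega scale.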
The megabyte is commonly used to measure either 1000^2 bytes or 1024^2 bytes. The base-1024 interpretation originated as a technical-jargon compromise: byte multiples needed to be expressed in powers of 2 but lacked a convenient name. Because 1024 (2^10) approximates 1000 (10^3), roughly corresponding to the SI prefix kilo-, the SI prefixes began to be used for binary multiples as well. In 1998 the International Electrotechnical Commission (IEC) proposed standards for binary prefixes requiring the use of megabyte to strictly denote 1000^2 bytes and mebibyte to denote 1024^2 bytes. By the end of 2009, the IEC standard had been adopted by the IEEE, EU, ISO and NIST. Nevertheless, the term megabyte continues to be widely used with different meanings:
1 MB = 1000000 bytes (= 1000^2 B = 10^6 B) is the decimal definition recommended by the SI and by the IEC binary-prefix standard.
1 MB = 1048576 bytes (= 1024^2 B = 2^20 B) is the binary definition traditionally used for computer memory; under the IEC prefixes this quantity is the mebibyte (MiB).
1 MB = 1024000 bytes (= 1000×1024 B) is the mixed definition used to describe the formatted capacity of the 1.44 MB 3.5-inch HD floppy disk, which actually has a capacity of 1474560 bytes (see the arithmetic sketch below).
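The floppy figure can be checked directly. The short sketch below, using only the byte counts stated above, shows that the mixed 1024000-byte megabyte is the only convention under which the label comes out to exactly 1.44:

```python
formatted_capacity = 1_474_560  # bytes, the formatted capacity stated above

print(formatted_capacity / (1000 * 1024))  # 1.44     -> the advertised "1.44 MB"
print(formatted_capacity / 1000 ** 2)      # 1.47456  decimal megabytes
print(formatted_capacity / 1024 ** 2)      # 1.40625  mebibytes (MiB)
```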
Semiconductor memory doubles in size for each address line added to an integrated circuit package, which favors counts that are powers of two. The capacity of a disk drive, by contrast, is the product of the sector size, the number of sectors per track, the number of tracks per side, and the number of disk platters in the drive, and changing any one of these factors would not usually double the size. Sector sizes were set as powers of two (most commonly 512 bytes or 4096 bytes) for convenience in processing. It was a natural extension to give the capacity of a disk drive in multiples of the sector size, giving a mix of decimal and binary multiples when expressing total disk capacity.
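As a concrete illustration, the sketch below multiplies out one hypothetical drive geometry (the sector, track, and side counts are assumed for the example and are not taken from the text) and shows how a power-of-two sector size combined with arbitrary other factors yields a total that is a round multiple of neither 1000^2 nor 1024^2 bytes:

```python
# Hypothetical drive geometry; all counts here are assumed for illustration.
sector_size = 512          # bytes per sector, a power of two
sectors_per_track = 63
tracks_per_side = 16383
sides = 16                 # recording surfaces

capacity = sector_size * sectors_per_track * tracks_per_side * sides
print(capacity)              # 8455200768 bytes in total
print(capacity / 1000 ** 2)  # ~8455.20 decimal megabytes
print(capacity / 1024 ** 2)  # ~8063.51 mebibytes (MiB)
```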