The bit (a portmanteau of binary digit) is a basic unit of information used in computing and digital communications. A binary digit can have only one of two values, and may be physically represented with a two-state device. These state values are most commonly represented as either a 0 or a 1.
The two values of a binary digit can also be interpreted as logical values (true/false, yes/no), algebraic signs (+/−), activation states (on/off), or any other two-valued attribute. The correspondence between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. The length of a binary number may be referred to as its bit-length.
In information theory, one bit is typically defined as the uncertainty of a binary random variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known.
The symbol for binary digit is either simply bit (recommended by the IEC 80000-13:2008 standard) or lowercase b (recommended by the IEEE 1541-2002 standard). A group of eight binary digits is commonly called one byte, but historically the size of the byte is not strictly defined.
As a unit of information in information theory, the bit has alternatively been called a shannon, named after Claude Shannon, the founder of the field of information theory. This usage distinguishes the quantity of information from the form of the state variables used to represent it. When the logical values are not equally probable, or when a signal is not conveyed perfectly through a communication system, a binary digit in the representation of the information will convey less than one bit of information. However, the shannon unit terminology is uncommon in practice.
The encoding of data by discrete bits was used in the punched cards invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard (1804), and later adopted by Semen Korsakov, Charles Babbage, Hermann Hollerith, and early computer manufacturers like IBM. Another variant of that idea was the perforated paper tape. In all those systems, the medium (card or tape) conceptually carried an array of hole positions; each position could be either punched through or not, thus carrying one bit of information. The encoding of text by bits was also used in Morse code (1844) and early digital communications machines such as teletypes and stock ticker machines (1870).
Ralph Hartley suggested the use of a logarithmic measure of information in 1928. Claude E. Shannon first used the word bit in his seminal 1948 paper A Mathematical Theory of Communication. He attributed its origin to John W. Tukey, who had written a Bell Labs memo on 9 January 1947 in which he contracted "binary information digit" to simply "bit". Vannevar Bush had already written in 1936 of "bits of information" that could be stored on the punched cards used in the mechanical computers of that time. The first programmable computer, built by Konrad Zuse, used binary notation for numbers.
A bit can be stored by a digital device or other physical system that exists in either of two possible distinct states. These may be the two stable states of a flip-flop, two positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, the orientation of reversible double stranded DNA, etc.
For devices using positive logic, a digit value of 1 (or a logical value of true) is represented by a more positive voltage relative to the representation of 0. The specific voltages are different for different logic families and variations are permitted to allow for component aging and noise immunity. For example, in transistor–transistor logic (TTL) and compatible circuits, digit values 0 and 1 at the output of a device are represented by no higher than 0.4 volts and no lower than 2.6 volts, respectively; while TTL inputs are specified to recognize 0.8 volts or below as 0 and 2.2 volts or above as 1.
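As a minimal illustration (a Python sketch; the function name is hypothetical and the thresholds are the TTL input levels cited above), the mapping from input voltage to logical value can be expressed as a simple classifier:

```python
def ttl_input_level(voltage):
    """Classify a TTL input voltage as logical 0, logical 1, or undefined.

    Thresholds follow the TTL input specification described above:
    0.8 V or below reads as 0, 2.2 V or above reads as 1, and anything
    in between is outside the guaranteed input range.
    """
    if voltage <= 0.8:
        return 0
    if voltage >= 2.2:
        return 1
    return None  # indeterminate: not guaranteed to read consistently

print(ttl_input_level(0.2))  # 0
print(ttl_input_level(3.3))  # 1
print(ttl_input_level(1.5))  # None (forbidden region between the thresholds)
```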
Transmission and processing
Bits are transmitted one at a time in serial transmission, and several bits at a time in parallel transmission. A bitwise operation processes bits one at a time. Data transfer rates are usually measured in decimal SI multiples of the unit bit per second (bit/s), such as kbit/s.
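As a brief illustration of bitwise operations, the following Python sketch (variable names are illustrative only) uses the language's built-in operators to set, clear, and test a single bit within an integer:

```python
x = 0b0000  # start with four bits, all zero

x |= 1 << 2             # set bit 2 (counting from 0 at the least significant end)
x &= ~(1 << 2)          # clear bit 2 again
is_set = (x >> 2) & 1   # test bit 2: shift it to position 0 and mask off the rest

print(bin(x), is_set)   # 0b0 0
```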
In the earliest non-electronic information processing devices, such as Jacquard's loom or Babbage's Analytical Engine, a bit was often stored as the position of a mechanical lever or gear, or the presence or absence of a hole at a specific point of a paper card or tape. The first electrical devices for discrete logic (such as elevator and traffic light control circuits, telephone switches, and Konrad Zuse's computer) represented bits as the states of electrical relays which could be either "open" or "closed". When relays were replaced by vacuum tubes, starting in the 1940s, computer builders experimented with a variety of storage methods, such as pressure pulses traveling down a mercury delay line, charges stored on the inside surface of a cathode-ray tube, or opaque spots printed on glass discs by photolithographic techniques.
In the 1950s and 1960s, these methods were largely supplanted by magnetic storage devices such as magnetic core memory, magnetic tapes, drums, and disks, where a bit was represented by the polarity of magnetization of a certain area of a ferromagnetic film, or by a change in polarity from one direction to the other. The same principle was later used in the magnetic bubble memory developed in the 1980s, and is still found in various magnetic strip items such as metro tickets and some credit cards.
In modern semiconductor memory, such as dynamic random-access memory, the two values of a bit may be represented by two levels of electric charge stored in a capacitor. In certain types of programmable logic arrays and read-only memory, a bit may be represented by the presence or absence of a conducting path at a certain point of a circuit. In optical discs, a bit is encoded as the presence or absence of a microscopic pit on a reflective surface. In one-dimensional bar codes, bits are encoded as the thickness of alternating black and white lines.
Unit and symbol
The bit is not defined in the International System of Units (SI). However, the International Electrotechnical Commission issued standard IEC 60027, which specifies that the symbol for binary digit should be bit, and this should be used in all multiples, such as kbit for kilobit. Nevertheless, the lowercase letter b is widely used as well and was recommended by the IEEE 1541 Standard (2002). In contrast, the uppercase letter B is the standard and customary symbol for byte.
Multiples of bits
Multiple bits may be expressed and represented in several ways. For convenience of representing commonly recurring groups of bits in information technology, several units of information have traditionally been used. The most common is the byte, coined by Werner Buchholz in June 1956, which historically represented the group of bits used to encode a single character of text (until UTF-8 multibyte encoding took over) and for this reason was used as the basic addressable element in many computer architectures. Hardware design eventually converged on eight bits per byte, the size in near-universal use today. However, because of this historical dependence on the underlying hardware design, the unit octet was defined to explicitly denote a sequence of eight bits.
Computers usually manipulate bits in groups of a fixed size, conventionally named "words". Like the byte, the number of bits in a word also varies with the hardware design, and is typically between 8 and 80 bits, or even more in some specialized computers. In the 21st century, retail personal or server computers have a word size of 32 or 64 bits.
The International System of Units defines a series of decimal prefixes for multiples of standardized units which are commonly also used with the bit and the byte. The prefixes kilo (10³) through yotta (10²⁴) increment by multiples of 1000, and the corresponding units are the kilobit (kbit) through the yottabit (Ybit).
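As a simple illustration (a Python sketch; the prefix table and helper function are hypothetical, not from any standard library), a raw bit count can be expressed in these decimal multiples as follows:

```python
# Decimal (SI) prefixes as applied to the bit, per the text above.
SI_PREFIXES = {"kbit": 10**3, "Mbit": 10**6, "Gbit": 10**9, "Tbit": 10**12}

def express(bits):
    """Express a bit count in each decimal multiple of the bit."""
    return {unit: bits / factor for unit, factor in SI_PREFIXES.items()}

print(express(1_500_000))
# {'kbit': 1500.0, 'Mbit': 1.5, 'Gbit': 0.0015, 'Tbit': 1.5e-06}
```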
Information capacity and information compression
When the information capacity of a storage system or a communication channel is presented in bits or bits per second, this often refers to binary digits, that is, the capacity of computer hardware to store binary code (0 or 1, up or down, current or not, etc.). The information capacity of a storage system is only an upper bound on the actual quantity of information stored therein. If the two possible values of one bit of storage are not equally likely, that bit of storage will contain less than one bit of information. Indeed, if the value is completely predictable, then reading that value provides no information at all (zero entropic bits, because no resolution of uncertainty occurs and therefore no information is gained). If a computer file that uses n bits of storage contains only m < n bits of information, then that information can in principle be encoded in about m bits, at least on the average. This principle is the basis of data compression technology. Using an analogy, the hardware binary digits refer to the amount of storage space available (like the number of buckets available to store things), and the information content to the filling, which comes in different levels of granularity (fine or coarse, that is, compressed or uncompressed information). When the granularity is finer (when information is more compressed), the same bucket can hold more.
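The claim that an unequally distributed bit stores less than one bit of information can be quantified with the binary entropy function from information theory. A minimal Python sketch (the function name is illustrative):

```python
import math

def binary_entropy(p):
    """Information content, in bits, of a binary value that is 1 with probability p.

    H(p) = -p*log2(p) - (1-p)*log2(1-p); a fair bit (p = 0.5) yields exactly
    1 bit, and a completely predictable bit (p = 0 or 1) yields 0 bits.
    """
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))  # 1.0    -> a fair bit carries one full bit
print(binary_entropy(0.9))  # ~0.469 -> a biased bit carries less than one bit
print(binary_entropy(1.0))  # 0.0    -> a predictable bit carries no information
```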
For example, it has been estimated that the combined technological capacity of the world to store information provided 1,300 exabytes of hardware digits in 2007. However, when this storage space is filled and the corresponding content is optimally compressed, it represents only 295 exabytes of information. When optimally compressed, the resulting carrying capacity approaches Shannon information or information entropy.
In the 1980s, when bitmapped computer displays became popular, some computers provided specialized bit block transfer ("bitblt" or "blit") instructions to set or copy the bits that corresponded to a given rectangular area on the screen.
In most computers and programming languages, when a bit within a group of bits, such as a byte or word, is referred to, it is usually specified by a number from 0 upwards corresponding to its position within the byte or word. However, 0 can refer to either the most or least significant bit depending on the context.
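As a concrete illustration of the two conventions, the following Python sketch (the variable names are illustrative) extracts "bit 0" of a byte under both the LSB-0 and MSB-0 numberings:

```python
byte = 0b10110010  # an 8-bit value

# LSB-0 convention: bit 0 is the least significant bit.
lsb0_bit0 = (byte >> 0) & 1   # -> 0

# MSB-0 convention: bit 0 is the most significant bit of the 8-bit group.
msb0_bit0 = (byte >> 7) & 1   # -> 1

print(lsb0_bit0, msb0_bit0)
```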
Other information units
Other units of information, sometimes used in information theory, include the natural digit, also called a nat or nit and defined as log₂ e (≈ 1.443) bits, where e is the base of the natural logarithms; and the dit, ban, or hartley, defined as log₂ 10 (≈ 3.322) bits. This value, slightly less than 10/3, may be understood because 10³ = 1000 ≈ 1024 = 2¹⁰: three decimal digits carry slightly less information than ten binary digits, so one decimal digit carries slightly less than 10/3 binary digits. Conversely, one bit of information corresponds to about ln 2 (≈ 0.693) nats, or log₁₀ 2 (≈ 0.301) hartleys. This inverse ratio, slightly more than 3/10, reflects the same fact: 2¹⁰ = 1024 ≈ 1000 = 10³, so ten binary digits carry slightly more information than three decimal digits, and one binary digit carries slightly more than 3/10 of a decimal digit. Some authors also define a binit as an arbitrary information unit equivalent to some fixed but unspecified number of bits.
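These conversion factors can be checked directly. A minimal Python sketch (the helper names are illustrative):

```python
import math

def bits_to_nats(bits):
    """One bit equals ln 2 ≈ 0.693 nats."""
    return bits * math.log(2)

def bits_to_hartleys(bits):
    """One bit equals log10(2) ≈ 0.301 hartleys."""
    return bits * math.log10(2)

print(bits_to_nats(1))       # ~0.693
print(bits_to_hartleys(1))   # ~0.301
print(bits_to_hartleys(10))  # ~3.01 -> ten bits are slightly more than three decimal digits
```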
See also
- Integer (computer science)
- Primitive data type
- Trit (Trinary digit)
- Entropy (information theory)
- Baud (symbols per second)
- Binary numeral system
- Ternary numeral system
- Shannon (unit)
References
- Mackenzie, Charles E. (1980). Coded Character Sets, History and Development. The Systems Programming Series (1 ed.). Addison-Wesley Publishing Company, Inc. p. x. ISBN 0-201-14460-3. LCCN 77-90165. Retrieved 2016-05-22.
- "Definition of BIT".
- Anderson, John B.; Johannesson, Rolf (2006). Understanding Information Transmission.
- Haykin, Simon (2006). Digital Communications.
- "Units: B".
- Abramson, Norman (1963). Information Theory and Coding. McGraw-Hill.
- Shannon, Claude. "A Mathematical Theory of Communication" (PDF). Bell System Technical Journal. Archived from the original (PDF) on 2010-08-15.
- Bush, Vannevar (1936). "Instrumental analysis". Bulletin of the American Mathematical Society. 42 (10): 649–669. doi:10.1090/S0002-9904-1936-06390-1.
- National Institute of Standards and Technology (2008), Guide for the Use of the International System of Units. Online version.
- Bemer, Robert William (2000-08-08). "Why is a byte 8 bits? Or is it?". Computer History Vignettes. Archived from the original on 2017-04-03. Retrieved 2017-04-03.
[…] With IBM's STRETCH computer as background, handling 64-character words divisible into groups of 8 (I designed the character set for it, under the guidance of Dr. Werner Buchholz, the man who DID coin the term "byte" for an 8-bit grouping). […] The IBM 360 used 8-bit characters, although not ASCII directly. Thus Buchholz's "byte" caught on everywhere. I myself did not like the name for many reasons. […]
- Buchholz, Werner (1956-06-11). "7. The Shift Matrix". The Link System (PDF). IBM. pp. 5–6. Stretch Memo No. 39G. Archived (PDF) from the original on 2017-04-04. Retrieved 2016-04-04.
[…] Most important, from the point of view of editing, will be the ability to handle any characters or digits, from 1 to 6 bits long […] the Shift Matrix to be used to convert a 60-bit word, coming from Memory in parallel, into characters, or "bytes" as we have called them, to be sent to the Adder serially. The 60 bits are dumped into magnetic cores on six different levels. Thus, if a 1 comes out of position 9, it appears in all six cores underneath. […] The Adder may accept all or only some of the bits. […] Assume that it is desired to operate on 4 bit decimal digits, starting at the right. The 0-diagonal is pulsed first, sending out the six bits 0 to 5, of which the Adder accepts only the first four (0-3). Bits 4 and 5 are ignored. Next, the 4 diagonal is pulsed. This sends out bits 4 to 9, of which the last two are again ignored, and so on. […] It is just as easy to use all six bits in alphanumeric work, or to handle bytes of only one bit for logical analysis, or to offset the bytes by any number of bits. […]
- Buchholz, Werner (February 1977). "The Word "Byte" Comes of Age...". Byte Magazine. 2 (2): 144.
[…] The first reference found in the files was contained in an internal memo written in June 1956 during the early days of developing Stretch. A byte was described as consisting of any number of parallel bits from one to six. Thus a byte was assumed to have a length appropriate for the occasion. Its first use was in the context of the input-output equipment of the 1950s, which handled six bits at a time. The possibility of going to 8 bit bytes was considered in August 1956 and incorporated in the design of Stretch shortly thereafter. The first published reference to the term occurred in 1959 in a paper "Processing Data in Bits and Pieces" by G A Blaauw, F P Brooks Jr and W Buchholz in the IRE Transactions on Electronic Computers, June 1959, page 121. The notions of that paper were elaborated in Chapter 4 of Planning a Computer System (Project Stretch), edited by W Buchholz, McGraw-Hill Book Company (1962). The rationale for coining the term was explained there on page 40 as follows:
Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (ie, different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite, but respelled to avoid accidental mutation to bit.)
System/360 took over many of the Stretch concepts, including the basic byte and word sizes, which are powers of 2. For economy, however, the byte size was fixed at the 8 bit maximum, and addressing at the bit level was replaced by byte addressing. […]
- Blaauw, Gerrit Anne; Brooks, Jr., Frederick Phillips; Buchholz, Werner (1962), "4: Natural Data Units", in Buchholz, Werner, Planning a Computer System – Project Stretch (PDF), McGraw-Hill Book Company, Inc. / The Maple Press Company, York, PA., pp. 39–40, LCCN 61-10466, archived (PDF) from the original on 2017-04-03, retrieved 2017-04-03
- Bemer, Robert William (1959), "A proposal for a generalized card code of 256 characters", Communications of the ACM, 2 (9): 19–23, doi:10.1145/368424.368435
- "The World's Technological Capacity to Store, Communicate, and Compute Information", especially Supporting online material, Martin Hilbert and Priscila López (2011), Science (journal), 332(6025), 60-65; free access to the article through here: martinhilbert.net/WorldInfoCapacity.html
- Bhattacharya, Amitabha (2005). Digital Communication. Tata McGraw-Hill Education. ISBN 0070591172. ISBN 9780070591172.