Up to the early 1990s, many programs and data transmission channels assumed that all characters would be represented as numbers between 0 and 127 (7 bits). On computers and data links using 8-bit bytes, this left the top bit of each byte free for use as a parity bit, flag bit, or metadata control bit. 7-bit systems and data links cannot handle the more complex character codes that are commonplace in non-English-speaking countries with larger alphabets.
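As a minimal sketch of how the top bit could be repurposed, the following shows one hypothetical scheme (even parity in bit 7 over a 7-bit ASCII value) and what a 7-bit channel effectively does to any byte passing through it; the function names are illustrative, not taken from any standard:

```python
def add_even_parity(b: int) -> int:
    """Set bit 7 so the total number of 1 bits in the byte is even.

    Assumes b is a 7-bit ASCII value (0..127).
    """
    assert 0 <= b < 128              # must fit in 7 bits
    ones = bin(b).count("1")
    return b | (0x80 if ones % 2 else 0)

def strip_high_bit(b: int) -> int:
    """What a 7-bit channel effectively does: discard bit 7."""
    return b & 0x7F

# 'A' (0b1000001) already has an even number of 1 bits, so its
# parity bit stays clear; 'C' (0b1000011) has an odd count, so
# bit 7 is set. A 7-bit link silently strips that bit again.
print(add_even_parity(ord("A")))     # 65  (unchanged)
print(add_even_parity(ord("C")))     # 195 (0x80 | 67)
print(strip_high_bit(195))           # 67
```

This is why a byte with the high bit set carrying real data, rather than parity, is corrupted by such a link: the top bit is simply discarded.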
Binary files cannot be transmitted through 7-bit data channels directly. To work around this, binary-to-text encodings have been devised that use only 7-bit ASCII characters. Some of these encodings are uuencoding, Ascii85, SREC, BinHex, Kermit and MIME's Base64. EBCDIC-based systems cannot handle all characters used in uuencoded data; the Base64 encoding, however, does not have this problem.
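Base64, for example, maps every 3 bytes of binary input to 4 characters drawn from a 64-character ASCII-safe alphabet, so the result passes unharmed through a 7-bit channel. A short sketch using Python's standard library:

```python
import base64

raw = bytes([0x00, 0xFF, 0x80, 0x7F])   # arbitrary binary data,
                                         # including high-bit-set bytes
encoded = base64.b64encode(raw)          # b'AP+Afw=='

# Every byte of the encoded form is 7-bit ASCII...
assert all(b < 128 for b in encoded)

# ...and decoding recovers the original binary data exactly.
decoded = base64.b64decode(encoded)
assert decoded == raw
```

The `=` padding brings the output to a multiple of 4 characters when the input length is not a multiple of 3.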
SMTP and NNTP 8-bit cleanness 
Historically, various media were used to transfer messages, some of them supporting only 7-bit data, so in the 20th century an 8-bit message stood a high chance of being garbled in transit. Some Internet implementations, however, ignored the formal discouragement of 8-bit data and allowed bytes with the high bit set to pass through.
Many early communications protocol standards, such as RFC 780, RFC 788, and RFC 821 for SMTP, RFC 977 for NNTP, and RFC 1056, RFC 2821, and RFC 5321, were designed to work over such "7-bit" communication links. They specifically mention use of the ASCII character set "transmitted as an 8-bit byte with the high-order bit cleared to zero", and some of them explicitly restrict all data to 7-bit characters.
Later the format of email messages was re-defined in order to support messages that are not entirely US-ASCII text (text messages in character sets other than US-ASCII, and non-text messages, such as audio and images). 
The Internet community generally adds features by "extension", allowing communication in both directions between upgraded machines and not-yet-upgraded machines, rather than declaring formerly standards-compliant legacy software to be "broken" and insisting that all software world-wide be upgraded to the latest standard. In the mid-1990s, people[who?] objected to "just send 8 bits (to RFC 821 SMTP servers)", perhaps because of a perception that "just send 8 bits" is an implicit declaration that ISO 8859-1 become the new "standard encoding", forcing everyone in the world to use the same character set.[original research?] Instead, the recommended way to take advantage of 8-bit-clean links between machines is to use the ESMTP (RFC 1869) 8BITMIME extension.  Despite this, some MTAs, notably Exim and qmail, relay mail to servers that do not advertise 8BITMIME without performing the conversion to 7-bit MIME (typically quoted-printable, "Q-P conversion") required by RFC 6152. This "just-send-8" attitude does not in fact cause problems in practice, since virtually all modern email servers are 8-bit clean.
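The downgrade that RFC 6152 requires when a next-hop server does not advertise 8BITMIME is typically quoted-printable: each unsafe byte is replaced by `=` followed by its two hex digits, leaving only 7-bit ASCII on the wire. A minimal sketch of that conversion, using Python's standard `quopri` module (the Latin-1 sample body is an illustrative assumption):

```python
import quopri

# An 8-bit message body; Latin-1 is assumed here purely for illustration.
body = "Café naïve".encode("latin-1")     # contains bytes 0xE9, 0xEF

qp = quopri.encodestring(body)            # e.g. b'Caf=E9 na=EFve'

# The quoted-printable form is safe for a 7-bit SMTP hop...
assert all(b < 128 for b in qp)

# ...and the receiving side can recover the original bytes exactly.
assert quopri.decodestring(qp) == body
```

An MTA performing Q-P conversion must also rewrite the message's `Content-Transfer-Encoding` header accordingly, which is why relaying 8-bit mail unconverted, as the "just-send-8" approach does, is simpler but relies on every hop being 8-bit clean.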
References 
- RFC 780: Appendix A, RFC 788: 4.5.2., RFC 821: Appendix B, RFC 1056: 4.
- John Beck. "Email Explained". 2011.
- RFC 1428: "SMTP as defined in RFC 821 limits the sending of Internet Mail to US-ASCII characters."
- Dan Sugalski. "E-mail with Attachments". "The Perl Journal". Summer 1999. "When mail was standardized way back in 1982 with RFC822, ... The only limits placed on the body were the character set (7-bit ASCII) and the maximum line length (1000 characters)."
- RFC 2045 "Multipurpose Internet Mail Extensions, or MIME, redefines the format of messages"
- Theodore Ts'o, Keith Moore, Mark Crispin (12 September 1994). "8-bit transmission in NNTP". IETF-SMTP mail list. Retrieved 3 April 2010.
- "comp.mail.mime FAQ, part 3 "What's ESMTP, and how does it affect MIME?"". Usenet FAQs. 8 August 1997. Retrieved 3 April 2010.