I have always been under the impression that an octet was three bits representing an octal number. I guess this is because I grew up on a Varian 501, which is an octal machine. —Preceding unsigned comment added by 184.108.40.206 (talk • contribs) 07:41, 16 August 2006
- This can be the case, but generally only when talking about an IP address, because each three-digit group in a dotted-decimal IP address is represented using eight bits. I don't think the three decimal digits should themselves be called an octet; only the eight bits that make up that particular group should be, if that makes sense! There's some good info on it here: http://www.webopedia.com/DidYouKnow/Internet/2002/IPaddressing.asp
Matt512 15:24, 21 January 2007 (UTC)
- I think you are talking about two different things. What 210 was saying was that the three bits that make up an octal digit (for example, the binary number 101, which represents the octal digit 5) could be referred to as an octet. I believe this is another acceptable usage of the term "octet", but not one that is typically used in networking, since there is little call to express an IP address in octal as opposed to either decimal or hexadecimal. --ΨΦorg 21:58, 21 January 2007 (UTC)
Octet != Octal
An octet is a group of eight. Three bits representing an octal digit are just that: an octal digit, not an octet. Likewise, a number in base 6 is not a "sextet", but six digits in base 2 can be described as a sextet (of binary digits). UNIX file system permissions are typically expressed as a triplet or quartet of octal digits (corresponding to 9 or 12 binary flags).
Unless someone can provide official examples and/or a strong rationale behind using "octet" to describe a group of three bits, I think this "exception" makes no sense and should be removed. —Preceding unsigned comment added by 220.127.116.11 (talk) 02:05, 25 October 2007 (UTC)
- It's sad but true. Seems to be rarely used though, and the earliest mention online of this term I could find is from 2001. Some confuse Iran and Iraq, too, so it might not really be noteworthy after all. --18.104.22.168 (talk) 02:41, 23 November 2007 (UTC)
"Byte" and "Bit" are not homophones in French
As a native speaker from France, I have to disagree with the statement that "byte" and "bit" are homophones in the French language (at least in France). Everybody I know in the computer science and telecom community has been trained to pronounce those words the English way. As a consequence, no misunderstanding is possible. D. Barthel (Jan 2008)
- I disagree too! "Byte" and "bit" are pronounced with an English accent, so they aren't homophones. I understand that can be a source of misunderstanding for younger people who don't speak English, but that's all. Using "octet" does make it easier to remember the difference between the two, though. --Max81 (talk) 21:14, 29 January 2008 (UTC)
- I'm French Canadian and I also disagree —Preceding unsigned comment added by 22.214.171.124 (talk) 18:20, 13 March 2008 (UTC)
Grammatically, "bit" and "byte" ARE homophones in the French language, no matter what common practice is. So, if this is an encyclopedia, it would be correct to include this bit ;-) of information in the article, maybe with a hint that certain cultural practices have come to outweigh the grammatical fact. 126.96.36.199 (talk) 11:36, 13 September 2010 (UTC)
'Bite' in French
I've read that another reason the francophone world may prefer 'octet' is that 'byte' sounds like 'bite' which has a distracting meaning in French. Several online sources confirm 'bite' means 'prick'/'cock' (as in 'penis' used as a vulgar insult) in French. So the use of 'octet' doesn't just clear up 8-bit ambiguities... it avoids a possibly snicker-inducing/distracting secondary meaning, especially when spoken. Gojomo (talk) 04:08, 25 July 2012 (UTC)
From what I understand, the term "octet" came about from base 8, which was a commonly used base in computing until base 16 came into use (this needs historical details).
If I am not mistaken, early language designers used the CPU register size to determine what was what, and from there compilers became ambiguous.
Regardless, the base used was commonly representative of the CPU register size, which appears to be no longer true (confirm, and put details, e.g. Murphy law and power of 2).
If I recall correctly, there was also the nibble, which is half an octet (4 bits), and which may correspond to the earlier 3/4-bit questions above.
That being said, I think there was a fuss between the compiler writers and the CPU engineers over what a word is, and what a half word is (not confirmed).
After many years, I think it was somehow set in dirt that a word will be the size of a CPU register, and a half word will be half the size of a CPU register. Not too great for the programmer, because the confusion still goes on today... Moreover, to add to the complexity, you must take into account the floating-point instructions (a long time ago these were almost always done in software, but I guess the CPU engineers decided to integrate circuits to do it).
In conclusion, if you're building software, define an octet to be 8 bits, which matches the C char type on most platforms. Also, make use of signed and unsigned. Today it is probably okay to rely on 'limits.h'. However, if you want to be truly CPU-architecture independent (damn CPU engineers), go it your own! Otherwise, for now, always go with a power of 2.
Thought we were finished? Sorry, no. There is still signed versus unsigned. No worries: the number of values an n-bit word can hold is always a power of 2 (2^n), and the maximum unsigned value is always 1 less than that power of 2 (2^n - 1)...