WikiProject Mathematics (Rated B-class, Mid-importance)
Field: Basics
One of the 500 most frequently viewed mathematics articles.

## Sexidecimal is Correct! (not Literally but Historically)

It is true that engineers at IBM first used the bastardized Latin form "sexidecimal" to describe a base-16 numbering system. Whether the form is grammatically correct is not, and should not be, the primary question here. What matters much more is that the term is historically accurate -- it was used at a certain place and time. Language is less concerned with accuracy than with consensus, and as such it is always subject to change as soon as enough people agree that it should.

Google returns 374 results on "sexidecimal". Sounds like consensus to me.

## Other characters used

NEC, in its NEAC 1103 computer documentation from 1958, uses the term "sexadecimal" and the digit sequence 0123456789DGHJKV. See the brochure at http://archive.computerhistory.org/resources/text/NEC/NEC.1103.1958102646285.pdf.

## But WHY?

The article does nothing to explain why the hexadecimal system was invented, or why it sees so much use in computer science. Can someone address this please? What is the point of using a base-16 system?

Computers use binary. Converting from a binary representation to a hexadecimal representation is much simpler than converting from binary to decimal. You can do it rather quickly in your head by splitting a longer binary representation into 4-bit groups and converting each group to one hexadecimal digit. For example:
Representation               Description
0010100100010101             16-bit binary number
0010 | 1001 | 0001 | 0101    the same number grouped into 4-bit groups
2 | 9 | 1 | 5                each 4-bit group converted to one hexadecimal "digit"
0x2915                       hex value after conversion
With practice, converting a 4-bit binary group to a single hexadecimal "digit" becomes simple. 199.62.0.252
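The grouping procedure described above can be sketched in Python (`binary_to_hex` is a hypothetical helper name used here for illustration, not a standard function):

```python
def binary_to_hex(bits: str) -> str:
    """Convert a binary string to hex by grouping 4-bit nibbles."""
    # Pad on the left so the length is a multiple of 4.
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    nibbles = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    # Each 4-bit group maps independently to one hex digit.
    return "0x" + "".join(format(int(n, 2), "X") for n in nibbles)

print(binary_to_hex("0010100100010101"))  # the 16-bit example above -> 0x2915
```

Each nibble is converted on its own, which is exactly why the mental trick works: no carries cross the 4-bit boundaries.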
But still, why hexadecimal? Why not base 4 or base 8? If that's too "small", why not base 32? That could also be represented with 0-9 + letters.
Hexadecimal seems very random to me. It would be nice to have an explanation in the article. Lonaowna (talk) 17:39, 22 September 2014 (UTC)
Excuse my quick self-reply, but when thinking in bytes, it makes sense of course. To represent an entire byte in a single character, you would need base 256, which cannot be depicted with numbers and letters. To represent that byte in two characters, you need base 16 (16^2 = 256).
I think this should be included in the article, backed up by a proper source, of course. Lonaowna (talk) 17:53, 22 September 2014 (UTC)
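As a quick sanity check of the arithmetic above, a minimal Python sketch:

```python
# A byte has 256 possible values, and two base-16 digits cover
# exactly 16**2 == 256 of them -- no more, no less.
assert 16 ** 2 == 256

# Hence every byte value formats in exactly two hex digits:
for byte in (0x00, 0x2A, 0xFF):
    print(format(byte, "02X"))  # prints 00, 2A, FF
```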
Octal (base 8) was/is common as well. It's mainly tradition, plus a few minor practical issues. Base-4 only halves the size of the numbers, and doing arithmetic with large bases (e.g. base-32) is somewhat clumsy (for example, the multiplication table for base-32 has 1024 entries; hex is bad enough at 256!). Remember that these traditions largely predate common calculators with hex or octal support (the TI Programmer was introduced in 1978, for something like $50 -- about $200 today), and many of us learned to do hex (and octal and binary) arithmetic by hand on a regular basis.
Octal was commonly used on a number of machines with six-bit characters (or some multiple thereof), for example many of the 12/18/36-bit machines from DEC (PDP-1/4/5/6/7/8/9/10/15 and DECSystem-10/20, for example), and often carried over to the same manufacturer's 8-bit line (PDP-11 and VAX, for example). Octal works well with six-bit characters because you need exactly two octal digits per character. It's a bit clumsy for 8-bit characters, since you need three octal digits (which cover nine bits, one more than needed). Further, using octal on a machine with (say) 16-bit words leads to a less clear separation of bytes. For example, 0x89AB is a word of two bytes, one 0x89 and the other 0xAB. In octal, that's 0104653, and the two bytes are 0211 and 0253, which leaves the byte subdivisions much less clear.
Base-32 is also clumsy, and it actually helps little. You still need two base-32 digits for an eight-bit byte, and four for a 16-bit word, which is no improvement over hex, and it's clumsier as well, having both more tedious arithmetic and a lack of even partitioning of the characters. For 32-bit words you'd actually save an entire digit (7 vs. 8 in hex), so a minor gain, while leaving all the pain.
So basically it's because, if you start with binary, you want a power-of-two base for your shorthand notation, and 16 just happens to be the most practical size in most modern cases (and 8 was a practical size on many older machines).
32 is too big, 4 is too small, and 16 fits better than 8. Rwessel (talk) 04:15, 23 September 2014 (UTC)
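The byte-boundary point in the reply above can be checked numerically; a minimal Python sketch using the same 0x89AB example:

```python
# The word 0x89AB and its two bytes 0x89 and 0xAB.
word = 0x89AB
hi, lo = word >> 8, word & 0xFF

# In hex, the word's digits split cleanly at the byte boundary:
print(format(word, "04X"), "=", format(hi, "02X"), "+", format(lo, "02X"))  # 89AB = 89 + AB

# In octal they do not: 0104653 is not 0211 followed by 0253.
print(format(word, "07o"), "vs", format(hi, "03o"), "and", format(lo, "03o"))
```

Because 4 divides 8 evenly but 3 does not, hex digits always line up with byte boundaries while octal digits straddle them.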
That's a great explanation. Thanks! It might be useful to add something like this to the article. Lonaowna (talk) 21:55, 24 September 2014 (UTC)

## Hidden comment in article body that should have been posted here

Regarding the following text in the article: "In typeset text, hexadecimal is often indicated by a subscripted suffix such as 5A3₁₆, 5A3SIXTEEN" This comment was appended: "this seems hugely verbose and i can't say i've ever seen it does anyone here [have] a source?" by: Plugwash 23:13, 10 July 2005 (UTC).