Talk:Hexadecimal

WikiProject Mathematics (Rated B-class, Mid-importance; Field: Basics)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
One of the 500 most frequently viewed mathematics articles.

Sexidecimal is Correct! (not Literally but Historically)

It is true that engineers at IBM first used the bastardized Latin form Sexidecimal to describe a base-16 numbering system. Whether the form is grammatically correct is not, and should not be, the primary question here. What matters much more is that the term is accurate historically -- it was used at a certain place and time. Language is less concerned with Accuracy than with Consensus. As such, it is always subject to change as soon as enough people agree that it should.

Google returns 374 results on "sexidecimal". Sounds like consensus to me.

Other characters used

NEC, in its 1958 documentation for the NEAC 1103 computer, used the term "sexadecimal" and the digit sequence 0123456789DGHJKV. See the brochure at http://archive.computerhistory.org/resources/text/NEC/NEC.1103.1958102646285.pdf.
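
For illustration (purely this editor's sketch; the helper name to_base16 and everything else below is invented for the example, not taken from the brochure), the NEAC sequence is just a different alphabet for the same base, so one conversion routine handles both:

    #include <stdio.h>

    /* Render v in base 16 using an arbitrary 16-character digit set.
       Illustrative sketch only; nothing here is from the NEC brochure. */
    static void to_base16(unsigned v, const char *digits, char *out)
    {
        char tmp[2 * sizeof v + 1];   /* enough for every digit of an unsigned */
        int i = 0;
        do {
            tmp[i++] = digits[v % 16];
            v /= 16;
        } while (v);
        while (i > 0)
            *out++ = tmp[--i];        /* reverse into most-significant-first order */
        *out = '\0';
    }

    int main(void)
    {
        char buf[16];
        to_base16(43981, "0123456789DGHJKV", buf);   /* NEAC 1103 digit set */
        printf("NEAC style:   %s\n", buf);           /* prints DGHJ */
        to_base16(43981, "0123456789ABCDEF", buf);   /* modern digit set */
        printf("modern style: %s\n", buf);           /* prints ABCD */
        return 0;
    }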


But WHY?

The article does nothing to explain why the hexadecimal system was invented, or why it sees so much use in computer science. Can someone address this please? What is the point of using a base-16 system?

Computers use binary. Converting from a binary representation to a hexadecimal representation is much simpler than converting from binary to decimal. You can do it quickly in your head by splitting a longer binary representation into 4-bit groups and converting each group to a hexadecimal digit. For example:
Representation               Description
0010100100010101             16-bit binary number representation
0010 | 1001 | 0001 | 0101    the same number, grouped into 4-bit groups
2 | 9 | 1 | 5                each 4-bit group converted to one hexadecimal "digit"
0x2915                       the hex value after conversion
With practice, converting a 4-bit binary group to a single hexadecimal "digit" becomes second nature. 199.62.0.252
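
For anyone who wants to see the grouping done mechanically, here is a minimal C sketch of the table above (the variable names and output format are this example's own):

    #include <stdio.h>

    int main(void)
    {
        unsigned value = 0x2915;   /* 0010 1001 0001 0101, the 16-bit example above */

        /* walk the value one 4-bit group (nibble) at a time, most significant
           first; each nibble maps to exactly one hexadecimal digit */
        for (int shift = 12; shift >= 0; shift -= 4) {
            unsigned nibble = (value >> shift) & 0xF;
            for (int bit = 3; bit >= 0; bit--)
                putchar((nibble >> bit) & 1 ? '1' : '0');
            printf(" -> %X\n", nibble);
        }
        printf("together: 0x%04X\n", value);
        return 0;
    }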
But still, why hexadecimal? Why not base 4 or base 8? If that's too "small", why not base 32? That could also be represented with 0-9 + letters.
Hexadecimal seems very random to me. It would be nice to have an explanation in the article. Lonaowna (talk) 17:39, 22 September 2014 (UTC)
Excuse my quick self-reply, but when thinking in bytes, it makes sense, of course. To represent an entire byte in a single character, you would need base 256, which cannot be depicted with numbers and letters (ten digits plus 26 letters give only 36 symbols). To represent that byte in two characters, you need base 16 (16^2 = 256).
I think this should be included in the article, backed up by a proper source, of course. Lonaowna (talk) 17:53, 22 September 2014 (UTC)
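
As a quick sanity check of the 16^2 = 256 point (just a throwaway C loop of this editor's own, not anything from the article): every possible byte value prints in exactly two hex digits, no more and no fewer:

    #include <stdio.h>

    int main(void)
    {
        /* all 256 byte values, 16 per row; %02X always yields two digits */
        for (unsigned b = 0; b < 256; b++)
            printf("%02X%c", b, (b % 16 == 15) ? '\n' : ' ');
        return 0;
    }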
Octal (base 8) was/is common as well. It's mainly tradition, plus a few minor practical issues. Base-4 only halves the size of the numbers, and doing arithmetic with large bases (e.g. base-32) is somewhat clumsy (for example, the multiplication table for base-32 has 1024 entries - hex is bad enough at 256!). Remember that these traditions largely predate common calculators with hex or octal support (the TI Programmer was introduced in 1978, for something like $50 - about $200 today), and many of us learned to do hex (and octal and binary) arithmetic by hand on a regular basis.

Octal was commonly used on a number of machines with six-bit characters (or some multiple thereof), for example many of the 12/18/36-bit machines from DEC (PDP-1/4/5/6/7/8/9/10/15 and DECSystem-10/20, for example), and often carried over to the same manufacturer's 8-bit-byte lines (PDP-11 and VAX, for example). Octal works well with six-bit characters because you need exactly two octal digits per character. It's a bit clumsy for 8-bit characters, since you need three octal digits (which cover nine bits, not eight). Further, using octal on a machine with (say) 16-bit words leads to a less clear separation of bytes. For example, 0x89AB is a word of two bytes, one 0x89 and the other 0xAB. In octal, that word is 0104653, and the two bytes are 0211 and 0253, which leaves the byte subdivisions much less clear.

Base-32 is also clumsy, and it actually helps little. You still need two base-32 digits for an eight-bit byte, and four for a 16-bit word, which is no improvement over hex, and it's clumsier as well, having both more tedious arithmetic and a lack of even partitioning of the characters. For 32-bit words you'd actually save an entire digit (7 vs. 8 in hex), so a minor gain, while leaving all the pain. So basically it's because, if you start with binary, you want a power-of-two base for your shorthand notation, and 16 just happens to be the most practical size in most modern cases (and 8 was a practical size on many older machines). 32 is too big, 4 is too small, and 16 fits better than 8. Rwessel (talk) 04:15, 23 September 2014 (UTC)
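
The 0x89AB example is easy to verify; here is a small C sketch of it (this editor's own, using only standard printf formatting). The two hex digits of each byte appear verbatim inside the word's hex form, while the octal digits straddle the byte boundary:

    #include <stdio.h>

    int main(void)
    {
        unsigned word = 0x89AB;
        unsigned hi = (word >> 8) & 0xFF;   /* high byte, 0x89 */
        unsigned lo = word & 0xFF;          /* low byte,  0xAB */

        printf("hex:   word=%04X   bytes=%02X %02X\n", word, hi, lo);  /* 89AB / 89 AB */
        printf("octal: word=%07o  bytes=%03o %03o\n", word, hi, lo);   /* 0104653 / 211 253 */
        return 0;
    }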
That's a great explanation. Thanks! It might be useful to add something like this to the article. Lonaowna (talk) 21:55, 24 September 2014 (UTC)

Hidden comment in article body that should have been posted here

Regarding the following text in the article: "In typeset text, hexadecimal is often indicated by a subscripted suffix such as 5A3₁₆ or 5A3_SIXTEEN", this comment was appended: "this seems hugely verbose and i can't say i've ever seen it does anyone here [have] a source?" by Plugwash 23:13, 10 July 2005 (UTC).

Computing additions in lede

In [this edit], User:Nimur added computer-programming-oriented info to the lede. As a programmer myself, I agree with what it says, but is it right to emphasize this use in the lede? Hexadecimal isn't just for computing. Also, the new lede's "0x" notation emphasis contradicts the "Representation" section just after the lede. Unless someone objects, I'll revert this per WP:BRD (which I'm not quite doing in order -- that's just how I roll). A D Monroe III (talk) 17:31, 20 June 2014 (UTC)

No worries. I was proactively responding to a Computing reference desk discussion in which another user was confused: Hexadecimal question (June 17). If you can edit my changes to clarify the lede, please feel free; or if you strongly feel that the lede was clearer before my changes, please feel free to revert. Nimur (talk) 22:40, 20 June 2014 (UTC)
Hm. So, if I follow this, the change was in response to the lede being incomprehensible; programming use was introduced as a way of better explaining this. I agree the original lede was not very helpful. So, I won't revert, but still don't want to rely so much on 'C' programming use right from the start. I'll have to think about this. A D Monroe III (talk) 23:10, 24 June 2014 (UTC)

Single-character pence representation on ICT 3100

User:188.29.21.86 added information about the pre-decimalization storage/representation of British Sterling currency values. Several schemes for single-character pence representation existed. Typically, on punched cards, a lone 11 punch or 12 punch represented 10d or 11d. Both mappings were actually commonly used: the B.S.I. mapping was 11/12-punch to 11d/10d, and the "IBM" mapping was 11/12-punch to 10d/11d (although IBM provided support for both mappings in a number of products, for example some of their COBOL compilers). For at least EBCDIC interpretations of punched card hole patterns, these were the "-" and "&" characters (other interpretations differed).

Anyway, none of that has any connection to hex, unless ICT used their chosen representations of 10d and 11d as two of the hex digits (presumably 10 and 11), but that would still leave them needing a way to represent hex digits 12-15. That seems unlikely, so I reverted the change; but if it can be documented, we should put it back, including information about what was done with the 12-15 digits. Rwessel (talk) 14:23, 22 September 2014 (UTC)