Sexidecimal is Correct! (not Literally but Historically)
It is true that engineers at IBM first used the bastardized Latin form "sexidecimal" to describe a base-16 numbering system. Whether the form is grammatically correct is not, and should not be, the primary question here. What is much more important is that the term is historically accurate -- it was actually used at a certain place and time. Language is less concerned with accuracy than with consensus. As such, it is always subject to change as soon as enough people agree that it should.
Google returns 374 results on "sexidecimal". Sounds like consensus to me.
Other characters used
NEC, in the NEAC 1103 computer documentation from 1958, uses the term "sexadecimal" and the digit sequence 0123456789DGHJKV. See the brochure at http://archive.computerhistory.org/resources/text/NEC/NEC.1103.1958102646285.pdf.
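As an illustration (not taken from the brochure itself), here is a short Python sketch of how NEC's digit set maps onto ordinary base-16 values; the helper name is made up for this example:

```python
# NEC NEAC 1103 "sexadecimal" digit set, per the 1958 brochure cited above.
NEC_DIGITS = "0123456789DGHJKV"

def to_nec_sexadecimal(n: int) -> str:
    """Render a non-negative integer using NEC's base-16 digit characters."""
    if n == 0:
        return "0"
    out = []
    while n:
        out.append(NEC_DIGITS[n % 16])  # least significant digit first
        n //= 16
    return "".join(reversed(out))

# Digits 0-9 match standard hex; A-F become D, G, H, J, K, V.
print(to_nec_sexadecimal(0xABCD))  # DGHJ
```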
The article does nothing to explain why the hexadecimal system was invented, or why it sees so much use in computer science. Can someone address this please? What is the point of using a base-16 system?
- Computers use binary. Converting from a binary representation to a hexadecimal representation is much simpler than converting from binary to decimal. The conversion can be performed rather quickly in your head by splitting a longer binary representation into 4-bit groups and converting each 4-bit group to a single hexadecimal digit. For example:
Representation               Description
0010100100010101             16-bit binary number
0010 | 1001 | 0001 | 0101    the same number split into 4-bit groups
2 | 9 | 1 | 5                each 4-bit group converted to one hexadecimal "digit"
0x2915                       hex value after conversion
- With time, it becomes simple to convert from a 4-bit binary representation to a 1-hexadecimal "digit" representation. 220.127.116.11
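The grouping trick described above can be sketched in a few lines of Python (the helper name bin_to_hex is chosen just for this illustration):

```python
def bin_to_hex(bits: str) -> str:
    """Convert a binary string to hex by grouping it into 4-bit nibbles."""
    assert len(bits) % 4 == 0, "pad to a multiple of 4 bits first"
    digits = "0123456789ABCDEF"
    # Take each 4-bit group, interpret it as an integer 0-15, and
    # look up the corresponding hexadecimal digit.
    return "0x" + "".join(
        digits[int(bits[i:i + 4], 2)] for i in range(0, len(bits), 4)
    )

print(bin_to_hex("0010100100010101"))  # 0x2915
```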
- Excuse my quick self-reply, but when thinking in bytes, it makes sense, of course. To represent an entire byte in a single character, you would need base 256, which cannot be depicted with numbers and letters alone. To represent that byte in two characters, you need base 16 (16^2 = 256).
- I think this should be included in the article, backed up by a proper source, of course. Lonaowna (talk) 17:53, 22 September 2014 (UTC)
- Octal (base 8) was/is common as well. It's mainly tradition, plus a few minor practical issues. Base-4 only halves the size of the numbers, and doing arithmetic with large bases (e.g. base-32) is somewhat clumsy (for example, the multiplication table for base-32 has 1024 entries - hex is bad enough at 256!). Remember that these traditions largely predate common calculators with hex or octal support (the TI Programmer was introduced in 1978, for something like $50 - about $200 today), and many of us learned to do hex (and octal and binary) arithmetic by hand on a regular basis.

Octal was commonly used on a number of machines with six-bit characters (or some multiple thereof), for example many of the 12/18/36-bit machines from DEC (PDP-1/4/5/6/7/8/9/10/15 and DECSystem-10/20, for example), and often carried over to the same manufacturer's 8-bit line (PDP-11 and VAX, for example). Octal works well with six-bit characters because you need exactly two octal digits per character. It's a bit clumsy for 8-bit characters, since you need three octal digits (which can encode nine bits, so one bit is wasted). Further, using octal on a machine with (say) 16-bit words leads to a less clear separation of bytes. For example, 0x89AB is a word of two bytes, one 0x89 and the other 0xAB. In octal, that's 0104653, and the two bytes are 0211 and 0253, which leaves the byte subdivisions much less clear.

Base-32 is also clumsy, and it actually helps little. You still need two base-32 digits for an eight-bit byte, and four for a 16-bit word, which is no improvement over hex, and it's clumsier as well, having both more tedious arithmetic and no even partitioning of the characters. For 32-bit words you'd save an entire digit (7 vs. 8 in hex) - a minor gain, while keeping all the pain. So basically it's because, if you start with binary, you want a power-of-two base for your shorthand notation, and 16 just happens to be the most practical size in most modern cases (and 8 was a practical size on many older machines).
32 is too big, 4 is too small, and 16 fits better than 8. Rwessel (talk) 04:15, 23 September 2014 (UTC)
Hidden comment in article body that should have been posted here
Regarding the following text in the article: "In typeset text, hexadecimal is often indicated by a subscripted suffix such as 5A3<sub>16</sub>, 5A3<sub>SIXTEEN</sub>", this comment was appended: "this seems hugely verbose and i can't say i've ever seen it does anyone here a source?" by Plugwash 23:13, 10 July 2005 (UTC).
Computing additions in lede
In [this edit], User:Nimur added computer-programming-oriented info to the lede. As a programmer myself, I agree with what it says, but is it right to emphasize this use in the lede? Hexadecimal isn't just for computing. Also, the new lede's emphasis on "0x" notation contradicts the "Representation" section just after the lede. Unless someone objects, I'll revert this per WP:BRD (which I'm not quite doing in order -- that's just how I roll). A D Monroe III (talk) 17:31, 20 June 2014 (UTC)
- No worries. I was proactively responding to a Computing reference desk discussion, in which another user was confused: Hexadecimal question, (June 17). If you can edit my changes to clarify the lede, please feel free; or if you strongly feel that the lede was more clear before my changes, please feel free to revert. Nimur (talk) 22:40, 20 June 2014 (UTC)
- Hm. So, if I follow this, the change was in response to the lede being incomprehensible; programming use was introduced as a way of better explaining this. I agree the original lede was not very helpful. So, I won't revert, but still don't want to rely so much on 'C' programming use right from the start. I'll have to think about this. A D Monroe III (talk) 23:10, 24 June 2014 (UTC)
Single-character pence representation on ICT 3100
User:18.104.22.168 added information about the pre-decimalization storage/representation of British Sterling currency values. Several schemes for single-character pence representation existed. Typically on punched cards, a lone 11-punch or 12-punch represented 10d or 11d. Both mappings were actually commonly used: the B.S.I. mapping was 11/12-punch to 11d/10d, and the "IBM" mapping was 11/12-punch to 10d/11d (although IBM provided support for both mappings in a number of products, for example, some of their Cobol compilers). For at least EBCDIC interpretations of punched-card hole patterns, these were the "-" and "&" characters (other interpretations differed). Anyway, none of that has any connection to hex, unless ICT used their chosen representation of 10d and 11d as two of the hex digits (presumably 10 and 11), but that would still leave them needing a way to represent hex digits 12-15. That seems unlikely, and so I reverted the change, but if it can be documented, we should put it back - including information about what was done with the 12-15 digits. Rwessel (talk) 14:23, 22 September 2014 (UTC)