Talk:36-bit

From Wikipedia, the free encyclopedia

It also allowed the storage of six alphanumeric characters encoded in a six-bit character encoding.

Can someone give an example for a six-bit character encoding? --Abdull 18:23, 26 January 2007 (UTC)

See sixbit, which I just created. --Macrakis 20:50, 26 January 2007 (UTC)

Rename as "36-bit"?

I suggest this article be renamed "36-bit", for consistency with the other articles using Template:N-bit . Letdorf (talk) 14:56, 25 March 2008 (UTC).

C

The C programming language requires that all memory be accessible as bytes, so C implementations on 36-bit machines use 9-bit bytes.

I don't believe that that's true. The C language requires that a C compiler recognize the datatype "char", but puts few restrictions on its size, other than that "char" can't be larger than "short", "int", or "long". As far as the requirements of the C language, it would be perfectly acceptable for a "char" to always occur on a 36-bit-word boundary and to occupy any or all of that 36-bit word, again provided only that "short" was no smaller. If C compilers on historic 36-bit mainframe computers used 9-bit bytes, that was a choice of the compiler authors in the interest of making the most efficient use of memory, which (by 2011 standards) was shockingly limited and appallingly expensive — it wasn't a requirement. In comparison, the Pascal programming language supports both types of character storage, and allows the choice to be made by the Pascal program author ("array" vs. "packed array") rather than by the Pascal compiler author; C gives that choice to the C compiler author only. 76.100.17.21 (talk) 10:50, 23 January 2011 (UTC)

What is it that you believe is not true in the above statement? "The C programming language requires that all memory be accessible as bytes"? Or "The C programming language requires that all memory be accessible as bytes"? The first one is a requirement that limits the byte length to divisors of 36, i.e. 6, 9, 12, 18 and 36. Of these 9 was chosen being the smallest practical length (6 being too small). The restriction imposed by C is that within words there cannot be bits not accessible by the char type, not the matter that you discuss above. I believe that your argumentation misses the point discussed in the article, so I am removing the "dubious" remark. 129.112.109.245 (talk) 23:10, 1 April 2011 (UTC)

I'm not sure what the preceding note was asking about, as the same quoted text was offered for both alternatives, but I am sure that the preceding author is confusing the general requirements of the C language with the specific design decisions, however reasonable and well chosen, of compiler developers. C requires that a compiler support a scalar datatype named "char", with no restrictions on its size, offset, or alignment within a machine word; the only constraint is that a "char" must be large enough to hold a character of "the basic execution character set". Note how minimal that constraint is! The basic execution character set is not required to be ASCII, EBCDIC, or any other particular representation; it's not required to be the same as the compilation / source code character set; and there may even be multiple execution / run-time character sets, of which only the basic one need fit within a "char". On wide-word, word-addressable hardware, there is no requirement that multiple C "char"s be stored within a word, and if they are, there is no requirement that they fill all available bits without gaps, or even that they be spaced uniformly within that word. So the claim that "the C programming language requires that all memory be accessible as bytes" is wrong -- but I'll be conservative and just tag it "dubious" rather than deleting it. If I'm the one who's wrong, please cite the parts of the C standard document that say so, and correct me. :) And as for the second part of the original claim, that "C implementations on 36-bit machines use 9-bit bytes", not only does that not follow from the preceding part of the claim, but such a sweeping statement can't be supported without citing documentation for every C compiler which ever ran on 36-bit machines.
If there are no objections after a reasonable period, I'll just edit the article to mention that, within the C language, the use of 9-bit chars packed four per 36-bit word is a natural and efficient choice of compiler designers, and omit the claim that the requirements of the C language somehow dictate that result. 76.100.17.21 (talk) 00:24, 23 October 2011 (UTC)

The C standard requires that character types have at least 8 bits, the signed char type have a range of at least -127 to +127, and that the unsigned char type have a range of at least 0 to 255 (ISO/IEC 9899:1999 section 5.2.4.2.1), and that all C types other than bitfields be a multiple of the size of a character (section 6.2.6.1 paragraphs 2 and 4). The only character sizes that satisfy this requirement on a machine with a 36-bit word are 9, 12, 18, and 36. --Brouhaha (talk) 03:18, 23 October 2011 (UTC)

The Marshall Cline and other references in this article seem to support these surprising-to-me requirements. I added the Marshall Cline reference to this article. It says there *is* a requirement that C "char"s "fill all available bits without gaps" and that "the C programming language requires that all memory be accessible as bytes". It uses the 36-bit PDP-10 as an example, and derives the same "9, 12, 18, and 36" options that Brouhaha lists. As far as I can tell, the only reason for those requirements is so memcpy() can copy arbitrary data, of any data type, from one "C object" or "plain old data" struct to another, without needing to know exactly what kind of data it is, and without losing any bits or accidentally overwriting either neighboring struct. I agree that this is a surprising and apparently unnecessary restriction, at least to people like us. (People who are familiar with Pascal "packed array" and unpacked arrays, and C data structure alignment, and the "Unpredictable struct construction" section of "The Top 10 Ways to get screwed by the "C" programming language"[1]). --DavidCary (talk) 21:12, 6 April 2013 (UTC)

address space

The following statement appears in the article:

These computers used 18-bit word addressing, not byte addressing, giving an address space of 2^18 36-bit words,

The computers referenced include the IBM 7090, which had only 15 bits of address space, or 32,768 36-bit words. If there is no objection I will replace the statement in the article with the following:

These computers had addresses 15 to 18 bits in length. The addresses referred to 36-bit words, so the computers were limited to addressing between 32768 and 262144 words, or 196608 to 1572864 characters. John Sauter (talk) 04:24, 4 February 2012 (UTC)

Good point. However, I doubt that every 36-bit computer had at least 32768 words of memory installed. Perhaps it would be simpler and more accurate to state something more like:

These computers had addresses 15 to 18 bits in length. The addresses referred to 36-bit words, so the computers were limited to addressing at most 262144 words, or at most 1572864 characters.

--DavidCary (talk) 02:15, 7 April 2013 (UTC)