Talk:Integer (computer science)

From Wikipedia, the free encyclopedia
WikiProject Computer science (Rated C-class, Mid-importance)
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
C: This article has been rated as C-Class on the project's quality scale.
Mid: This article has been rated as Mid-importance on the project's importance scale.
WikiProject Computing / Software (Rated C-class, Mid-importance)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
C: This article has been rated as C-Class on the project's quality scale.
Mid: This article has been rated as Mid-importance on the project's importance scale.
This article is supported by WikiProject Software (marked as Mid-importance).

homed integer[edit]

homed integer pops up in the VisualC++ header files in reference to the DEC Alpha. Anybody know what that's about? Hackwrench 21:08, 5 November 2005 (UTC)

Finally found the relevant MSDN article.[1] Hackwrench 22:48, 5 November 2005 (UTC)

Current correct URL as of 2-10-2014: [2] Jimw338 (talk) 00:43, 11 February 2014 (UTC)

use case column[edit]

The column of uses for 64-bit mentions "very large numbers". This is quite subjective; for many people, 32-bit is already "very large". And it doesn't say anything about where those very large numbers are actually used. It would be nice if someone could find some good concise wording, e.g. "file sizes for modern multi-gigabyte system files, directories, and HDDs, or network transfer rates and transfer amounts". — Preceding unsigned comment added by (talk) 10:15, 6 May 2013 (UTC)


(this was originally under Quadword)

I think this probably isn't adequate. I mean, I know what "unit," "computer memory or storage," "four," and "word" all mean, but I still don't know what "quadword" means (at least, I don't think I do). Isn't that interesting? --LS

That is because no one has explained how many bytes are in a word. In IBM/370 Assembler a word has 32 bits or 4 bytes. A doubleword is the length of 2 words, thus 64 bits or 8 bytes. I was entering something about this under word with a one-word instruction as an example and found someone disagreeing with me---I give up. Rose Parks

problem is that different machines have different word sizes

OK, different problem: if you use "word" without further ado, many people reading the article won't know that you mean something different from what is ordinarily meant by "word."

I found this link on a page with byte...what was I to think? RoseParks

The article asserts that "word" now usually means 16 bits, but I think it still means "most efficient unit size on some particular architecture" or "architecture-defined sequential packing of bits". What do others think? --drj

I think the author is showing his/her IBM PC x86 bias by saying a word is 16 bits.

I have been around enough architectures that I consciously don't decide how big a word is, until someone tells me.

At some point in the past, drj was completely right: word meant the machine's "native" bit width, i.e. registers would always contain a word. But I think the trend to always call 16 bits a word, even outside of the now quite rare 16-bit beasts, is definitely there. This may be due to the Wintel juggernaut, but it may also be due to there being a need to call 16 bits (and 32 bits, etc.) something. Word is certainly not the best choice, but most people who are not chip designers seem to find it adequate.

Despite the dominance of Wintel, I still couldn't find enough evidence to support the case for word usually meaning 16 bits (for example, count "16 bit word" versus "32 bit word" on AltaVista). So I deleted that claim from the head page. I can believe that among programmers who have only been exposed to Wintel boxes word usually means 16 bits, but that's not what was claimed. I don't really want to fight a battle over this, but I would like some evidence. --drj

Likewise I'm not sure about 'longword' and 'long' as always meaning 32 bits. There are existing architectures where the C type 'long' is 64 bits. --Matthew Woodcraft

I think the article is clear enough about the terms being somewhat ambiguous (how's that for a .sig quote!) --LDC

I know it's incorrect to say "That variable should have been declared as long, which has at least 32 bits on any computer". C does not guarantee that long has at least 32 bits, only that it has >= the number of bits that int has. E.g. says "C language standards specify a set of relationships between the various data types but deliberately do not define actual sizes.". 01:47, 6 December 2008 (UTC)
The C standard does require that a long be able to represent all values, inclusive, between -2^31+1 (for non-two's complement systems) and 2^31-1. —Preceding unsigned comment added by (talk) 00:34, 22 February 2010 (UTC)


It strikes me that some of the information on this page is duplicated in the articles word, byte etc. Perhaps we should move it to a single place, and redirect from the others? --Uriyan

SI units?[edit]

picking a nit: As far as I know (and the SI entry supports me), terms like "megabyte" are not, in any way, SI measurements. The section that compares the power-of-ten meanings of the prefixes with the power-of-two meanings of the prefixes is fine, because it is describing the prefixes as measured in a metric fashion (i.e. powers of ten). I decided to drop all the talk about hard drives and just leave the illustration of the difference, and the cross-reference to binary prefix, where the controversy is covered much better. Zack 18:23, 27 August 2005 (UTC)

It's not about SI *units*, it's about SI *prefixes*. Examples of units are the metre, gram, pascal, kelvin, and joule. Mega has always meant and will always mean "million", which is exactly 10^6, or 1000000. -- 16:04, 18 October 2006 (UTC)

Maybe cover the int() function present in lots of languages/math packages? — Omegatron 07:27, 17 October 2005 (UTC)


Why are char datatypes discussed here? char datatypes represent a character, not an integer. And how can a signed char exist (characters are textual, not numeric)? -- (talk) 14:44, 7 January 2008 (UTC)

Well, in C/C++ (and probably others), chars can be signed or unsigned, and if explicitly declared as such, are considered an integral type. Oli Filth(talk) 19:38, 7 January 2008 (UTC)
Thank you for clearing that one up. By the way, (as far as I know) Java's char type is a strict Unicode implementation (I'm not so sure about .NET, though) -- (talk) 21:13, 7 January 2008 (UTC)
I'm no Java expert, but I've just taken a look at the Java spec, and it specifies char as an unsigned integral type (see section 4.2). I think, therefore, that we should revert the changes related to Java. Oli Filth(talk) 21:22, 7 January 2008 (UTC)
I have a little test for the Java char type.
public class CharTest {
    public static void main(String[] args) {
        char a = 'H'; // 'H' is a character literal
        char b = 'e';
        char c = 'l';
        char d = 'o';
        char e = ',';
        char f = ' ';
        char number = 42; // an integer stored in a char variable
        System.out.println(a + b + c + c + d + e + f + number); // what would be printed here?
    }
}
This program assigns values to char variables. Because addition is defined for all integral types, the + operator could possibly be either integer addition or string concatenation.
I have done some Java programming before and, as far as I am concerned, a char is a one-character string. I have never tried to concatenate two chars or a char and an integer, only a char and a string.
What would this program print? An integer or a string starting with "Hello, "? -- (talk) 16:12, 8 January 2008 (UTC)
I don't know! However, there's an easy way to find out. I'm not sure what bearing this would have on the article, though. Oli Filth(talk) 02:25, 9 January 2008 (UTC)
This would demonstrate how the char type works - as text or as an integer. The Java spec also states that the addition operator is defined for all integral types (including char) and, because println is overloaded for strings and integers, the output depends on whether + is used for integer addition (returning an int) or concatenation (returning a string). This could show that 'H' + 'i' may not be the same as "Hi". -- (talk) 14:53, 9 January 2008 (UTC)
Concatenation only happens when there is a string. Remember the rule about not being able to overload functions differing only in return type? Character literals don't invoke it. Also, the + operator is invoked from left to right, so (1 + 1 + "a" + 1 + 1) is "2a11". —Preceding unsigned comment added by (talk) 00:27, 22 February 2010 (UTC)
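The left-to-right rule described above is easy to check directly. This is a minimal sketch (the class name CharConcat is mine, not from the discussion) showing that + only concatenates once a String operand appears, and that two chars add numerically after promotion to int:

```java
public class CharConcat {
    public static void main(String[] args) {
        // + evaluates left to right: the leading ints add arithmetically,
        // then everything after the String "a" is concatenation.
        System.out.println(1 + 1 + "a" + 1 + 1); // prints "2a11"
        // Two chars promote to int and add numerically: 72 + 105.
        System.out.println('H' + 'i');           // prints 177
        // Forcing string context first gives concatenation instead.
        System.out.println("" + 'H' + 'i');      // prints "Hi"
    }
}
```

So 'H' + 'i' is indeed not the same as "Hi": without a String operand in play, the chars never become text at all.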
From a high-level perspective, a character is a different concept from an integer. Characters enumerate letters. That the {} languages implicitly convert char to int adds to the confusion, but that does not make them the same. Most non-{} languages (Ada and Pascal spring to my mind) treat them separately, offering functions to query a character's ordinal number or to create a character from its ordinal number. --Krischik T 11:57, 24 July 2008 (UTC)
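In Java the two views can be seen side by side: char is an unsigned 16-bit integral type, and explicit casts play the role of the ordinal-number functions mentioned above (Ada's 'Pos and 'Val). A small sketch (class name CharOrdinal is mine):

```java
public class CharOrdinal {
    public static void main(String[] args) {
        // char is an unsigned 16-bit integral type:
        System.out.println((int) Character.MIN_VALUE); // 0
        System.out.println((int) Character.MAX_VALUE); // 65535
        // Casting -1 to char wraps to the maximum value; it cannot be negative.
        char wrapped = (char) -1;
        System.out.println((int) wrapped);             // 65535
        // Explicit casts act like Ada's 'Pos / 'Val attributes:
        int ordinal = (int) 'A';      // ordinal number of a character
        char fromOrdinal = (char) 66; // character from an ordinal number
        System.out.println(ordinal);      // 65
        System.out.println(fromOrdinal);  // B
    }
}
```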

Why is UCHAR/unsigned char not shown in the table under C/C++ on the row for byte->unsigned? Instead it shows the same information as for signed char. — Preceding unsigned comment added by (talk) 10:53, 22 November 2012 (UTC)

Decimal digits[edit]

The decimal digits column is somewhat misleading - taking a byte as an example:

If you use a byte for arithmetic, then a byte does not go all the way to +/- 999 - with a range of -128 .. 127, I would say it's about 2.1 digits, but certainly not 3.

If you want to print a byte, then -128 takes 4 characters - so again it's not 3.

So the numbers shown in "Decimal digits" are not the answer to any real-life question. --Krischik T 11:57, 24 July 2008 (UTC)

Well, "-" isn't a digit. Whilst I'm inclined to agree with your logic, I think it's still a useful indicator of the order of magnitude of the ranges in question, in "units" that are slightly more tangible. Oli Filth(talk|contribs) 19:14, 24 July 2008 (UTC)
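The "order of magnitude" reading can be made precise: an n-bit type holds about n * log10(2) ≈ 0.301n decimal digits of magnitude. A minimal sketch (class name DecimalDigits is mine) computing that figure alongside the actual byte range:

```java
public class DecimalDigits {
    public static void main(String[] args) {
        // Approximate decimal digits representable in n bits: n * log10(2).
        for (int bits : new int[] {8, 16, 32, 64}) {
            double digits = bits * Math.log10(2);
            System.out.printf("%2d bits ~ %5.2f decimal digits%n", bits, digits);
        }
        // A signed byte spans -128 .. 127, i.e. about 2.41 digits of magnitude,
        // even though printing -128 takes four characters.
        System.out.println(Byte.MIN_VALUE + " .. " + Byte.MAX_VALUE);
    }
}
```

For 8 bits this gives about 2.41, which matches the "about 2.1 digits" intuition above far better than a flat "3".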

Integer vs "Integral"[edit]

Can we include any reference as to why Microsoft refers to integers and "integrals" in C#? In the .Net CTS they are integers, along with every other language I am personally familiar with. At the top of this article it states, "These are also known as integral data types", but no reference for this fact is provided. As best I can tell, this seems to be a Microsoft-only designation. The wikipedia article on Integrals talks about the calculus definition. The math Integer article contains the statement, "More formally, the integers are the only integral domain whose positive . . .", but this doesn't explain why an integer would be called an "integral." There is some clarification here, but Wolfram states "Numbers that are integers are sometimes described as "integral" (instead of integer-valued), but this practice may lead to unnecessary confusions with the integrals of integral calculus" on this page. Again, this seems to be only a C# practice from a computer language perspective. Any ideas?--P Todd (talk) 02:12, 30 October 2009 (UTC)

"Integer" is a noun; "Integral" in this context is an adjective modifying "types". Also, there is a reference to the C standard now. I know at least the Java language specification uses the term "integral types" also. —Preceding unsigned comment added by (talk) 00:21, 22 February 2010 (UTC)


Aren't flops 2 bits? Can those be added to the chart? Tidus3684 (talk) 21:31, 31 March 2011 (UTC)

I've always understood that 'flops' means 'floating point operations per second' with floating point processor speeds being commonly measured in MegaFlops. Murray Langton (talk) 21:55, 31 March 2011 (UTC)
Me too. (talk) 10:36, 7 April 2011 (UTC)

Suggest merge[edit]

I suggest we merge Short integer and Long integer here; I don't see the reason why the content here must be duplicated in those articles, and surely it's more of an overview if the various integer sizes are described here. --Wtshymanski (talk) 13:14, 1 September 2011 (UTC)

Support with reservations. The content is not really duplicated in both of these articles. Look, for example, at the tables there: they list sizes of the integer type in different environments, whereas this page generalizes for all cases. I think we can merge by making separate subsections for short and long integer types in the section Common integral data types, then copying all the information except the redundant parts of the introduction, and finally moving the current information in Common integral data types to these subsections if possible. 1exec1 (talk) 20:04, 4 September 2011 (UTC)
Support. 1exec1's proposal sounds good. - Frankie1969 (talk) 18:43, 11 September 2011 (UTC)
Support merge makes comparison easier. Max Longint (talk) 22:15, 5 October 2011 (UTC)
Comment While we are at this, I would suggest dismantling/rewriting Computer numbering formats. Max Longint (talk) 22:30, 5 October 2011 (UTC)


Hello, I'm a non-native English speaker. Could the article include how "integer" should be pronounced? Like a "g" or a "j"? Thanks. — Preceding unsigned comment added by (talk) 13:42, 22 November 2011 (UTC)