From Wikipedia, the free encyclopedia


Hello friends.

Can anybody explain the basic difference between 16-bit and 32-bit words? I just want to know the actual advantage of a 32-bit word over a 16-bit one.



It's basically like this. If the word size of a machine is 16 bits, then one word can store at most 2^16 = 65536 different combinations. These combinations are often taken to be the numbers 0 to 65535, or -32768 to 32767, or memory locations.
If you need larger numbers, or if you need to keep track of more data that needs more memory, you'll need to use two words.
Using two words to store numbers increases the time it takes to do calculations with them. This is because instead of a single operation to, for instance, add two numbers, several operations are needed.
If you use a 32-bit machine, however, one word can hold 2^32 = 4294967296 different combinations. That means you can use a single word to contain larger numbers or references to a wider range of memory locations.
Therefore a 32-bit machine will perform better at the same clock speed, with the added bonus of making programmers' lives easier. (That last point also has a lot to do with the way some processor manufacturers (Intel springs to mind) and OS developers (Microsoft, for instance) chose to overcome the restrictions of 16-bit words.)
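The multi-word arithmetic point can be sketched in code. This is not from the original discussion, just a minimal Python illustration (the names `MASK16`, `add16`, and `add32_using_16bit_words` are made up here) of why a number that doesn't fit in one 16-bit word needs two words and extra add operations:

```python
# Hypothetical sketch: emulate 16-bit words in Python to show why numbers
# larger than 65535 need two words, and why that costs extra operations.

MASK16 = 0xFFFF  # one 16-bit word holds 2**16 = 65536 combinations


def add16(a, b, carry_in=0):
    """Add two 16-bit words, returning (result, carry_out) as a CPU would."""
    total = (a & MASK16) + (b & MASK16) + carry_in
    return total & MASK16, total >> 16


def add32_using_16bit_words(x, y):
    """Add two 32-bit numbers on a 16-bit machine: two adds instead of one."""
    lo, carry = add16(x & MASK16, y & MASK16)       # low words first
    hi, _ = add16(x >> 16, y >> 16, carry)          # then high words + carry
    return (hi << 16) | lo


# 100000 does not fit in a single 16-bit word, so two words are required.
assert add32_using_16bit_words(70000, 30000) == 100000
```

On a real 16-bit CPU the same idea shows up as an add instruction followed by an add-with-carry instruction, which is exactly the extra work described above.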
I hope this helps.
Yours sincerely,
Shinobu 07:12, 31 August 2005 (UTC)

16-bit DOS and Windows 1.0/2.0 applications[edit]

Those applications were 16-bit, not 8-bit. The 8088 was a processor with a 16-bit instruction set and an 8-bit bus; it's the instruction set, not the bus width, that matters to the "width" of applications. Guy Harris (talk) 18:56, 20 January 2008 (UTC)

Memory Addressing[edit]

So how many kilobytes can 16 bit processor address?-- (talk) 02:11, 1 May 2008 (UTC)

From looking at this and other pages, it appears it can address 2^{16}=65536 bits, which is 8192 bytes (octets) or 8 kibibytes. If by "kilobyte" you mean "1,000's of bytes," then it's 8.192 KB, but if you meant "1,024's of bytes," which is what Windows and I think Macintosh now use, then it's 8 KiB. Eebster the Great (talk) 05:04, 14 September 2008 (UTC)
It's actually 65,536 bytes, which is 64 kilobytes. Using certain techniques, a CPU can address more memory than the width of its ALU would suggest. An example of this is the Intel 8086, a 16-bit microprocessor with a 20-bit physical address space capable of addressing 1 megabyte. Rilak (talk) 07:58, 14 September 2008 (UTC)
Eebster made the assumption that every single bit has its own address, but addressing is done per byte. Therefore Eebster was a factor of eight off. Shinobu (talk) 03:03, 4 October 2008 (UTC)
Good call; sorry. I guess I'll try to restrict myself to answering questions to which I actually know the answer. Eebster the Great (talk) 05:12, 4 October 2008 (UTC)
But if we stuck to what we already know, how would we ever learn anything? I agree that, with a flat memory model, 16-bit addresses can specify at most one of 2^{16}=65536 addressable locations. With byte-addressable memory, that gives 64 KiB. But some 16-bit digital signal processors use 16-bit word-addressable memory, and so they can directly address the equivalent of 128 KiB. However, some systems don't have a flat memory model -- as Rilak implied, some systems have hardware that uses bank switching or memory segmentation to allow the CPU to indirectly address significantly more memory. —Preceding unsigned comment added by (talk) 20:20, 24 October 2008 (UTC)
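The segmentation point Rilak raised can be made concrete. The following is a hypothetical Python sketch (not part of the discussion; the function name `physical_address` is invented here) of 8086 real-mode translation, where two 16-bit values, a segment and an offset, combine into a 20-bit physical address:

```python
# Hypothetical sketch of 8086 real-mode address translation: a 16-bit CPU
# forms a 20-bit physical address as segment * 16 + offset, reaching
# 2**20 bytes = 1 MiB even though any single address register is 16 bits.

def physical_address(segment, offset):
    """Combine a 16-bit segment and 16-bit offset into a 20-bit address."""
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return ((segment << 4) + offset) & 0xFFFFF  # wrap at the 20-bit limit


# A flat 16-bit address tops out at 64 KiB...
assert 2**16 == 64 * 1024
# ...but segment:offset reaches the top of the 1 MiB space.
assert physical_address(0xF000, 0xFFFF) == 0xFFFFF
```

Note that many segment:offset pairs alias the same physical byte (e.g. 0001:0000 and 0000:0010), which is one reason segmented addressing made programmers' lives harder than a flat 32-bit model.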

Good point! I think we should clarify the article, and perhaps all n-bit articles. Rilak (talk) 08:48, 25 October 2008 (UTC)

Shouldn't the Z80 and the 6809 go here, since you still consider the 68000 to be 32-bit?[edit]

You always consider the 68000 to be 32-bit for no reason other than its registers being 32 bits wide, regardless of it having a 16-bit ALU and data bus, yet you never call the Z80 and the 6809 16-bit, despite the same reasoning applying to them? —Preceding unsigned comment added by (talk) 16:06, 16 May 2009 (UTC)

Add list of 16-bit CPUs/MPUs?[edit]

I believe that it would be beneficial to have a consolidated list of 16-bit CPUs and MPUs on this page, much like the 8-bit page's list. This would make it very easy for readers to browse through all the 16-bit architectures and processors without having to fish around. Therefore, I am adding an incomplete list to this article with the processors and architectures I am aware of. If there is a more preferable way to do this, or if you know of something to add to the list, please feel free to help. Daivox (talk) 03:10, 3 May 2011 (UTC)

16-bit application[edit]

Is [1] an improvement? I do not see an improvement. BTW, I hastily used rollback and destroyed some other changes. I object only to “On the x86 architecture, a 16-bit application normally means any software written for…” and am indifferent to changes in other parts of the article. Incnis Mrsi (talk) 21:35, 26 July 2013 (UTC)

I'm still looking for documentation, but there might have been 16-bit applications written for UNIX System V/286 that ran, along with 32-bit applications, on UNIX System V/386, and the same might have applied to pre-386 and 386 Xenix, so there might have been 16-bit applications not written for "MS-DOS, OS/2 1.x or early versions of Microsoft Windows". The term largely applies to DOS, OS/2, and Windows applications, but it might not exclusively apply to them, even if there were relatively few UNIX applications for x86 at that time. Guy Harris (talk) 22:28, 26 July 2013 (UTC)
In addition, in theory, there might have been versions of some of those OSes that ran on machines other than PC compatibles and to which the 16-bit vs. 32-bit distinction applied.
(BTW, I put back all of HLachman's changes other than the one to which you object.) Guy Harris (talk) 22:36, 26 July 2013 (UTC)
Thanks to both of you for your attention to this issue. I'm OK with either wording (they're both mine!). Was my 2nd wording an improvement? I borrowed it from the 32-bit application article because it sounded more concise than my 1st wording, while being synonymous (although I'm not certain of that). Either way, my main intent is to make sure that if the paragraph is PC-centric, it declares itself as such, because prior to my 1st wording (04:46, 11 June 2012‎) it merely said, "A 16 bit application is any software written for... (various PC platforms)...." That seemed to imply that PDP-11 programs, for example, cannot be called "16-bit applications" (even though they might run on a VAX in compatibility mode). So I'm fine with leaving the wording as-is. As for the remaining issues in the paragraph (Xenix, etc.), I'm not too clear what to do about those. Thanks also to Guy for restoring my other edits. HLachman (talk) 23:59, 27 July 2013 (UTC)