Talk:Intel 80386


Models and variants, content error?

It says the i386DX was produced with about a 104 mm^2 die size in the CHMOS III process and later with about a 39 mm^2 die size in the CHMOS IV process. For the i386SX it says it was produced in the CHMOS IV process with about a 104 mm^2 die size. Are you sure the die size and/or process for the SX variant is correct? It doesn't seem to add up to me, at least.

178.24.193.37 (talk) 16:51, 11 May 2010 (UTC)

Most Important Design Choice

I don't understand how keeping the flat memory model was such a significant design choice. Upon further research, it seems that all preceding Intel chips also featured a flat memory model. —Preceding unsigned comment added by 216.27.163.78 (talk) 02:46, 21 February 2009 (UTC)

Earlier chips supported only 64 KB of flat/linear/contiguous addressing; the 386 was the first chip to extend this to 4 GB, i.e. 65536 times as much, a very significant difference. 83.255.39.24 (talk) 20:40, 23 February 2009 (UTC)
In order to address all of the memory available in the computer using an 8086 or 80286, you had to use a segment register, whose value would be multiplied by 16 and added to a 16-bit offset. So in order to access any given memory location within, say, the first megabyte of RAM, you needed to do some maths to present the CPU with a segment:offset pair. The 386 provided a method of accessing all the available RAM by extending the address pointers beyond 16 bits. - Richard Cavell (talk) 06:51, 1 July 2009 (UTC)
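To make the arithmetic concrete, here is a minimal, purely illustrative real-mode fragment (NASM-style syntax; the segment value, offset, and data are arbitrary examples, not anything taken from the article):

        ; Real mode (8086/286-style): physical address = segment * 16 + offset
        mov     ax, 0B800h          ; example segment value (text-mode video memory)
        mov     es, ax              ; segment values live in segment registers
        mov     di, 0002h           ; 16-bit offset within that segment
        mov     byte [es:di], 'A'   ; touches physical address 0B8000h + 2 = 0B8002h

        ; 386 protected mode with a flat 4 GB segment: one 32-bit offset is enough
        ; mov     edi, 000B8002h
        ; mov     byte [edi], 'A'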

Release

I've seen sources that say the chip was released in 1985. [[1]]. Was it really 1986 or was it 1985? Timbatron 21:42, 25 December 2005 (UTC)

The chip taped-out in October 1985 (I was there). It was not "released" until (IIRC) late 1986, as at the time there was a long lead-time between tape-out and public availability. -- Gnetwerker 08:08, 3 February 2006 (UTC)
Sorry to respond to such an old comment, but are there any citations to confirm this? Everything I've read on the Intel website points to 1985, not 1986. --Android Mouse 00:03, 15 July 2007 (UTC)

(Verifiable/Validation Source) I was the Responsible Individual (RI) Research and Development Technician working on the 80386 at Intel Corporation's R&D Facility, adjacent to the Production Facility in Livermore, CA. The run yielded approximately 8 die per wafer at E-test for the first time in late 1985. With new proprietary information, the product was immediately launched into full mass production, as the world was waiting for this chip to be born--and so it was, in the third quarter of 1986. kathywinchell@gmail.com — Preceding unsigned comment added by Winchell4 (talkcontribs) 18:08, 20 June 2014 (UTC)

Sources

"Intel decided against producing the chip before then, as the cost of production would have been uneconomic." What is the basis for this assertion? The chip wasn't designed until Oct '85, this implies otherwise. -- Gnetwerker 08:08, 3 February 2006 (UTC)

Customers

I think it is significant that the first major customer was Compaq, then not a large company, rather than IBM. While I "know" this (from being at Intel), I don't have a source. Anyone? (P.s. -- The Compaq page says all of these things without attribution.) -- Gnetwerker 08:11, 3 February 2006 (UTC)

SX-DX

Does anyone know what SX and DX stand for? I heard once "Single eXecution" and "Double eXecution". But I've never seen that confirmed. warpozio 14:04, 27 March 2006 (UTC)

SX and DX mean very different things depending on the processor generation. The 80386SX is internally identical to the DX (fully 32-bit), but it has a 16-bit data bus, which slowed its memory access compared to the 80386DX with its 32-bit data bus. In this respect it's similar to the Motorola 68000, which is also a 32-bit processor internally (32-bit addressing, registers, arithmetic) but likewise has a 16-bit data bus. This was done to minimise motherboard costs - the 68000 came out much earlier than the 80386SX. Also, the first 80386 of course had a 32-bit data bus and was thus effectively the "DX", yet it wasn't called that, because the 80386SX and the SX/DX distinction were introduced later.

In the 486 generation, SX versions don't have a built-in FPU. The 80386 of course never had an integrated FPU, so a 486SX at the same frequency is something like a faster 80386DX (faster due to architectural advances - a pipelined ALU and so on). Yet AMD managed to produce 80386DX parts running at quite high frequencies (40 MHz), so they were often faster than a 486SX at lower clocks like 25 MHz.
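As an aside, whether a coprocessor is actually fitted can be probed in software. The following is only a rough sketch of the classic x87 detection idea (the label names and the 5A5Ah test pattern are illustrative, not taken from any particular BIOS or program):

        mov     word [fpu_status], 5A5Ah    ; pre-load a non-zero pattern in memory
        fninit                              ; no-wait FPU init (has no effect when no FPU is fitted)
        fnstsw  word [fpu_status]           ; no-wait store of the FPU status word
        cmp     byte [fpu_status], 0        ; a working FPU clears it to 00h
        jne     no_fpu                      ; pattern unchanged => no coprocessor present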

SX and DX are mostly marketing designations, introduced to separate the lower and upper segments of the market. In that sense they are something like the "Celeron" and "Pentium" trademarks used today, though different processor generations use different ways to "cripple" performance in the low-cost models.

"Crippling"? That's only really something that's been done on rare occasions to meet market demand for cheaper parts that hadn't been produced in sufficient numbers (like the Durons that were Athlons with most of the cache disabled by removing a jumper wire and could be returned to full spec by bridging the contacts). Mostly it's dies which didn't pass QC for whatever reason (fault on the wafer, too close to the edge...) but it's only the part which doesn't feature in the low cost model anyway which has been damaged - e.g. part of the cache, or the hyperthreading controller. Cut a designed-in breakable link (same as used to designated them as fit for a certain speed) to disable that area and make the chip report as the cheaper member of the processor family, and it's still usable rather than being consigned to the bin. Even though the performance is so lacking you may wish that they HAD recycled it...! 193.63.174.11 (talk) 11:51, 27 April 2011 (UTC)
You didn't read the question, Mr Unsigned. You answered a question that was in your head. The gentleman wanted to know what the letters DX and SX stood for. You let him down, and that upsets me. Lupine Proletariat 14:55, 18 May 2006 (UTC)

SX - Single Word External (16-bit data bus) DX - Double Word External (32-bit data bus). —Preceding unsigned comment added by 89.243.46.113 (talk) 20:34, 6 November 2007 (UTC)

...and also a contraction of SimpleX and DupleX, which could be in relation to having a bus that was the same size as, or double that of, the previous-gen CPUs? 193.63.174.10 (talk) 15:10, 27 October 2010 (UTC)

80287

Assembler manuals claim that the original 80386 could work with the 80287 coprocessor - to protect one's existing investment, or to allow an intermediate price-and-performance level between an FPU-less 80386 on its own and the expensive 80386+80387 pair.

A few board designs had both a DIP socket for an 80287 and a PGA socket for an 80387. Only one FPU could be installed. I've seen only one example of such a board in 29 years. It was "full AT" size and only fit the large, horizontal desktop cases. Bizzybody (talk) 11:22, 21 February 2012 (UTC)

Multiply bug

OK, why delete that section? It's significant - the first Intel '32 bit' CPU didn't, you know, actually work, and Intel ended up stamping thousands of chips '16 bit only'. Several important programs (e.g. Windows) checked for this. Lovingboth 22:30, 29 October 2006 (UTC)
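For reference, software checks of this kind boil down to 32-bit multiplies with known answers; real-world tests reportedly looped over many operand pairs, since the flaw only showed up for certain bit patterns. A minimal sketch (the operands and label are arbitrary examples, not the values any particular program actually used):

        mov     eax, 00010001h      ; arbitrary 32-bit test operand
        mov     ecx, 00010001h      ; arbitrary 32-bit test operand
        mul     ecx                 ; EDX:EAX = EAX * ECX = 0000000100020001h
        cmp     eax, 00020001h      ; low dword of the expected product
        jne     bad_multiplier
        cmp     edx, 00000001h      ; high dword of the expected product
        jne     bad_multiplier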

Third generation x86 processor?

Isn't the 80386 the fourth-generation x86 processor (8086, 80186, 80286 and then 80386)? Or is there some reason why one of these should not be regarded as a generation? Even though this article is about computing, I guess the 8086 can't be counted as the zeroth generation... 213.216.199.30 21:20, 13 February 2007 (UTC)

To my understanding, the 8086 and the 80186 are both considered chips in the first generation, similar to how the Pentium II and III are both considered sixth generation chips. Suigi 05:05, 14 February 2007 (UTC)
It would be more natural to regard only 8086 (1978) and 8088 (1979) as first generation chips, but both 80286 (1982) and 80186/188 (1982) as second generation designs. 80186 and 80286 have a great deal in common technically. /HenkeB 23:47, 18 March 2007 (UTC)
That makes more sense. It still means that the 80386 is the 3rd generation chip, thus resolving this issue. Suigi 01:12, 19 March 2007 (UTC)
Yes. /HenkeB 15:48, 19 March 2007 (UTC)
A further question is: does the 486 count as 3rd or 4th gen, as it's largely a turbocharged 386? The Pentium becomes the 4th gen if the 486 is 3rd (with a genuine architecture shift)... and so, MMX and/or P-Pro becomes 5th gen? They exhibit enough difference from both the original Pentiums and the PII/PIII line, after all... 193.63.174.11 (talk) 11:45, 27 April 2011 (UTC)

Socket

The summary box states that the 386 is a 68 pin CPU. I am fairly sure (looking at a summary from Intel docs) that the 386 DX was offered in a 132 pin PGA or PQFP format. The coprocessor, the 387, was 68 pin. Also, the 386 SX may have been offered in a 68 pin format since it had only a 16 bit external data bus.

Blackberry

Should it be noted that this chip was still used in RIM Blackberries until recently? —Preceding unsigned comment added by 86.164.161.165 (talk) 14:15, 25 December 2007 (UTC)

Disambiguation needed

A disambiguation page is needed for "i386". A search query for this term directs to this page (the 80386 page). The term "i386" also refers to a directory used in Windows operating systems that contains files used to create an installation disk. The directory is not related to the processor used on the host machine. WWriter (talk) 22:06, 4 February 2008 (UTC)

Firstly, the directory on a Windows install disk does refer to the processor architecture. Secondly, is the directory on a Windows install disk worthy of its own article? - Richard Cavell (talk) 06:55, 1 July 2009 (UTC)

Typo found

Intel i386 SL processon <--- Should end in R, but being a newbie I can't figure out how to get at it... It's the text for the image of the SL processor I believe.

fixed. -75.69.164.125 (talk) 21:43, 21 March 2008 (UTC)

i386EX and Hubble Space Telescope

This page says that the i386EX was used in the Hubble Space Telescope. The Hubble Space Telescope was launched in 1990 (but didn't work until 1993 when the corrective mirrors were installed), but the i386EX did not come out until 1994. How can this be? Was the i386EX added on a servicing mission? —Preceding unsigned comment added by 24.167.184.128 (talk) 05:10, 22 September 2008 (UTC)

40Mhz parts?

I'm almost certain you could get 40Mhz 386 desktops - I even have gaming magazines that report it as the minimum spec for some older or less demanding titles (e.g. "40mhz 386, 25mhz 486SX or 386+FPU, or any 486DX or Pentium"). However this speed is only reported as available for embedded parts in the article. Was it an official (and possibly premium at first but eventually cheap and long-running) Intel (or AMD) 5v CPU, or was some manufacturer leveraging embedded processors in their budget-spec computers - in sufficient quantity for them to be worth mentioning instead of e.g. "well, it *will* run on a 33mhz 386, but it'll be choppy" or whatever? 193.63.174.11 (talk) 11:56, 27 April 2011 (UTC)


I had a machine from data general that came with an intel 386 @ 40MHz. I wish I still had it to provide CPU suffixes, but I know it came stock that way. — Preceding unsigned comment added by 24.63.79.245 (talk) 13:41, 3 September 2011 (UTC)

Yes, there was an 80386DX 40 MHz CPU available for desktops. I had one, but never could locate a matching-speed 80387DX. At least one company made an 80486DX2 80 MHz CPU with the 80386 PGA pinout, but they were bloody expensive. The 40 MHz 386 was most likely a "Take that!" response to Motorola's 40 MHz 68030, which garnered much publicity for its use in Apple's "wicked fast" Macintosh IIfx. Bizzybody (talk) 11:14, 21 February 2012 (UTC)

Look out for possible copyright violations in this article

This article has been found to be edited by students of the Wikipedia:India Education Program project as part of their (still ongoing) course-work. Unfortunately, many of the edits in this program so far have been identified as plain copy-jobs from books and online resources and therefore had to be reverted. See the India Education Program talk page for details. In order to maintain the WP standards and policies, let's all have a careful eye on this and other related articles to ensure that no copyrighted material remains in here. --Matthiaspaul (talk) 15:12, 30 October 2011 (UTC)

I have just reverted these edits by User Sachin.god, because they were a 1:1 copy paste from existing material. -84user (talk) 01:52, 1 November 2011 (UTC)

Programs written for older chips.

Some 64-bit CPUs have removed support for running 16-bit software. One example is the AMD LE1620 (one of which is in the box I'm using right now). Try to run any old DOS program on Windows XP using this CPU and you get an NTVDM error. The CPU doesn't support the x86 virtual machine mode, and might even be incapable of booting plain old DOS. Bizzybody (talk) 11:19, 21 February 2012 (UTC)

B1 stepping bugs

There should be some discussion about the B1 stepping bugs that made opcodes fail to work if followed by specific other opcodes. — Preceding unsigned comment added by 84.243.199.239 (talk) 16:29, 10 December 2014 (UTC)

code

Could someone provide the assembled code for the Example Code listed? Having the actual binary (hex) code allows for code size comparisons with other processors. — Loadmaster (talk) 17:30, 29 April 2016 (UTC)

Start DOSBox, or start Bochs and install FreeDOS inside it, then type the code into "debug.com" after entering "a" + [enter]. When done, type just <enter> to quit assembling, and then type "d 100" to dump the hex version. Taken freely from memory. Bytesock (talk) 19:34, 29 April 2016 (UTC)
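Roughly, such a session looks like the following (the segment value 1234 is just a placeholder, the two instructions merely stand in for the article's example code, and the dump output is trimmed):

        C:\>debug
        -a 100
        1234:0100 mov ah,4C
        1234:0102 int 21
        1234:0104
        -d 100 103
        1234:0100  B4 4C CD 21
        -q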
I used defuse.ca, an online x86/x64 assembler/disassembler site. Hopefully the code is correct. — Loadmaster (talk) 17:28, 4 May 2016 (UTC)
Perhaps this is what you are looking for? "55 89 E5 8B 75 0C 8B 7D 08 8A 06 46 3C 41 0F 8C FC FF FF FF 3C 5A 0F 8F FC FF FF FF 04 20 88 07 47 3C 00 0F 85 FC FF FF FF 5D C3". It's right there in the code example. Your online assembler did, however, return "Sorry, your input is too big or contains unsafe directives! The period (.) character must not appear anywhere in your source code.". Oops, I see it's you who added those bytes. The online assembler did, however, return "Error: junk `al' after expression Error: no such instruction: `copy mov [edi],al' Error: no such instruction: `done pop ebp'" even with the old code and the comments removed. It would be interesting to know whether x86_64 incurs serious bloat. What platforms have you tried to compare with? Bytesock (talk) 19:02, 4 May 2016 (UTC)
Yes, I used the online assembler to produce the opcodes, which I then added to the example code in the article. My concern is that the relative conditional jump opcodes (for JL, JG, and JNE) seem a bit lengthy (6 bytes); I would expect there to be shorter equivalent versions (only 2 or 3 bytes long). — Loadmaster (talk) 21:07, 4 May 2016 (UTC)
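For what it's worth, the 386 supports both encodings for conditional jumps: a 2-byte short form with an 8-bit displacement and, in 32-bit code, a 6-byte near form with a 32-bit displacement; most assemblers pick the short form automatically once the target is close enough to resolve. A small illustrative comparison (NASM-style syntax; the label is hypothetical):

        copy:                           ; hypothetical nearby label
        jl      copy                    ; short form:  7C rel8        (2 bytes)
        jl      near copy               ; near form:   0F 8C rel32    (6 bytes)
        jne     copy                    ; short form:  75 rel8        (2 bytes)
        jne     near copy               ; near form:   0F 85 rel32    (6 bytes)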