Talk:Intel 8086

From Wikipedia, the free encyclopedia
WikiProject Computing (Rated C-class, High-importance)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as C-Class on the project's quality scale.
This article has been rated as High-importance on the project's importance scale.


The article seems to be lacking sources. A reference to the original Intel 8086 documentation would be helpful; maybe somebody has a link? anton

Incorrect information

I corrected the '1989' introduction year to 1978. It seems that somebody had entered incorrect information.

by sending out the appropriate control signals, opening the proper direction of the data buffers, and sending out the address of the memory or I/O device where the desired data resides

Also by AMD

AMD produced this processor type too. Mine is numbered P8086-1, has an AMD logo, and is also marked "(C) Intel 1978". I can take a picture of the chip and add it to the article.

AMD did not create this processor, but they did second-source it, i.e. manufacture it under licence from Intel, because Intel could not manufacture enough itself. Intel are the sole designers of the original 8086 processor. --ToaneeM (talk) 09:15, 15 November 2016 (UTC)

Please clarify

The 8086 is a 16-bit microprocessor chip designed by Intel in 1978, which gave rise to the x86 architecture. Shortly after, the Intel 8088 was introduced with an external 8-bit bus, allowing the use of cheap chipsets. It was based on the design of the 8080 and 8085 (it was assembly language source-compatible with the 8080) with a similar register set

Does the final sentence refer to the 8088 or the 8086? At first glance, it continues the info on the 8088, but upon consideration, it seems more likely to refer to the 8086. Is this correct? It's not too clear.

Fourohfour 15:15, 9 October 2005 (UTC)

It refers to both; the 8086 and 8088 are almost the same processor, except that the 8086 has a 16-bit external data bus and the 8088 has an 8-bit external data bus, plus some very small differences. Both were based on the 8080's design. -- 17:20, 18 October 2005 (UTC)

In "Busses and Operation", it states "Can access 2^20 memory locations i.e. 1 MiB of memory." While the facts are correct, the "i.e." suggests that they are directly related. The 16-bit data bus would imply an addressable memory of 2 MiB (16 × 2^20 bits), but the architecture was designed around 8-bit memory and thus treats the 16-bit bus as two separate busses during memory transfers. 11:08, 20 March 2007 (UTC)

The 8086 was not "designed around 8 bit memory" (however the 8088 was), but each byte has its own address (as in 8-bit processors). Therefore 2^20 bytes and/or 16-bit words can be directly addressed by both chips; the latter are partially overlapping, however, as they can be "unaligned", i.e. stored at any byte address. /HenkeB 20:11, 21 May 2007 (UTC)
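HenkeB's point, that both chips address individual bytes and that 16-bit words may sit at any byte address, can be sketched numerically. This is a minimal, illustrative Python model only; the names `memory` and `read_word` are mine, not anything from the article or an Intel document:

```python
# 20 address lines give 2**20 individually addressable bytes (1 MiB),
# on the 8086 and the 8088 alike.
assert 2**20 == 1_048_576

# Model memory as a flat byte array; a 16-bit word may start at any
# byte address ("unaligned"), so consecutive words partially overlap.
memory = bytearray(2**20)
memory[0x12345] = 0x34   # low byte  (the 8086 is little-endian)
memory[0x12346] = 0x12   # high byte

def read_word(mem, addr):
    """16-bit little-endian read as software sees it; on a real 8086 an
    odd (unaligned) address costs an extra bus cycle, but still works."""
    return mem[addr] | (mem[addr + 1] << 8)

assert read_word(memory, 0x12345) == 0x1234   # odd address, still valid
```

The extra-bus-cycle detail for unaligned words is what makes the two byte lanes of the 16-bit bus behave like "two separate busses" during such a transfer, as discussed above.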

Even more confusing, the 8088 article says nearly the exact same thing: 'The original IBM PC was based on the 8088' Family Guy Guy (talk) 01:09, 30 May 2017 (UTC)

But what's the confusing part of this? Andy Dingley (talk) 01:26, 30 May 2017 (UTC)

First PC?

Is this the first "PC" microprocessor? I think so, in which case it should be noted, although I expected to find a mention of the first, whether it's this or another one. The preceding unsigned comment was added by (talk • contribs) .

The first IBM PC used an Intel 8088, which is (as far as I know) effectively a slightly cut-down 8086 with an 8-bit external data bus (although internally still 16-bit). So that fact should probably be noted in the article. Fourohfour 18:27, 10 March 2006 (UTC)

Also, for comparison, I'm interested in the sizes and speeds of PC hard drives at the time, i.e. when it would have been used by a user -- I had thought around 10 MB, but in fact if it can only address 1 MB (article text) this seems unlikely. Did it even have a hard drive? A floppy? Anything? The preceding unsigned comment was added by (talk • contribs) .

You're confusing things here. When it talks about "memory" it means RAM. Hard drives are different, they are I/O devices and the processor communicates with them via programmed input/output (PIO) or DMA. The 20-bit / 1Mbyte limitation is only for RAM. Mystic Pixel 07:36, 8 June 2006 (UTC)


I don't think the two articles should be merged. After all, one talks about the 8086 itself while the other covers the general architecture my_generation 20:06 UTC, 30 September 2006

I think merging the articles is a good idea. The Intel 8086 set the standard for microprocessor architecture. Look at the Intel 8086 user manual (if you can find it on eBay or Amazon) and you'll see the information that's included in both of these articles. It would be easier to have all that information in just one. In answer to the above disagreement, you can't describe the 8086 without describing its architecture.

There's nothing wrong with duplicating information that's already in the manual, but Microprocessor 8086 is poorly written and misnamed. An encyclopedia article isn't going to explain how to program a µP and theoretical details are out of place here. Is there anything in particular you think should be moved to Intel 8086? Or perhaps remove the duplicate information and give Microprocessor 8086 a name which is clearer (or at least grammatical)? I'd like to remove that page and just have it redirect. Potatoswatter 06:46, 21 November 2006 (UTC)
I second what Potatoswatter said. (I thought the 8086 manual was available, but it doesn't seem to be. That's really weird, the 80186 manual is all over the place. Hmm.) Mystic Pixel 05:10, 22 November 2006 (UTC)

Not a bug?

This comment was in the "bugs" section: Processors remaining with original 1979 markings are quite rare; some people consider them collector's items.

This didn't seem to be related to a bug, so I moved it here. - furrykef (Talk at me) 06:38, 25 November 2006 (UTC)

It was referring to the previous statement that such processors were sold with a bug. I'm putting it back. Potatoswatter 15:04, 25 November 2006 (UTC)
What exactly was the bug that was in the processor? More information could be given than just a "Severe interrupt bug". --SteelersFan UK06 03:10, 8 February 2007 (UTC)

Processor Prices

Does anyone know how much an 8086 processor cost when launched, either per unit or per thousand units? I think it is important information that is missing.-- 22:21, 13 August 2007 (UTC)

I found the pricing from the Intel Preview Special Issue: 16-Bit Solutions, May/June 1980 magazine. It is now posted at List of Intel 8086. Rjluna2 (talk) 01:29, 10 October 2015 (UTC)

I don't recall a onesy price for the CPU alone. I do recall that we sold a box which had the CPU, a few support chips, and some documentation for $360. That was an intentional reference to the 8080, for which that was the original onesy price. I think the 8080 price, in turn, was an obscure reference to the famous IBM computer line, but that may be a spurious memory. For that matter, I'm not dead sure on the $360 price for the boxed kit, nor do I recall the name by which we called it at the time. I have one around here somewhere, as an anonymous kind soul left one on my chair just before I left Intel for the second time. —Preceding unsigned comment added by (talk) 20:48, 26 February 2008 (UTC)

iAPX 86/88

The Intel manuals (published 1986/87) I have use the name iAPX 86 for the 8086, iAPX 186 for the 80186 etc. Why was this? Amazon lists an example here John a s (talk) 23:04, 6 February 2008 (UTC)

Around the time the product known internally as the 8800 was introduced as the iAPX432 (a comprehensive disaster, by the way), Intel marketing had the bright idea of renaming the 8086 family products.

A simple 8086 system was to be an iAPX86-10. A system including the 8087 floating point chip was to be an iAPX86-20.

I (one of the four original 8086 design engineers) despised these names, and they never caught on much. I hardly ever see them any more. But, since Marketing, of course, controlled the published materials, a whole generation of user manuals and other published material used these names. Avoid them if you are looking for early-published material. If you are looking for accuracy, avoid the very first 8086 manual; the second version had a fair number of corrections, but nearly all of those were in place long before the iAPX naming.

Peter A. Stoll —Preceding unsigned comment added by (talk) 20:43, 26 February 2008 (UTC)

Embedded processors with 256-byte paragraphs

The article says:

According to Morse et al., the designers of the 8086 considered using a shift of eight bits instead of four, which would have given the processor a 16-megabyte address space.

It seems some manufacturers of 80186-like processors for embedded systems have later done exactly this. That could perhaps be mentioned in the "Subsequent expansion" section if reliable secondary sources can be found. So far, I've found only manufacturer-controlled or unreliable material. Paradigm Systems sells a C++ compiler for "24-bit extended mode address space (16MB)"[1] and lists supported processors from several manufacturers:

  • Genesis Microchip: I can't find any model number for a processor of theirs supporting this mode. Later bought by STMicroelectronics.
  • Lantronix: DSTni processors, spun off to Grid Connect. DSTni-LX Data Book[2] says the processor samples pin PIO31 on reset to select 20-bit or 24-bit addresses. DSTni-EX User Guide[3] calls the 4-bit shift "compatible mode" and the 8-bit shift "enhanced mode".
  • Pixelworks: ImageProcessor system-on-a-chip, perhaps including the 80186-compatible microprocessor in PW164[4]. Pixelworks licensed[5] an 80C186 core from VAutomation but only 20-bit addressing is mentioned there.
  • RDC Semiconductor: R2010, always 8-bit shift.[6]
  • ARC International: acquired[7] VAutomation, whose Turbo186 core supported a 256-byte paragraph size.[8][9] (talk) 23:52, 11 July 2009 (UTC)
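If it helps, the difference between the standard 4-bit segment shift and the 8-bit shift used by these embedded parts can be shown in a few lines. An illustrative Python sketch; the function name and the example register values are mine, not from any datasheet:

```python
def physical_address(segment, offset, shift=4):
    """Form a physical address from a segment:offset pair.
    shift=4 is the standard 8086 scheme (16-byte paragraphs, 1 MiB);
    shift=8 is the 256-byte-paragraph scheme of the clones above (16 MiB)."""
    return ((segment << shift) + offset) & ((1 << (16 + shift)) - 1)

# Standard 8086: 20-bit address space
assert physical_address(0xF000, 0xFFF0) == 0xFFFF0
assert physical_address(0xFFFF, 0x0010) == 0x00000   # wraps at 1 MiB

# 8-bit shift ("enhanced mode"): 24-bit address space
assert physical_address(0xF000, 0xFFF0, shift=8) == 0xF0FFF0
assert (1 << (16 + 8)) == 16 * 1024 * 1024           # 16 MiB total
```

The same 16-bit segment and offset registers cover sixteen times the memory simply because each paragraph is sixteen times larger.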

Interesting, please feel free to add some of this to the article (at least for my part, I've written around 60-70 percent of it, as it stands). /HenkeB —Preceding unsigned comment added by (talk) 19:40, 3 November 2009 (UTC)
The RDC R2010 is RISC, not x86-compatible Лъчезар共产主义万岁 17:51, 14 August 2011 (UTC)

Instruction timing

Can anyone provide a reference to an official source from which these timings are taken? The datasheets given at the end of the article present only external bus timings (memory read/write is 5 clocks) but don't list any other information about the internal logic and ALUs. Thank you. bungalo (talk) 10:26, 26 April 2010 (UTC)

One of these 5 cycles in the timing diagrams is a fully optional wait state, so the basic memory access cycle is 4 clock cycles in the 8086. (talk) 09:42, 27 April 2010 (UTC)

What a valid citation looks like

Moved from user talk page...article discussions belong on article talk pages May I ask you again: what is wrong with a data sheet or a MASM manual as a ref? Can you get a better source? I don't get it! What are you trying to say? "Only the web exists" or something? (talk) 06:08, 3 November 2010 (UTC)

The problem is that "You could look it up somewhere" isn't a valid reference style and tells the reader nothing about where to find a specific relevant description of the instruction timing cycles. Find a book with a publisher and an ISBN, and a page number for a table of instruction cycle times. I'd cite my Intel data book if I could, but right now it's stored away and I won't have it unpacked for weeks. Telling the reader to "look it up in some manual you've never heard of" is lazy and very nearly insulting to the reader, who probably is familiar with the idea of finding relevant literature anyway and doesn't need some patronizing Wikieditor to tell him the bleeding obvious. This is NOT a difficult fact to cite, there must have been at least 3 books printed in the history of the world that actually list instruction cycle timing for the 8086 and all we need is ONE ISBN/author/publisher/date/page number citation to validate this table. --Wtshymanski (talk) 13:07, 3 November 2010 (UTC)
Taken from the MASM 5.0 reference manual; numbers were also included in early 8086 and 8088 datasheets. That's a pretty poor citation. Does this golden manual have a publisher, a date, you know, stuff like that? Perhaps even an ISBN? Any specific datasheets in mind? Publisher, date, etc. ? --Wtshymanski (talk) 16:10, 20 December 2010 (UTC)
Still not a citation. Go to the front page of whatever manual you're looking at, and copy down here the name of the editor or author, the full name of the manual, its edition number if any, the copyright date, the publisher, the ISBN if it has one, and the pages on which these numbers allegedly appear. Can you do that for us? Hmm? --Wtshymanski (talk) 13:01, 21 December 2010 (UTC)
There. Was that so hard? Now when a vandal comes along and randomizes all those cycle timing numbers (if that hasn't happened already), someone can compare with *his* copy of the Microsoft MASM Manual 5th Edition and make the numbers consistent with the reference again. Page numbers are still needed. It's important to say *which* manufacturer's data sheets you're talking about, too. Does an AMD part or Hitachi part have the same cycle count as the Intel part? As long as we say *which* manufacturer we're talking about, we're ok. --Wtshymanski (talk) 15:29, 21 December 2010 (UTC)

Of course it is about the original Intel part if nothing else is said (see the name of the article). I gave a perfectly fine reference in April this year, although I typed in those numbers many years ago. I have never claimed it to be a citation; citations are for controversial statements, not for plain numerical data from a datasheet. The MASM 5.0 reference manual was certainly uniquely identifiable as it stood, and it would really surprise me if more than a few per mille of all the material on WP had equally good (i.e. serious and reliable) references. Consequently, by your logic, you had better put a tag on just about every single statement on WP...!? (talk) 13:05, 22 December 2010 (UTC)

If it was important enough for you to add it to the article, it was important enough to have a proper citation. Never say "of course" in an encyclopedia. How do we know the MASM manual is talking about the same part, stepping level, etc. ? Call up your local public library and ask them for "the MASM reference manual, you know, like, 5.0 ? " and see how far you get with just that. Real encyclopedias don't need so many references because real encyclopedias have an editorial board and paid fact checkers. Wikipedia relies on citations because we don't trust our editors to get anything right and so we rely on multiple persons to review any contributions. Wikipedia is sadly lacking in citations. Any numerical data should have a citation so that vandalism can be detected and reverted; you may have noticed once or twice an anon IP will change a single digit in a value and scurry off back under a rock, leaving a permanent error in the encyclopedia because no-one can find the original reference from whence the data came. It's too bad you were inconvenienced with backing up a statement of fact...let's take all the references out of Wikipedia and rely on our anonymous editors to keep the tables right. --Wtshymanski (talk) 14:38, 22 December 2010 (UTC)
Any distance or perspective? Wikipedia is not supposed to be some kind of legal document, it's a free encyclopedia. Also, I find it peculiar how extremely concerned you seem to be with "vandals" when it was yourself that actually messed up the table – putting JMP and Jcc timings in N.A. columns. I corrected that (20 dec 11.01) and put that (damn) reference back. However, you kept on deleting that information, over and over. That style makes me sick and tired just by thinking of contributing any further to Wikipedia (and don't you call me "any slacker"). (talk) 20:59, 22 December 2010 (UTC)
You don't get it, do you? I would never confuse this stunning display of erudite scholarship with that of a slacker. --Wtshymanski (talk) 22:49, 22 December 2010 (UTC)
Looking at the Intel Component Data Catalog, 1980 edition, bearing Intel publication number AFN-01300A-1, which includes 8086/8086-2/8086-4 16-Bit HMOS Microprocessor data sheet, there's no listing of instruction cycle times for each opcode. Until a data sheet listing instruction cycle times can be found, I'm deleting the vague description since Intel itself doesn't seem to have published instruction cycles in their own data sheet. --Wtshymanski (talk) 06:07, 2 January 2011 (UTC)
The document we really want to cite is Intel 8086 family user's manual, October 1979, publication number 9800722-03. The bootleg copy on the Web gives instruction timing cycles in Table 2-21, pages 2-51 through 2-68 (starting in the non-OCR .pdf at page 66). The summary in this article leaves out a lot of details. --Wtshymanski (talk) 06:39, 2 January 2011 (UTC)

Random vs ad-hoc

Industry jargon (such as the cited reference, first one on the Google Books hits) seems to prefer "random logic" as the description for the internal control circuits of a microprocessor, as contrasted with "microcode". "Ad-hoc" has the disadvantage of being Latin, and is probably as pejorative as "random" if you're sensitive about such things. --Wtshymanski (talk) 14:51, 29 June 2011 (UTC)

I'm sure that within the industry it's described as "random logic". It's also described as "plumbing", "useless overhead" and probably "that squiggly stuff". The problem is that "random" has a dictionary meaning, and it's one that's unhelpful out of this narrow context. To the lay reader, this is confusing and thus unhelpful.
I'm not defending ad hoc. It's entirely apposite and quite a good description of the real situation. It's also a pretty common and well-understood Latin term (by the standards of Latin loan phrases). However it is Latin, and there's a fair argument that we should avoid such terms as a general principle of accessibility.
Random though is just bad, because it is misleading - you yourself felt compelled to add an edit summary with an apology for using it. Call it what you like, but don't call it random. Andy Dingley (talk) 00:14, 30 June 2011 (UTC)
It doesn't matter what I call it, Andy. That's the great thing about Wikipedia. What does the literature call it? --Wtshymanski (talk) 02:18, 30 June 2011 (UTC)
Recording of one instance of its use is no compulsion to use that unhelpful term and make the article worse. There are any number of ways to word this; if you don't like ad hoc, then lose it. However, adding 'random' gives quite the wrong message. Andy Dingley (talk) 08:18, 30 June 2011 (UTC)
Google Books gives 4130 hits for ' microprocessor "random logic" ' and 7 hits for ' microprocessor "ad-hoc logic" '. It's what people writing about microprocessors call non-microprogrammed logic. Anyone studying microprocessors is going to run across the dread phrase "random logic" very quickly, so why not use it in context here? --Wtshymanski (talk) 13:57, 30 June 2011 (UTC)
WP:V still isn't WP:COMPULSION. Most readers here (and anywhere on WP) are very naive and new to the subject, not those seriously studying it. There's no expectation that they're "going to run across the dread phrase". Andy Dingley (talk) 14:11, 30 June 2011 (UTC)
FWIW, I strongly agree that it should be "random logic"; that is a standard term. It is not comparable to descriptions like "plumbing," "useless overhead," and "that squiggly stuff," which are *not* standard technical terms. It's no more pejorative than the word "random" in "random access memory"; does anyone want to change RAM to AHAM? "Random" here means "capable of performing any arbitrary function," not "selected from a probability distribution"; no designer objects to using it in cases where it's appropriate, and no designer sees anything pejorative about describing random logic using the term "random logic." If anything, "ad-hoc" is pejorative because it suggests the design was done in an unsystematic way. I hesitate to mention this, but another standard term might be "combinational logic." That would not be as good ("random logic" is more specific because it excludes things like PLAs that could be included in "combinational logic") but it at least would make the point that it's not sequential logic like microcode, while avoiding the dirty R-word. I wish I could change "a mix" to "a mixture," but the page seems to be protected. (talk) 14:39, 9 July 2011 (UTC)

Small detail

In the Microcomputers using the 8086 section, the Compaq Deskpro clock speed listed doesn't match the one listed on the wiki page for that product, and it's not listed on the "Crystal_oscillator_frequencies" page either. I have no idea where this comes from, so I added a (?).... — Preceding unsigned comment added by (talk) 08:06, 1 March 2012 (UTC)

Absurd predecessor/successor in misinfo box

It's not like kings or popes or presidents...Intel was still selling lots of 8080s after the 8086 came out, and the 8086 and 8088 were virtually the same era - the 80286 came out before the 80186, for that matter. --Wtshymanski (talk) 14:20, 21 August 2012 (UTC)

It doesn't imply that one entirely replaced the other and that only one could be on sale at a time.
The point is that the 8086 built upon the work with the 8080, and its instruction set and assembly language were broadly similar (compared at least to the 6800 / 68000 or the radically different TI 99xx series). The 8088 was a successor to it as a "value engineered" example with a skinny external bus (a thread of hardware simplification). The 80186, 80286, even the 80386, 486 et al could be considered as successors (a thread of increasingly sophisticated memory management). As we have to choose one example from each thread, lest it become too confusing, then the 8088 & 80186 seem reasonable. The 8088 & 80286 might well be better though. Andy Dingley (talk) 14:31, 21 August 2012 (UTC)
Well, if you think this vague (to me) notion of predecessor and successor is useful in the box - alright, though I think it gives the reader the wrong idea. Hopefully anyone seriously interested in the development of Intel processors will read more than the info box, and anyone who's satisfied with just the "info" box probably wouldn't care anyway. --Wtshymanski (talk) 15:19, 21 August 2012 (UTC)
There might be some reasonable arguments for giving the 8080/8085 as the predecessor (the 8086 was intended as a more powerful spiritual successor, with similar ISAs and the same support chips, and the NEC V20 could run both ISAs), but the 8088 was more like the Celeron of its time: not a successor but a slower, cheaper version working with cheaper support chips. —Ruud 20:47, 22 August 2012 (UTC)
I'm not spiritual enough to understand this. There are fairly well-defined, agreed-upon successors or predecessors of Millard Fillmore, Pius X or King Moshoeshoe, but many of these chips have much more tangled family trees, overlapping partly or entirely in their brief time in the sun. I don't think the notions of successor/predecessor are tightly defined when it comes to Intel chips. --Wtshymanski (talk) 21:43, 22 August 2012 (UTC)
I agree with the last sentence. The 8086 not being binary compatible with the 8080 makes it not a clear-cut (no footnotes necessary) successor. Some other aspects, including the fact that it was source compatible, do make good arguments for calling it a successor. —Ruud 21:53, 22 August 2012 (UTC)
The 8086 is the first member of the Intel x86 family. It is considered to be a successor to the 8080 because 8080 source code could be rebuilt for the 8086. The 8085 is definitely NOT a variant of the 8086; it is a variant of the 8080 and fully binary compatible with it. The 8088 is not a successor to the 8086; it is an 8086 with an 8-bit data bus, much like the 80386 SX is an 80386 DX with a 16-bit data bus. As far as binary compatibility goes, the Zilog Z-80 is a true successor to the 8080. The 80186 is the successor to the 8086 and it certainly came before the 80286 (hence the numbering). The 80186 is where instructions like PUSHA and POPA originated. The 80186 (or the 8-bit-data-bus 80188) weren't used in PCs because of the integrated hardware (e.g. interrupt controller, DMA controller, timer) which was incompatible with PC architecture. The 80286 supports all of the opcodes of the 80186 and of course adds protected mode support. Asmpgmr (talk) 19:53, 24 August 2012 (UTC)
Well, the 8080 instruction set was a subset of the 8085; SID and SOD don't work on an 8080. My CP/M-fu has long since vanished but wasn't there some stunt with the flags register that you could do, to determine if the program was running on an 8085 or an 8080? --Wtshymanski (talk) 20:00, 24 August 2012 (UTC)
Pins have nothing to do with the instruction set. 8085 is 100% binary compatible with the 8080 though this is getting off topic. I've removed 8085 as a variant of the 8086 in the infobox since that was completely wrong. The infobox as it is now is correct. Asmpgmr (talk) 20:22, 24 August 2012 (UTC)
Not correct. The 8085 instruction set was a superset of the 8080 instruction set. Since the 8080 came first, it could not have been a subset of the then non-existent 8085. (talk) 17:37, 25 August 2012 (UTC)
The 8086 was code compatible with the 8080 at the source code level. This means that although an 8080 ROM would not work with the 8086, nevertheless, every 8080 instruction had an equivalent instruction (or combination of instructions) in the 8086. (talk) 17:37, 25 August 2012 (UTC)
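To illustrate the source-level compatibility described above: each 8080 register has a fixed 8086 counterpart (A maps to AL, B to CH, C to CL, D to DH, E to DL, H to BH, L to BL), so a mechanical translator can rewrite 8080 assembly into 8086 assembly. Below is a toy Python sketch of the idea; the handful of mappings shown is a deliberately tiny, simplified subset of what Intel's actual CONV86 translator handled:

```python
# 8080 registers map onto 8086 8-bit register halves.
REG_MAP = {"A": "AL", "B": "CH", "C": "CL", "D": "DH", "E": "DL", "H": "BH", "L": "BL"}

def translate_8080(line):
    """Translate a few representative 8080 instructions to 8086 syntax.
    Illustrative only; a real translator handles the whole instruction set."""
    op, _, args = line.partition(" ")
    regs = [a.strip() for a in args.split(",")] if args else []
    if op == "MOV" and len(regs) == 2:      # MOV r1,r2  ->  MOV r1',r2'
        return f"MOV {REG_MAP[regs[0]]},{REG_MAP[regs[1]]}"
    if op == "MVI":                         # MVI r,imm  ->  MOV r',imm
        return f"MOV {REG_MAP[regs[0]]},{regs[1]}"
    if op == "ADD":                         # ADD r      ->  ADD AL,r'
        return f"ADD AL,{REG_MAP[regs[0]]}"
    raise NotImplementedError(op)

assert translate_8080("MOV A,B") == "MOV AL,CH"
assert translate_8080("MVI C,10H") == "MOV CL,10H"
assert translate_8080("ADD E") == "ADD AL,DL"
```

The point is that the mapping is fixed and mechanical, which is exactly what "source compatible but not binary compatible" means here.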

It is clear enough that the 8085 was the processor that immediately preceded the 8086. Since the 8080 and the 8085 were so architecturally similar, it would seem reasonable to show the predecessor as the 8080/8085. (talk) 17:41, 25 August 2012 (UTC)

Slowest clock speed

The lowest rated clock speed was 5 MHz for both the 8086 here and the 8088 (which is what most PCs used, the IBMs exclusively so). Yes, the original IBM PC ran at 4.77 MHz, but that was a design choice: from memory it mapped quite well onto the video timing signals, although I admit I forget the details. The chip itself was a 5 MHz part underclocked to the slower speed. Datasheets for the two chips are available: 8086 and 8088: there are no chips slower than 5 MHz described. Crispmuncher (talk) 04:05, 29 August 2012 (UTC).

The IBM PC and many compatibles used a 14.31818 MHz clock source divided by 3 which is 4.77272 MHz. Even if 5 MHz was the slowest speed rated by Intel, many early PCs definitely ran at 4.77 MHz so listing that is correct. Asmpgmr (talk) 04:37, 29 August 2012 (UTC)
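The 4.77 MHz figure mentioned above falls out of the NTSC colorburst crystal the PC shared with its video circuitry. A quick Python check (the variable names are mine; the frequencies are the standard NTSC values):

```python
# The IBM PC's master oscillator was 4x the NTSC color subcarrier.
colorburst = 3.579545e6           # Hz, NTSC color subcarrier
master = 4 * colorburst           # 14.31818 MHz crystal
cpu_clock = master / 3            # divided by 3 for the CPU clock

assert abs(master - 14.31818e6) < 1          # matches the 14.31818 MHz source
assert abs(cpu_clock - 4.772727e6) < 1e3     # approx. 4.77 MHz, as stated
```

So 4.77 MHz is an artifact of cheap clock generation in one system design, not a speed grade Intel ever sold.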
The first PCs used 8088s, not 8086s, so that isn't relevant here in any case. However, the template documentation at Template:Infobox CPU points out the "slowest" parameter refers to the lowest maximum clock, it does not address underclocking. There probably is a minimum clock speed: I recall Intel making an explicit point about a fully static design for embedded 80186's but only much later, but that is probably on the order of 1MHz or less. Crispmuncher (talk) 04:55, 29 August 2012 (UTC).
IBM PCs used the 8088. Many other companies used the 8086 and some ran their systems at 4.77 MHz just like companies did with 8088-based systems though many did not (8 MHz 8086 systems were common like the AT&T 6300 / Olivetti M24). Anyway I see no reason not to list 4.77 MHz as the minimum clock speed for both the 8086 and the 8088 since this usage was common because of PCs. Asmpgmr (talk) 15:48, 29 August 2012 (UTC)
The minimum clock rate for the HMOS 8086 is not quite as low as 1 MHz. From the Intel 8086/8086-2/8086-1 datasheet, the maximum clock cycle time for all three speed grades (5 to 10 MHz max.) is 500 ns, which equals a minimum clock frequency of 2 MHz. The fact that this is constant for all speed grades implies that it results from a stored-charge decay time which is independent of the factors that limit logic signal propagation time and thus maximum frequency. As a CPU can often be overclocked a significant amount above its rated maximum frequency before it begins to actually malfunction, it is probably also possible to "underclock" a CPU below the minimum frequency specified by the manufacturer before malfunctions actually occur, perhaps by a much larger percentage than it can be overclocked. As far as I know, few people have tried this (outside the CPU design and engineering labs, where I'm sure it has been tried extensively). (talk) 11:06, 9 October 2013 (UTC)
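For what it's worth, the 2 MHz lower limit quoted above is simply the reciprocal of the datasheet's 500 ns maximum clock period:

```python
# Max. clock cycle time quoted above from the Intel 8086/8086-2/8086-1 datasheet
max_clock_period = 500e-9             # seconds
min_frequency = 1 / max_clock_period  # Hz
assert abs(min_frequency - 2e6) < 1   # 2 MHz minimum clock
```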
The point of this is not to list the minimum speed at which the chip would still work (some development boards ran even slower, to make use of cheaper memory) but to cite the design speed of the first generation of the processor - 5 MHz. This isn't the PC article, it's the 8086 article. Andy Dingley (talk) 15:59, 29 August 2012 (UTC)
Yes, good point, and the 8086 was used in lots of things besides PC and compatible desktop computers, including arcade game machines, space probes, and NASA STS (Space Shuttle) ground support equipment. (talk) 11:06, 9 October 2013 (UTC)

Something you should know or insert into article about memory segmentation

How to convert hexadecimal into decimal.
Intel 8086 Memory Segmentation, page 8.
Intel 8086 Datasheet.
Intel 8086/8088 User Manual.
1) 64 KB of the 2^20 = 1048576 RAM addresses are used for ES (Extra Segment), from address 70000(h) to address 7FFFF(h) (in hexadecimal). This is from 7 * 16^4 + 0 * 16^3 + 0 * 16^2 + 0 * 16^1 + 0 * 16^0 = 458752 to 7 * 16^4 + 15 * 16^3 + 15 * 16^2 + 15 * 16^1 + 15 * 16^0 = 458752 + 61440 + 3840 + 240 + 15 = 524287.
2) 64 KB of the 1 MB RAM address space is used for DS (Data Segment), from address 20000(h) to address 2FFFF(h). This is from 2 * 16^4 = 131072 to 2 * 16^4 + 15 * 16^3 + 15 * 16^2 + 15 * 16^1 + 15 * 16^0 = 131072 + 61440 + 3840 + 240 + 15 = 196607.

— Preceding unsigned comment added by Paraboloid01 (talkcontribs) 13:10, 1 November 2012 (UTC)
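The worked examples above reduce to one operation: shift the segment value left four bits (multiply by 16) to get the segment's physical base, then span the 16-bit offset range. A Python restatement of the same numbers; the function name `segment_range` is mine, chosen for illustration:

```python
def segment_range(segment):
    """Physical byte range covered by a 64 KB segment (shift-by-4 scheme)."""
    start = segment << 4          # segment * 16
    return start, start + 0xFFFF  # lowest and highest byte addresses

# ES = 7000h -> 70000h..7FFFFh = 458752..524287
assert segment_range(0x7000) == (0x70000, 0x7FFFF)
assert segment_range(0x7000) == (458752, 524287)

# DS = 2000h -> 20000h..2FFFFh = 131072..196607
assert segment_range(0x2000) == (0x20000, 0x2FFFF)
assert segment_range(0x2000) == (131072, 196607)

# 2^20 bytes of addressable RAM in total
assert 2**20 == 1048576
```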

Derivatives and clones

Enhanced clones

Can someone provide information or a reference to support this claim: Compatible—and, in many cases, enhanced—versions were manufactured by Fujitsu, Harris/Intersil, OKI, Siemens AG, Texas Instruments, NEC, Mitsubishi, and AMD

What is an enhanced version? To me it is a version that has some additional software features. To the best of my knowledge, NEC was the only company that produced such an enhanced 8086 version: its V30 processor. Harris and OKI (and later Intel) made a CMOS version, the 80C86; it doesn't have any software enhancements. It appears to be a simple conversion from NMOS to CMOS logic technology.

Also, I don't think Texas Instruments ever made an 8086 clone (they did make the 8080, though).

I don't think "enhanced" means only software features must be improved. That is thinking like a programmer, and a lot of people who work with CPUs aren't programmers, or aren't mainly programmers—including most of the engineers at Intel that design the CPUs! Hardware enhancements could include lower power dissipation, more hardware interface flexibility (e.g. inbuilt 8288 bus controller logic), demultiplexed buses, higher output drive capability, and, as in the CMOS versions, more flexible clock rate (down to DC for CMOS). If these aren't enhancements, what are they? They aren't modifications, because they don't alter the basic capability, only extend it.
Another hardware enhancement would be the addition of internal peripherals like an interrupt controller or DMA controller. The interrupt controller might just be simple logic that connects a few separate interrupt pins each to specific interrupt levels, skipping the INTA cycle, or doing a dummy one, or doing a real one that uses a different interrupt vector table from the Intel-standard one used by main INTR interrupt pin. Some added peripherals, such as a DMA controller or a programmable interrupt controller, would be visible to software, whereas the simple non-programmable interrupt controller described above would not be.
Also, there could be hardware enhancements to the base CPU that would be visible to programmers. Faster instruction execution would be one that would be visible to software, if the software was timing-aware. (If you program in a high-level language, then no, you can't see this, but you can if you program in assembly language.) Caches and buffers, e.g. for data and instructions from memory, would also potentially have a software-visible effect on timing. (talk) 10:22, 9 October 2013 (UTC)

Soviet clones[edit]

I don't agree with the following claim: The resulting chip, K1810BM86, was binary and pin-compatible with the 8086, but was not mechanically compatible because it used metric measurements. In fact the difference in lead pitch is minimal: the imperial pitch is 2.54 mm and the metric pitch is 2.5 mm, i.e. only 0.04 mm between two adjacent pins, which accumulates to roughly 0.4 mm of maximal offset at the end pins of a 40-pin DIP package (about 9.5 pin positions out from the centre of the row). So Soviet ICs can be installed in 0.1" pitch sockets and vice versa. It was not unusual to see Western chips installed in later Soviet computers using metric sockets. And interestingly enough, I have also seen a Taiwanese network card using a Soviet logic IC.

This picture shows an ES1845 board with an NEC D8237AC-5 DMA controller IC installed in a metric socket (top left corner). — Preceding unsigned comment added by Skiselev (talkcontribs) 21:52, 5 June 2013 (UTC)
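The pitch arithmetic above can be checked with a few lines of Python. This is a sketch assuming the standard 2.54 mm (0.1") imperial pitch and a 2.5 mm metric pitch, with the worst-case offset measured from the centre of the pin row:

```python
# Worst-case pin misalignment when a metric-pitch (2.5 mm) DIP is placed
# in an imperial-pitch (2.54 mm / 0.1") socket, measured from the centre
# of the pin row so the error is split evenly between both ends.
IMPERIAL_PITCH_MM = 2.54
METRIC_PITCH_MM = 2.50

def max_pin_offset_mm(total_pins):
    """Worst-case lead offset for a DIP with the given total pin count."""
    pins_per_side = total_pins // 2             # 20 per side for a 40-pin DIP
    gaps_from_centre = (pins_per_side - 1) / 2  # 9.5 pin positions to the end
    return gaps_from_centre * (IMPERIAL_PITCH_MM - METRIC_PITCH_MM)

print(round(max_pin_offset_mm(40), 2))  # roughly 0.38 mm, i.e. ~0.4 mm
```

This supports the point being made: a few tenths of a millimetre is within the compliance of typical DIP leads, so the packages interchange in practice.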

Variable MN/MX[edit]

There is no reason a computer designer could not equip an 8086/8088-based computer with a register port that would change the MN/MX pin, taking effect at the next reset. (Changing the level of the pin while the CPU is running might have unpredictable and undocumented results.) However, since other hardware needs to be coordinated to the MN/MX pin level, this would require other hardware to switch at the same time. This would not normally be practical, and probably the only reasonable use for it would be hardware emulation of two or more different computer models based on the same CPU (such as the IBM PCjr, which uses the 8088 in minimum mode, and the IBM PC XT, which uses it in maximum mode). It is even possible that the 8086/8088 does not respond to a change in the level of MN/MX after a hardware reset but only at power-up. Even then, it is certainly possible to design hardware that will power down the CPU, switch MN/MX, wait an adequate amount of time, and power it back up. (talk) 10:01, 9 October 2013 (UTC)

There is a very good reason why it cannot be done - at least not easily. Eight of the pins of the processor have different functions between MAX and MIN mode. Since these are related to the bus control signals, it would be necessary to provide logic such that the 8288 bus controller was in circuit for the MAX mode and taken out of circuit for the MIN mode. Since the purpose of the MIN mode is to save on that 8288 controller chip, there would be nothing to gain by having a switchable mode system as the 8288 would have to be present for the MAX mode of operation. I B Wright (talk) 16:36, 7 March 2014 (UTC)
It's true that a system could change the MN/MX pin, but it would require a lot of circuitry and effort. Not that I haunt the design offices of the world, but I've never heard of a system that wanted to do this. Were you thinking of including an observation about dynamically changing MN/MX? If so, I wouldn't. You could just as well build a system that changed the Vcc supply rail from 5 V to 5.5 V under software control, or changed the clock from 4 MHz to 5 MHz, but it's not a typical application or educational/insightful, so it's not for the 8086 wiki page. --ToaneeM (talk) 09:39, 15 November 2016 (UTC)

Memory organisation[edit]

Maybe I missed it in the article, but there seems to be nothing about how the memory is organised and interfaced to the 8086. Unlike most 16-bit processors, where the memory is 16 bits wide and is accessed 16 bits at a time, memory for the 8086 is arranged as two 8-bit wide chunks, with one chunk connected to D0-D7 (low byte) and the other to D8-D15 (high byte). This arrangement comes about because the 8086 supports both 8-bit and 16-bit opcodes. The 8-bit opcodes can occupy both the low byte and the high byte of the memory. Thus the first (8-bit) opcode will be present on the low byte of the data bus, but the next will be present on the high byte. Further, if a single 8-bit opcode is on the low byte, then any immediately following 16-bit opcode will be stored with its lowest byte in the corresponding high byte of the memory and its highest byte in the next addressed low byte. In both cases there is an execution-time penalty. In the first scenario, the processor has to switch the second opcode back to its low-byte position (the time penalty is minimal in this case). In the second scenario, the processor has to perform two memory accesses, as the 16-bit opcode occupies two addresses; further, the processor then has to swap the low and high bytes once read (the swap is a minimal time penalty, but the two memory accesses are a significant penalty, as two cycles are required). The processor then has to read the second memory location again to recover the low byte of the next 16-bit opcode, or the 8-bit opcode, as required. This means that a series of 16-bit opcodes aligned on an odd boundary forces the processor to use two memory access cycles for each opcode.
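The even/odd access penalty described above can be modelled with a short Python sketch (this models only the bus-cycle count for a single transfer, not the real prefetch queue or timing):

```python
def bus_cycles(address, nbytes):
    """Bus cycles the 8086 needs to transfer nbytes starting at address.

    The 16-bit bus is split into an even bank (A0 = 0, on D0-D7) and an
    odd bank (A0 = 1, on D8-D15).  A single byte always takes one cycle;
    a word takes one cycle when even-aligned but two cycles when it
    starts at an odd address, since its two halves then sit at
    different word addresses.
    """
    if nbytes == 1:
        return 1
    if nbytes == 2:
        return 1 if address % 2 == 0 else 2
    raise ValueError("the 8086 bus transfers at most 2 bytes per cycle")

print(bus_cycles(0x0100, 2))  # 1: even-aligned word, single cycle
print(bus_cycles(0x0101, 2))  # 2: misaligned word costs an extra cycle
print(bus_cycles(0x0101, 1))  # 1: a byte never pays the penalty
```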

Code can be assembled in one of two ways. The code can be assembled such that the opcodes occupy whatever memory position is next available. This gives more compact code, but with an execution penalty. Alternatively, the code can be assembled such that a single 8-bit opcode always occupies the low byte of memory (a NOP opcode is placed in the high byte), and a 16-bit opcode is always placed at a single address in the correct low/high byte order (again, NOP opcodes are used as required). This gives faster execution, as the above time penalties are avoided, but the price is larger code, as the valid program codes are padded with NOP opcodes to ensure that all opcodes start on an even byte boundary. Assemblers will usually allow a mixture of modes. Compilers usually do not offer such control and will invariably compile code for minimum execution time.
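The NOP-padding scheme described above (what an assembler's EVEN directive does) can be sketched in Python; `align_even` is a hypothetical helper, assuming NOP is the one-byte opcode 0x90:

```python
NOP = 0x90  # the 8086 one-byte NOP opcode

def align_even(code):
    """Pad an emitted byte stream with a NOP so the next instruction
    starts on an even offset, as an assembler's EVEN directive does."""
    if len(code) % 2 == 1:
        code.append(NOP)
    return code

stream = [0xFC]        # CLD: a one-byte instruction at offset 0
align_even(stream)     # pad so the next instruction lands on offset 2
print(stream)          # [252, 144], i.e. CLD followed by NOP
```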

Somewhere, I have a book that details all of this, but I am blowed if I can find it at present. I B Wright (talk) 17:10, 7 March 2014 (UTC)

Sample code[edit]

The sample code is very sloppy. It saves/sets/restores the ES register even though it's never referenced by the code. The discussion says this is needed but it's wrong ("mov [di],al" uses the default DS: segment just like "mov al,[si]" does). It would be needed by the string instructions, like REP MOVSB, but the discussion proposes replacing the loop with REP MOVSW (a word instruction), which would actually copy twice as many bytes as passed in the count register. REP MOVSB would work, and would obviate the need for the JCXZ check (REP MOVSB is a no-op when CX=0). — Preceding unsigned comment added by (talk) 20:14, 19 December 2015 (UTC)