Talk:Motorola 68000


This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as B-class on Wikipedia's content assessment scale.
This article has been rated as High-importance on the project's importance scale.

Code density

The first reference link is dead.

(The VAX and the 320xx microprocessor usually produced more compact code than the 68000. Sometimes the 8086 as well.)
---
Hard to believe. The VAX is a 32-bit machine.

(Yes, but it was byte-coded, very compact. GJ)

The TI 32000 series is a RISC machine, isn't it?

(possibly, for some definitions of RISC. But i meant the Natsemi chip 32016 and successors. I have now fixed the link! GJ)

RISCs execute fast, but their code is not compact.

(RISC is not usually compact. But several are not much worse than 68K, and the HP-PA allegedly beats nearly everything for code size. No i don't know how, and i never got my hands on one to test. GJ)

Even space-optimized RISCs like the ARM need larger code than the 68K. They had to really sweat the ARM down with the "thumb" and "thumbscrew" approaches to reduce it to less than the 68K. Just reading about it tells me somebody had a bad 6 months getting there.

Certainly the 8086 is not smaller; you'll cram about 2x as much functionality per byte into a 68K machine as into an x86. If you don't believe -me-, see the 6/27/97 entry:

http://vis-www.cs.umass.edu/~heller/cgi-bin/heller.cgi/?Frame=Main

x86 code is just not that compact. Ray Van De Walker

(It was 20% smaller than 68K the only time i actually coded something in both and cared enough to check the sizes. It did depend on what you were coding. 32 bit arithmetic on 8086 was horrible, and running out of registers was nearly as bad. And if you could make use of auto increment and decrement on 68K that was a big win. But the stuff i did mostly avoided all that, and was extremely compact on 8086. This experience was apparently almost normal for hand coded 16 bit stuff. 68K usually won for stuff from compilers. With the 80386, Intel became more "normal" and all the comparisons probably changed. -- Geronimo Jones)

That web page (~heller) seems like an especially bogus comparison. 1) using C++ is a joke, neither CPU architecture was designed to support it. C would be a better language to compile. 2) using different breeds of compiler is silly, you should use compilers from the same stable. For example, the Lattice C compiler targets both architectures, as does Metrowerks (just about), and of course gcc. 3) the program you compile probably makes a big difference. As GJ points out, 32-bit ops on an 8086 are a pain, but if your C program uses mostly 'int' then that's not a problem. On an 80486 it might not make much difference. --drj

--- These are all reasonable objections. However, there's no doubt that many designers thought that it was more compact. So, I rewrote it from an NPOV to say so. I also rewrote the orthogonality discussion from an NPOV. I hope that helps. Ray Van De Walker

---

A common misunderstanding among assembly-language programmers had to do with the DBcc loop instructions. These would unconditionally decrement the count register, and then take the branch unless the decrement had left the count register at -1 (instead of 0, as on most other machines). I have seen many examples of code sequences which subtracted 1 from the loop counter before entering the loop, as a "workaround" for this feature.

In fact, the loop instructions were designed this way to allow for skipping the loop altogether if the count was initially zero, without the need for a separate check before entering the loop.

The following simple code sequence for clearing <count> bytes beginning at <addr> illustrates the basic technique for using a loop instruction:

    move.l  <addr>, a0
    move.w  <count>, d0
    bra.s   @7
@1: clr.b   (a0)+
@7: dbra    d0, @1

Notice how you enter the loop by branching directly to the decrement instruction. This makes it execute the loop body precisely <count> times, simply falling right out if <count> is initially zero.

Also, even though the DBcc instructions only support a 16-bit count, it is possible to use them with a full 32-bit count as follows:

    move.l <addr>, a0
    move.l <count>, d0
    bra.s @7
@1: swap d0
@2: clr.b (a0)+
@7: dbra d0, @2
    swap d0
    dbra d0, @1

This does involve a bit more code, but because the inner loop executes up to 65536 times for each time round the outer loop, the extra time taken is insignificant. Ldo 10:05, 12 Sep 2003 (UTC)

all motorola 68k

It's nice, and it needs improvement.

virtualization

The main page claimed that "the 68000 could not easily run a virtual image of itself without simulating a large number of instructions". This is false; the only 68000 instruction which violates the Popek and Goldberg virtualization requirements is the "MOVE from SR" instruction. The 68010 made "MOVE from SR" privileged for that reason, and added an unprivileged "MOVE from CCR" instruction that could be used in its place.
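To illustrate (a minimal sketch, not from any Motorola document; the register choice is mine):

    move.w  sr,d0    ; 68000: unprivileged, so a virtual-machine monitor
                     ; can't trap it, yet it exposes supervisor state
    move.w  ccr,d0   ; 68010: MOVE from SR became privileged and traps;
                     ; user code reads the condition codes this way instead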

It was further claimed that "This lack caused the later versions of the Intel 80386 to win designs in avionic control, where software reliability was achieved by executing software virtual machines.". The i386 is MUCH harder to virtualize than a 68000, as it has very many violations of the Popek and Goldberg requirements, and they are much more difficult to deal with than the 68000's "MOVE from SR" instruction. See X86 virtualization, and in particular Mr. Lawton's paper referenced there.

I'm not sure how common the i386 was in avionics, but the 68000 and later 68K family parts were in fact widely used.

--Brouhaha 22:52, 23 Nov 2004 (UTC)

I know the early Apollo workstations had to include a major KLUDGE in order to implement virtual memory: they had TWO 68000s, one running one clock cycle ahead of the other. If the one ahead got a page fault, the one behind would service the fault, then they'd exchange places.

"Its name was derived from the 68,000 transistors on the chip."

Please supply a reference for this. Mirror Vax 21:12, 18 Jun 2005 (UTC)


It's really unlikely the 68000 has only 68,000 transistors. More likely the name came as an upgrade of the good old Motorola "6800" series, although there's almost no resemblance between the two architectures.

Actually the MC68000 did have approximately 68,000 transistor "sites"; that count included PLA locations that might or might not have an actual transistor depending on whether that PLA bit was a one or a zero. This information was widely publicized by Motorola FAEs back then, but wasn't in the data sheets, so it's hard to find anything that would be considered definitive today. At one time the Motorola SPS BBS had information on transistor counts of various devices in the 68K family, which ranged from 68,000 for the MC68000 to 273,000 for the MC68030. If someone had time to dig through electronics trade journals (Electronics, Electronic Design, EDN, EE Times) from 1979-1980, they might find mention of the transistor count.
Or one might pester one of the original designers of the MC68000. His email address isn't that hard to find, but I'm not going to put it here since that would probably result in the guy getting tons of email with dumb 68K questions.
Anyhow, it's accurate to say that the MC68000 designation derived from BOTH the transistor count and as a logical successor to the MC6800 family. --Brouhaha 01:03, 17 October 2005 (UTC)[reply]
We can only include verifiable information, not speculation (however plausible). Besides, how the name was arrived at is not important. Mirror Vax 01:54, 17 October 2005 (UTC)[reply]
The 68000 transistor count was widely known at the time - I'm sure one could find it mentioned in Byte (magazine) etc. There was a great deal of rivalry between the 8086 and the 68000. The transistor count was presumably a way of advertising how advanced the 68000 was, compared to the 8086, and of explaining why the 68000 was delayed. An important piece of information, IMHO. -- Egil 04:56, 17 October 2005 (UTC)[reply]
Just doing a quick google on "68000 transistors" I easily found:
Note the 29000 transistors of the 8086. Not mentioning the rivalry between the 68k and the 8086, and not mentioning the transistor count issue would be the wrong thing here, it is an important piece of historical information. The actual transistor count ended up slightly over 68000, I've seen 70000 mentioned. -- Egil 05:12, 17 October 2005 (UTC)[reply]
Mirror VAX wrote "We can only include verifiable information" -- since when? You *REALLY* would not like what the 68000 page would turn into if we removed everything that wasn't 100% verifiable from actual printed, customer-distributed Motorola literature. Since multiple people (myself included) have personal recollection of Motorola FAEs giving the 68000 transistor number and stating that it influenced the part number, I think it's fair game to include, and certainly it's closer to being authoritative than a lot of the other rubbish on the page.
The 68000 FAQ has a list of transistor counts that appears to have been derived from information Motorola put on their now-defunct "Freeware BBS". It confirms the transistor count. --Brouhaha 23:41, 17 October 2005 (UTC)[reply]
First of all, the subject is not the transistor count. We are discussing how the chip was named. Sorry if I didn't make that clear. Why isn't it good enough to simply state the transistor count, and leave to the reader speculation about what "influenced" the name? Mirror Vax 02:11, 18 October 2005 (UTC)[reply]
Also, it's possible that the influence worked in the other direction - perhaps they decided on the 68000 name, and then creatively rounded the transistor count to match (maybe there are really 67000, or 69000...) Mirror Vax 02:21, 18 October 2005 (UTC)[reply]
Your latest suggestion is ridiculous. If you had even bothered reading the references, you would have seen that the final design ended up with around 70,000. Motorola made a great marketing fuss about the transistor count wrt. chip name, and that is certainly something that should be mentioned. -- Egil 05:35, 18 October 2005 (UTC)[reply]
Mirror VAX asks "Why isn't it good enough": because various Motorola employees including FAEs at the time of introduction made a point of the transistor count being related to the part number; it's not just some random coincidence that customers noticed after the fact. Why do you have such a big problem with it? As for your other suggestion, I've been in the industry for over 20 years, currently work for a semiconductor company, and I've never yet heard of anyone basing design characteristics of a chip on the numerical portion of a part number.
It is much more likely the case that they had an approximate transistor budget in mind when they started the design, based on the process technology and die size they wanted. As the design progressed, the transistor budget probably changed. For instance, it could have been decided to increase the transistor budget to allow for more GPRs, or a larger instruction set. It is possible but rather less likely that the design ended up needing fewer transistors than the original budget. Without contacting the designers, we're unlikely to ever know what the original transistor budget at the outset of the project was.
In any case, it is common practice for the final part number to be determined AFTER the design is complete and ready for production. One chip my employer developed was known by the number 4000 during development, then 3035 for first silicon (not shipped to customers), then 3023 for the final product. Except possibly for the original 4000 designation, the part numbers were determined by the marketing department and were essentially unrelated to the engineering details. --Brouhaha 00:23, 19 October 2005 (UTC)[reply]
You don't know how the marketers arrived at the name. You weren't in the room. I wasn't in the room. So we have two choices: (1) we can invent a history that seems plausible, and might be wholly true, partly true, or wholly false, or (2) we can stick to what we know to be true. You prefer (1); I prefer (2). Mirror Vax 02:26, 19 October 2005 (UTC)[reply]
I know what the Motorola FAEs *said* was the basis for the name. So we can assume that they were telling the truth, or we can assume that they were lying, or we can assume that I am lying. Which seems more plausible to you?
Did you actually have any contact with Motorola FAEs regarding the MC68000 in the 1979-1981 timeframe? I dealt with the local Motorola FAEs in Denver as part of my job. --Brouhaha 05:36, 19 October 2005 (UTC)[reply]
OK, found a reasonably definitive reference. Harry "Nick" Tredennick, one of the engineers responsible for the logic design and microcode of the 68000 (and listed as one of the inventors on six of the Motorola patents regarding the 68000), posted to comp.sys.intel on 22-Aug-1996 a response to comments about the 68000 designation being derived from the transistor count, or as a followon to the 6800: "I think there was a little of each in the naming, but definitely some contribution from its being a follow-on to the 6800. We (the lowly engineers) were concerned at the time that the press would confuse the 6800 with the 68000 in reporting. It happened." This confirms what the Motorola FAEs were telling customers at the time. --Brouhaha 09:29, 24 October 2005 (UTC)[reply]
The current version of the article says, "The transistor cell count, which was said to be 68,000 (in reality around 70,000)...". I don't know if that's true or not, but if it is, it undermines the notion that the name was derived from the transistor count (as does Tredennick's statement that there was "definitely some contribution from its being a follow-on to the 6800"). Rather, it suggests that the stated transistor count was derived from the name. Why not name it the MC70000? Why, if you are bragging about large transistor count, would you "round down" 70,000 to 68,000? Mirror Vax 15:06, 24 October 2005 (UTC)[reply]
Which part of "I think there was a little of each" did you not understand? He didn't say that the part number was based exclusively on the MC6800. And given your insistence on authoritative information, where is the authoritative source for the 70,000 count? --Brouhaha 19:03, 24 October 2005 (UTC)[reply]
Good question. As I said, I don't know if it's true or not. Mirror Vax 19:29, 24 October 2005 (UTC)[reply]

Motorola 6800

What about the Motorola 6800 (one zero less)? --Abdull 13:12, 2 October 2005 (UTC)[reply]

What about it? --Brouhaha 01:03, 17 October 2005 (UTC)[reply]
The Motorola 6800 is an 8-bit CPU.

Talking about claims

The article says: "Originally, the MC68000 was designed for use in household products (at least, that is what an internal memo from Motorola claimed)." I very much doubt this. What sort of household product would need the computing power of the 68k? The 68k was totally state of the art wrt complexity, pin count and chip area at the time, with a price to match. (I would have believed the above statement if we are talking about the MC6800, but that is another issue). -- Egil 05:59, 17 October 2005 (UTC)[reply]

So, how many bits?

To help clarify this, is the 68000 code word size 16 or 32 bits wide? --Arny 09:03, 30 January 2006 (UTC)[reply]

Do you mean the size of the instructions? They could vary from 16 bits (e.g. 0x4e75 for RTS, 0x4e71 for NOP, and 0x60xx for short relative branches) to 80 bits (0x23f9aaaaaaaa55555555 for MOVE.L $AAAAAAAA, $55555555). The data bus for reading/writing to memory was 16 bits wide, and the registers A0-A7 and D0-D7 were 32 bits wide. Cmdrjameson 14:17, 30 January 2006 (UTC)[reply]
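(The same encodings, laid out as assembly:)

    rts                           ; 0x4E75, one 16-bit word
    nop                           ; 0x4E71, one 16-bit word
    move.l  $AAAAAAAA,$55555555   ; 0x23F9 AAAA AAAA 5555 5555, five words (80 bits)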
I think the conventional view is that the 68000 is a 16-bit implementation of a 32-bit architecture; the later 68020, '030 and '040 are 32-bit implementations of the same architecture. This is what it basically says in my copy of "68000 primer" by Kelly-Bootle and Fowler. Graham 22:55, 30 January 2006 (UTC)[reply]
Yes, this is what I've heard too. I'm next to certain this is explicitly documented in Motorola's reference manuals about the 680x0. By way of contrast, the 68008 was also a 16-bit implementation of the same architecture, but this time in a smaller physical package and as a result had an 8-bit data bus, and only a 20-bit external address. Cmdrjameson 01:20, 31 January 2006 (UTC)[reply]
It was still a 16/32-bit chip though. The narrow bus was only to keep the physical pin count down. It used as many fetches as needed to bring in the data byte by byte on the 8 lines. Of course this made it slow but more than adequate for the applications it was intended for. I think this approach was really clever on the part of Moto - they allowed people to learn the instruction set once and apply it over a very wide range of different chips and applications. The same code would run unchanged on all varieties of the processor and the hardware just did what it needed to do to make it work. I guess it could be said that this was one of the first micros to be designed mainly from the software perspective rather than the hardware one. Graham 01:27, 31 January 2006 (UTC)[reply]
Oh absolutely, it was still definitely a 16/32-bit chip. Mind you, this notion of having a common instruction set/architecture and a large range of implementations with different price/performance characteristics wasn't unique to Motorola. DEC differentiated the VAX product line with horribly slow but cheap implementation in the 11/730 vs the faster 11/780; and later with systems like the MicroVAX vs the 8650. And of course the granddaddy of them all is IBM's System/360 which did all this back in the 60s... --Cmdrjameson 11:00, 31 January 2006 (UTC)[reply]
The 68000 was a mainframe on a chip ;-) Graham 11:10, 31 January 2006 (UTC)[reply]
Hardly. It may have been the first microprocessor to have an architecture similar to that of a mainframe CPU, but it didn't particularly have any of the other attributes of mainframes, nor was the raw computing performance comparable to a contemporary mainframe. That's not a dig at the 68000; it wasn't trying to be a mainframe, and it was definitely a best-in-class microprocessor for several years after its introduction.
Intel called their Intel iAPX 432 a "Micromainframe", and it had a few attributes that were more mainframe-like than the 68000, but its uniprocessor performance was significantly worse than the 68000's. --Brouhaha 23:09, 31 January 2006 (UTC)[reply]
Earnestness alert! I was joking. Graham 23:47, 31 January 2006 (UTC)[reply]


The article claims that the 68000 has 3 ALUs. This is completely wrong. It has a single 16-bit ALU. And this is probably the most important aspect that makes the 68000 a 16/32 chip, even internally.

32-bit ALU operations are performed internally with two 16-bit steps, using a carry when needed. 32-bit address calculations are performed using a separate simple AU unit. This unit is not an ALU. The only operation it can perform is a simple 32-bit addition. Ijor 19:40, 14 December 2006 (UTC)[reply]

If you look at a die photo there are three equal-sized ALUs. Multiplication in particular is handled by two ALUs chained together. Potatoswatter 10:36, 15 December 2006 (UTC)[reply]
Can you point to some document that states that those 3 sections are 3 ALUs, as you think? Can you point to any source describing that multiplication is performed by two ALUs? Can you explain, if multiplication uses two 16-bit ALUs, why it takes the number of cycles it does? Can you explain, if it has more than a single ALU, why logical 32-bit operations (that don't require a carry) take longer than 16-bit ones? If it has 3 ALUs, then can you explain the timing of the DIVx instructions? Can you explain the need to implement an ALU exclusively for address calculation, when all that is needed is a simple small addition? Ijor 17:01, 15 December 2006 (UTC)[reply]
I'm pretty busy so I can't do research for you, and I won't be around for the next month either. See The M68000 Family, vol. 1 by Hilf and Nausch. Microphotograph on p40. The low address word has a much smaller ALU. They describe in detail the layout of the microcode and nanocode ROMs and how the ALUs are ported to each other. There are two LSW ALUs and one MSW ALU. 70 cycles for multiplication = 16 instruction cycles * 4 cycles/instruction cycle + 6 cycles overhead. The ALUs form three 16 bit registers of internal state. I'd guess two are being used to calculate a running total, with the first operand latched into the low word ALU's input, and a left shift performed every insn cycle. The third ALU is used to right-shift through the second operand.
Don't underestimate the importance of instruction cycles as opposed to clock cycles. The ALUs just couldn't be programmed to do an operation every cycle. The above algo fits with the address ALU doing two additions per insn cycle and the data ALU being able to do right shifts one per insn cycle. The microcode needed one cycle to branch, assuming the operation was programmed as a loop. (Otherwise that cycle would be a conditional branch, so same difference.) No real activity could happen when the microcode state machine was dedicated to controlling itself. Making a microcoded ~68000 transistor machine do multiplication that fast is harder than it might sound.
Not fair to demand an explanation of division. Generally what you seem to be confused about is the fact the address ALU had to compute a 32 bit addition for every insn just to increment the PC. Potatoswatter 04:47, 16 December 2006 (UTC)[reply]
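(For readers following along, here is the garden-variety shift-add scheme being described, written out as ordinary 68000 code purely to show the shape of the algorithm; whether the microcode does anything like this is exactly what is disputed below.)

    moveq   #0,d2          ; product accumulator
    moveq   #0,d4
    move.w  d0,d4          ; 32-bit copy of the 16-bit multiplicand
    moveq   #15,d3         ; 16 iterations
@1: lsr.w   #1,d1          ; next multiplier bit into the carry
    bcc.s   @2             ; bit clear: skip the add
    add.l   d4,d2          ; bit set: accumulate shifted multiplicand
@2: add.l   d4,d4          ; shift multiplicand left one place
    dbra    d3,@1          ; afterwards d2 = d0.w * d1.w, unsigned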
I don't need you to make any research for me; I already did. I researched and investigated the 68000 far beyond what was ever done, at least in disclosed form. The questions I asked were rhetorical, just to prove my point. I already know the answers to all of them.
I don't have that book, but if it states that the 68000 has 3 ALUs, then the book is wrong. The book seems to be confusing an ALU with a simple AU. The 68000 32-bit AU, which indeed can be separated into two halves, is not an ALU. Among other things it can't perform logical operations, and it can't perform shifts or rotations. It can only perform a simple addition. That's why it is called an AU, and not an ALU. Btw, the term AU is used in Motorola documentation; it is not my own.
The above MUL algo is wrong, for starters because 70 cycles is only the maximum; the count depends on the operand bit pattern and is not fixed.
It is true that the ALU can't perform operations on every CPU cycle. Actually the 68000 needs a minimum of two cycles for any internal operation, not just ALU ones. But it is wrong that it needs an extra cycle to perform microcode branches. Microcode branches, conditional or not, don't take any overhead at all. The reason is that every microcode instruction performs a branch. There is no concept of a microcode PC. Every micro instruction must specify the next one.
Instructions must of course perform a 32-bit addition to compute the next PC. This is not an impediment, nor an explanation for why 32-bit operations take longer. The only explanation is that there is a single 16-bit ALU. See my article about the 68000 prefetch and cycle-by-cycle execution to understand why.
It is perfectly fair to ask about division. I solved all the details about the 68000 division algo already and published the results about a year ago. Can you or the authors of that book explain the exact timing of both DIV instructions for multiple ALUs?
As you can see by reading my articles, I know exactly the difference between a CPU clock cycle, a bus cycle, and a micro-cycle. Ijor 16:04, 16 December 2006 (UTC)[reply]
Some quotes from Motorola papers and patents: "A limited arithmetic unit is located in the HIGH and LOW sections, and a general capability arithmetic and logical unit is located in the DATA section. This allows address and data calculations to occur simultaneously. For example, it is possible to do a register-to-register word addition concurrently with a program counter increment".
I think this clearly shows that there is a single ALU, and that the other one is an AU. Again, it wouldn't make any sense to implement an ALU that would never perform any LOGICAL operations.
Btw, by re-reading your post it seems you think that one microcycle (what you call an instruction cycle) takes four CPU clock cycles. This is also wrong; it takes two clock cycles, not four. Ijor 04:18, 2 January 2007 (UTC)[reply]

Here's a bit of real history. (I was at Motorola in the 80's).


The original design was for a machine with 32 bit address registers and only 16 bit data registers, eight of each! The microcode writer convinced the powers that be to make it a 32-address/32-data machine.
This history is reflected in the fact that the high 16 bits of the 8 data registers are located beside the high 16 bits of the address registers and are physically located on the far side of the chip from the data ALU rather than beside it like the lower 16 bits are.
Yes, there were 3 math units. One 16 bit ALU and two 16 bit Address Units. The ALU was complex enough for the math instruction set while the AUs were only able to do address related math (add, subtract). And of course the 2 16 bit AUs worked together to make the 32 bit address calculations.
Therefore, to do a simple 16 bit math instruction, the ALU can do the operation while the AUs can perform address calculations during the same micro-cycle.
To perform a general 32 bit data operation, it is necessary to move the high 16 bit register data past/through the AUs to the ALU. This is why 32 bit ops take many more cycles than 16 bit ops.
One could therefore say it was a 16 bit processor pretending to be a 32 bit one. A real 32 bit ALU came out in the 68020.
BTW: If you look at a die, the top half is the microcode, the middle the random logic, and the bottom the registers and ALU/AUs. The bottom third is ordered left to right: high A & D registers, AUhigh, (X), AUlow, low A registers, (X), low D registers, ALU, I/O aligner. Where the Xs are gates (usually closed) that allow data to travel between the three areas, and the I/O aligner was the interface to the data pins with byte/word align logic.


Some more interesting History.


After making the 68000, IBM and Motorola got together and made the 360 on a chip. It was thought that because the 68000 core was '32 bits' and regular enough that this would be an easy task.
What they failed to realize was that the random logic in the middle of the chip would have to change considerably to make the 360 on a chip. This took more resources and time than Motorola expected, delaying the 68010 and 68020, giving Intel a better chance to jump ahead with the i286/i386. 71.231.43.16 (talk) 13:05, 30 November 2007 (UTC)HenriS[reply]

Very interesting historic tidbit about the 360. But regarding your comments about why 32-bit ALU operations take more cycles, I'm not sure that is correct. The main reason they take more is that, obviously, the ALU is 16 bits wide, so at least one extra micro-cycle is required. I don't think the higher 16 bits need to go through the AU to reach the ALU. It is true they are physically located nearer to the AU than to the ALU. But the internal buses connect all the sections. If that weren't true, then 32-bit ALU operations would take much longer than they actually do. Ijor (talk) 05:07, 6 December 2007 (UTC)[reply]

It is a 16 bit ALU. The multiplication was performed by a barrel shifter and was heralded as an innovation in microprocessor design. I don't know where all of this 3 ALUs BS is coming from but there was one 16 bit ALU in this chip. The MC68000 was a 16 bit machine. The MC68010 added virtual memory capability. The MC68020 was the first 32 bit family member. The MC68020 core is used as the basis for CPU32 designs. I know this machine very well. I've designed with it. I've written assemblers and debuggers for it and used one in my Atari ST. —Preceding unsigned comment added by 72.78.53.31 (talk) 21:18, 24 July 2010 (UTC)[reply]

Ijor, have you ever considered contributing something besides to this talk page? Potatoswatter (talk) 08:23, 6 December 2007 (UTC)[reply]
Hi Potatoswatter. I did, but only "considered", sorry. A possible useful contribution could be a link to my undocumented 68000 web page. If you like it, go ahead and put a link on the main page: http://pasti.fxatari.com/68kdocs/ Ijor (talk) 15:40, 6 December 2007 (UTC)[reply]

Content from Amiga virtual machine (AFD)

The article Amiga virtual machine has come to my attention as it was proposed for deletion today. While my take is that it should be deleted, I think it contains some useful information, and I've proceeded to add some of it (namely, the section originally titled Bytecodes) to the 68k article (is that a good name for an article, by the way?). Only later did I realize that this is possibly a more appropriate place for such content. However, at this point, I prefer to avoid further roaming of content until someone else reviews it and discusses about the most appropriate placement for it. LjL 20:16, 16 May 2006 (UTC)[reply]

68020 addressing modes influenced by printer applications?

I'm rather dubious of the claim added by Wayne Hardman that the addressing modes of the 68020 were influenced specifically by printer applications. Can anyone cite a reference? It seems much more likely that Motorola chose new addressing modes based on a survey of compiler-generated code for a wide variety of applications, and that some of those addressing modes happened to be fairly useful in printers (or graphic rendering in general). --Brouhaha 23:07, 21 August 2006 (UTC)[reply]

I question the premise that the new addressing modes are useful. This sort of feature creep is typical of microcoded CPUs and doesn't necessarily reflect any measurable benefit. Coldfire dumped the new modes (except for the scaled index feature). Mirror Vax 00:03, 22 August 2006 (UTC)[reply]
Obviously Motorola thought at the time that they would be useful. What I question is whether they were thought to be especially useful for printer applications. If no one can cite any reference for that, the sentence should be removed. --Brouhaha 22:23, 22 August 2006 (UTC)[reply]
I agree that the claim about 68020 addressing modes being influenced by printer applications is dubious, and even if true, it would be more appropriate for the 68020 article. I removed it as part of some other edits. --Colin Douglas Howell 03:31, 20 September 2006 (UTC)[reply]
From old Mac OS days, I remember most, if not all, of the addressing modes getting generated by the compiler with at least fair frequency on general OO code. As to how much faster it went or denser it was as a result, I don't know :v) . But can we all agree it's still less wacky than x86? Potatoswatter (talk) 08:31, 6 December 2007 (UTC)[reply]

Ti-calculators with M68k

Some TI calculators (Texas Instruments) use the Motorola 68000, e.g. the TI-89 Titanium (or original) and the TI-92/Voyage 200. How about adding info about them? --Red_Hat_Eagle 03:31, 30 October 2006 (UTC)[reply]

TI graphing calculators are already mentioned briefly, but I agree that a bit more detail should be added. --Colin Douglas Howell 06:39, 31 October 2006 (UTC)[reply]
I've added more detail on the 68000's use in TI calculators. --Colin Douglas Howell 22:19, 1 November 2006 (UTC)[reply]

History section needs to be restructured

The History section is currently too long and contains a mix of general historical info about the 68000 and descriptions of specific 68000 applications. I confess that I've recently made the problem worse. The section needs to be split into separate History and Applications sections, or something along those lines, but I'm not sure how best to go about this. There doesn't seem to be any agreed-upon structure for Wikipedia articles on microprocessors. (I know "Be Bold" is the Wikipedia motto, but I'm naturally oriented towards caution rather than boldness.) --Colin Douglas Howell 22:38, 1 November 2006 (UTC)[reply]

OK, I just went ahead and did it. There's still room for further improvement, of course. --Colin Douglas Howell 05:02, 2 November 2006 (UTC)[reply]
Nice work. Chris Cunningham 09:05, 2 November 2006 (UTC)[reply]

Article has vague "cheerleading" statements which need eliminating

This article has some vague statements with a sort of "cheerleading" tone. The earliest version of the article seems to have been written by a fan of the 68000 and contains a number of such statements, some of which are still in the current article. While I agree that the 68000 was a well-designed processor, I think that such POV expressions are out of line here; it's better to describe and explain the processor's advantages in a clear, unbiased way.

Here's one example which I've just removed, referring to uses of the 68000: "It also sees use by medical manufacturers and many printer manufacturers because of its low cost, convenience, and good stability." The 68000 is certainly cheap now, but no more so than many other processors and microcontrollers competing for these markets. Whether it is "convenient" can depend on all sorts of factors, such as "which microcontrollers contain the particular devices we need?" and "what architectures are our embedded programmers most fluent in?" As for "good stability", I'm sure that applies to most microcontrollers; the embedded market has little tolerance for flakiness. (I felt free to remove the sentence because the uses in medical and printer fields are now also mentioned elsewhere.) --Colin Douglas Howell 19:02, 3 November 2006 (UTC)[reply]

Another one I've just eliminated claimed that the 68000 family was popular in the Unix world "because the architecture so strongly resembles the Digital PDP-11 and VAX, and is an excellent computer for running C code". It's certainly true that 68000-based Unix systems were popular, but these claimed reasons for the popularity are hard to support. The 68000 did not "strongly resemble" either the PDP-11 or the VAX. True, it was a general-register architecture like those others, but there were lots of differences in detail. For example, the PDP-11 was purely 16-bit, had no separate address registers, and included the program counter as a general register. The VAX, on the other hand, had 16 general registers and was a three-address machine rather than a two-address machine like the 68000 and PDP-11. You could say that the 68000 had some limited general similarity to the PDP-11 and VAX, but it's not clear that was important for its popularity in Unix systems.

As for "an excellent computer for running C code", that statement doesn't even seem meaningful. The 68000 was no better for running compiled C code than many other architectures. It's true that C programmers on the 8086 and 80286 might have had to worry about far and near pointers, but even that deficiency was eliminated with the 80386. In any case, architectural issues are normally the C compiler's problem, not the user's, and again it's not clear whether such concerns were important in the popularity of 68000-based Unix. --Colin Douglas Howell 21:12, 3 November 2006 (UTC)[reply]

Removed unverifiable statement about recent Hitachi 68000 production

I've removed this statement: "As of 2001, Hitachi continued to manufacture the 68000 under license", because I can't find any information backing it up. Although I know that Hitachi made 68000 versions, I can't find anything about 68000 production on their web sites, either on the current pages or in the Internet Archive. If someone could find some sources of information about this, it would be a big help. --Colin Douglas Howell 07:09, 19 November 2006 (UTC)[reply]

Same pinout as ever?

Just wondering if today's 68K is the same size as the monster was in my Mac and Genesis? --24.249.108.133 23:42, 28 December 2006 (UTC)[reply]

'Today's' 68K CPUs are not pin-compatible; the pinout changed with the 68020.
It looks like the DIP package is discontinued, with only pin grid array and quad edge packages produced. You can still find the old chips on the surplus market, though the price is now higher. Chip package evolution is driven by board process evolution (it's more expensive to use a "monster" chip, so if you did you'd upgrade your factory and buy the new package), and environmental regulations (the newer packages are less toxic). Potatoswatter (talk) 08:40, 6 December 2007 (UTC)[reply]

Usefulness of PC-relative mode

The article makes the statement:

   * 16-bit signed offset, e.g. 16(PC). This mode was very useful.

Very useful for what? It seems like it'd be useful for position-independent code, but not much else. Since its usefulness is non-obvious, should this be called out, or should we remove this editorialization? --Mr z 20:15, 9 August 2007 (UTC)[reply]
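(For reference, the position-independent idiom presumably meant; label names are mine:)

    lea     table(pc),a0    ; address of nearby data computed relative to
                            ; the PC, so the code runs at any load address
    move.w  table(pc),d0    ; same idea as a direct load
table:
    dc.w    $1234           ; some local constant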

Amiga 500 processor

The Amiga 500 article makes a very precise but unsourced claim of 7.15909 MHz for the 68000, but this article makes no mention of the 68000 being manufactured at this speed. Is this correct? Miremare 18:20, 13 August 2007 (UTC)[reply]

Slight underclocking. The NTSC color subcarrier frequency is 3.579545 MHz; the CPU speed (on NTSC models) is exactly twice that. (83.245.252.197 22:47, 31 August 2007 (UTC))[reply]
The PAL version ran at 8/5 of the PAL carrier (4.43361875 MHz), i.e. at 7.09379 MHz. Both versions used an 8 MHz part. —Preceding unsigned comment added by 193.27.220.14 (talk) 12:25, 6 August 2008 (UTC)[reply]
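(The arithmetic, for anyone checking the Amiga 500 figure:)

    NTSC: 2 x 3.579545 MHz = 7.159090 MHz
    PAL:  (8/5) x 4.43361875 MHz = 7.0937900 MHz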

With only 56 instructions the minimal instruction size was huge for its day at 16 bits.

What does this mean? The Strela, 1953, had a minimum instruction size of 36 bits. The PDP-10 (from the early 60's) also. PDP-11 instructions are always 16 bits (because everything is always contained in one word: 1 opcode, up to 2 operands, each including a register and an addressing mode)... I have to regard this statement as a mistake, unless it is supposed to say that the minimum opcode length is 16 bits, which sounds implausible.

History section dispute

Theaveng, Intel and the 68000 are not analogous to the VHS and Betamax, and comparisons between those two products don't go into details non-techies can't follow.

Let's take a closer look at the current history section and I'll tell you some of my troubles with it.

The 68000 grew out of the MACSS (Motorola Advanced Computer System on Silicon) project, begun in 1976. One of the early decisions made was to develop a new "no compromise" architecture that paid little attention to backward compatibility. This was a gamble because it would mean adopters of the chip would have to learn the new system entirely from scratch. In the end, the only concession to backward compatibility was a hardware one: the 68k could interface to existing 6800 peripheral devices, but could not run 6800 code. However, the designers built in a tremendous amount of forward compatibility, which paid off once 68k expertise had been gained by end users. For instance, the CPU registers were 32-bits wide, though address and data buses outside the CPU were initially narrower. In contrast the Intel 8086/8088 were 16-bits wide internally.

With the exception of the register file, the 68000 was also 16 bits wide internally. Also, this is a typical throwaway statement that doesn't quite fit. Making random comparisons with Intel CPUs just diffuses what the article is actually trying to tell. Also note that the word tremendous is a clear case of POV.

At the time, there was fierce competition among several of the then established manufacturers of 8-bit processors to bring out 16-bit designs. National Semiconductor had been first with its IMP-16 and PACE processors in 1973-1975, but these had issues with speed. Texas Instruments was next, with its TMS9900, but its CPU was not widely accepted. Next came Intel with the Intel 8086/8088 in 1977/78. However, Motorola marketing stressed the (true) point that the 68000 was a much more complete 16-bit design than the others. In fact, it is an implementation of a 32-bit architecture on a 16-bit CPU. This was reflected in its complexity. The transistor cell count, which was then a fairly direct measure of power in that era, was more than twice that of the 29,000 of the 8086.

Okay, but it centers a bit too much on 'other CPUs'. Let's split this up into two paragraphs, one discussing all the other CPUs and another for 68000 marketing. As for the transistor count comparison, the article has already touched upon transistor count; delving back into this topic is a typical amateur mistake. And why are we comparing with the two-year-older Intel architecture? The Z8000 was closer in age, 16-bit, and like the 68000 not 'backwards constrained'.

The simplest 68000 instructions took four clock cycles, but the most complex ones required many more. An 8 MHz 68000 had an average performance of roughly 1 MIPS.

Nice to know, but it's just random technical dribble that's not discussed further. It should be cut if that improves the article's readability.

On average, the 68000's instructions in typical program code did more work per instruction than competing Intel processors, which meant that 68000 designs both needed less RAM to store machine code and were faster.[citation needed] Additionally, the 68000 offered a "flat" 24-bit addressing system supporting up to 16 MB of memory; at the time, this was a very large memory space. Intel chips instead used a segmented system which was much more difficult to program.

much more difficult is clearly POV. The RAM/faster statement is uncited, vague, and only seems to exist to tell us that the 68000 CPU was better than competing Intel CPUs (which is wrong - later Intel CPUs were faster). It says nothing about other CPU archs and leaves it up to the reader to figure out what historical impact this had.

Also, keep in mind that the 68000 had strong competitors in the Z8000, NS16032, iAPX 432 and other contenders to the title of CPU of the eighties. The article as it currently stands does a poor job of detailing how the 68000 fared technically against these CPUs. Why single out the Intel 8086? The 8086 was two years older, had starkly different design goals, and would have faded away if it hadn't gotten on the IBM rocket ship.

That said, I say again that technical details should not be the focus of the history section. The history of the 68000 CPU can be discussed without readers knowing about bit width, transistor count, address range, clock cycles, etc. These are technical terms that fly over many people's heads - rightfully so - and we should make an effort to keep them to a more technically focused section of the article. If we do feel the need to mention, say, clock cycles, then we should (within reason) give a short explanation of just what that means.

--Anss123 22:41, 2 October 2007 (UTC)[reply]


Well then EDIT it to eliminate words like "massive" and anything else you think seems biased. And move the technical stuff out of history & into a subsection titled technology. I'd have no objection to that. ----- But what you did was basically excise half the article (including the Intel comparisons, which I liked), and I object to throwing away information so indiscriminately (it's such purges that led to the loss of early BBC tv shows & early silent movies) (because someone carelessly threw them in the trash). EDIT to improve; don't do massive deletions, is the wiki mantra.
Also I suggest you take a look at the Betamax article. It does indeed do technical comparisons to VHS (comparing different video resolutions and frequencies). And I love that article, because it's a great resource. So too is this 68000 article. Please don't turn it into a worthless fluff piece that has no technical value whatsoever. - Theaveng 13:51, 3 October 2007 (UTC)

It's okay now, but a while ago I read this page and I noticed how much fanboyism was in it. A few seconds ago I read the page and noticed that all the fanboyism has already been taken out. I'm sorry I deleted your page and put my opinion about it. It's just that it was not the best processor out there like the article said it was back a few months ago. I'm sorry I did not reread it to see if it had already been edited. I thought it was the same old fanboy-made article that was here back in September.

What I'm sick of hearing is Sega fanboys claiming that their processor was like god and Nintendo's processor was a piece of poop. I hear that all day long and I'm sick of it. Wikipedia supporting their opinion makes it even worse.

Thank you for fanboy-proofing your article.

By the way, the "4 cycles to add bytes, 8 cycles to add words, 4 cycles to access memory, not at the same time" stuff is true and is a major hit to its CPU performance. Most other CPUs of its time (e.g. 6502, 65C816, Z80, 6809, ARM, etc.) didn't have those cycle limitations. This makes this processor very weak compared to others.

The 65C816 is one of the most underrated processors of all time. Sure, the 68000 has a lot more internal registers, but it has the above performance problem. The 65C816 doesn't have that performance problem, so it's able to access external memory faster than the 68000 could even access internal memory.

Nintendo is as fast as Sega because of the above 68000-only "bandwidth" limitation that almost no other processor, including the 65C816, had.

—Preceding unsigned comment added by 75.58.34.62 (talk) 02:47, 24 November 2007 (UTC)[reply]

Yes, those evil Sega fanboys! That said, the SNES CPU is slower. At full speed the SNES CPU runs at 3.5 MHz - less than half of the Genny - but when accessing ROM, video or other hardware it slows down to 1.5 MHz or 2.5 MHz depending on what it accesses. Further troubling it is its limited register set, resulting in programs often needing more instructions to be equivalent with the 68000. Worse, there's a 2-cycle penalty for fetching 16-bit words on the SNES (8-bit data bus), whereas the Genny might pull it off in one cycle with a little luck. At the end of the day the Genny CPU is significantly faster, even if some Genny games suffer from slowdowns.
--Anss123 (talk) 10:25, 24 November 2007 (UTC)[reply]

Stop, just stop!!! You didn't read my post: the 68000 takes four clock cycles to access memory and do addition on bytes, and cannot do both at the same time. The 65C816 takes only one cycle to do both and can do them simultaneously.

And all this "drops down to 2.5 megahertz when accessing certain parts of memory" proves nothing, because a good programmer would avoid using that part of memory anyway.

Plus it has an H-DMA chip that does all the raster-interrupt VRAM loading, so the CPU doesn't have to worry about it. Something the Sega Genesis doesn't have.

And let me remind you that the SNES uses a specialized version of the 65C816 that has two buses that can be used at the same time and many extra "external registers" built on the inside of the chip, so it can access them even faster.

Edit: by the way, can you tell me one way you could possibly pull off something on the 68000 in 1 cycle when the shortest opcode takes 4? —Preceding unsigned comment added by 75.58.75.4 (talk) 19:59, 24 November 2007 (UTC)[reply]

I was talking about fetching 16-bit words. A Genny can pull that off in one cycle as it has a 16-bit data bus, whereas the SNES has an 8-bit data bus and needs at the very least two cycles.
The two address buses in the SNES are not data buses. When you say both can be used at the same time, I believe you refer to when the SNES does DMA. Both buses are in use then, with the system running at 2.5 MHz. If you use the 8-bit address bus the CPU slows down to about 1.5 MHz; it's used for reading the controller and audio chip among other things.
Early SNES games ran the system at 2.5 MHz so that they could use cheaper ROM chips. Later games allow the CPU to run at 3.5 MHz, but not when accessing video hardware, audio hardware, etc.
--Anss123 (talk) 21:30, 24 November 2007 (UTC)[reply]

Give me a link to that information please, about it slowing down to 1.5 MHz when accessing the video and audio chips. And NO, Wikipedia articles are NOT proof, because they can be manipulated by anyone, and especially you. You'll never find detailed hardware-level information about it because you made that up. I'll find detailed hardware information on the SNES's memory map and it will completely agree with me.

And you still fail to name a 68000 opcode that takes less than four cycles.

edit: I found real proof before you found your own bs. www.romhacking.net/docs/memmap.txt

There you go, BUS-B is on fast speed. The only slow part of memory is the W-RAM, which is the part that saves game files. These registers only have to be written to after levels.

I did not say it slowed down to 1.5 MHz when accessing the video chip; I said it slowed down to 1.5 MHz when using the 8-bit bus to access the controllers and audio chip. When working with the video chip you generally want to perform memory copies, and that is most efficiently done with DMA. When doing DMA the system is slowed to 2.5 MHz; the CPU is actually halted. I believe romhacking is used as a source for this on Wikipedia; the document you referenced is written by the same user that wrote the Wikipedia stuff (user:Anomie).
Oh, and about that opcode. Why are you insisting that I name a one-cycle opcode? I never claimed there was a one-cycle opcode. I did claim that a 68K can perform a 16-bit fetch in one cycle, whereas the SNES needs two cycles. This is due to Nintendo's decision to use an 8-bit data bus on the SNES; the 65C816 can be used with a 16-bit data bus too - but not on the SNES.--Anss123 (talk) 13:34, 25 November 2007 (UTC)[reply]

Yeah, there is no one-cycle opcode in its entire instruction set. The shortest opcode takes four. Yes, theoretically it can access 16 bits per cycle, but in reality it can't, because there is no opcode that enables it to do that. I don't want to break your fantasy, but the chip's microcoding is really bad. Sure, Motorola could've done a lot of nice stuff with the same architecture, but the microcode didn't utilize its own hardware.

The 65C816's microcode did a much better job of utilizing the hardware it's running on. The 65C816 chip actually accesses memory every cycle. The 68000 can only access memory every 4 clock cycles because of poor decisions in microcoding.

Please stop using this theoretical philosophy, because we do not live in a theoretical world.

case closed, good night —Preceding unsigned comment added by 75.57.173.83 (talk) 03:39, 26 November 2007 (UTC)[reply]

Anon dude: if either the 68000 or the 65816 were that bad, they wouldn't both have so many design wins. The 68000 accessed the bus every 4 cycles, but it was clocked faster, finally over 60 MHz, and had better compilers. In the end it was used in more applications, notably the Palm series of PDAs and the TI calculators in this decade.
It is generally pointless (and thus unencyclopaedic) to compare competing products on aesthetic merits. The marketplace does that for us, and decides what aesthetics actually matter. (That marketplace consists of engineers who decide which chip is better for actual applications.) Nintendo chose the 65816. Sega chose the 68000. Apple chose both but knew one was a dead end. Behind each of those decisions was a broad discussion including the issues you've brought up. So let's make use of the research of others and stop clogging this talk page. Potatoswatter (talk) 04:15, 26 November 2007 (UTC)[reply]

Hmm, I don't want to enter the useless debate; just a few technical corrections. The 68000 can't access 16 bits per cycle, not in practice and not in theory. This has nothing to do with the microcode or opcodes. As already said by others, a 68000 bus cycle takes 4 clock cycles. No external access can take less than that.

It is interesting that you think the 68000 microcode is bad. Either you are trolling, or most of the industry didn't (and doesn't) agree with you. And again, the 4 clock cycles per bus cycle have nothing to do with microcode.

It is also wrong that it can't perform math operations and memory access at the same time. It can. Ijor (talk) 05:00, 6 December 2007 (UTC)[reply]

The thing I'm asking is why everybody treats the "every four cycles" limitation as such a tiny and inconsequential problem. It makes it FOUR times slower than every other 16/32-bit processor, giving it the equivalent performance of a 4/8-bit processor.
Because then the clock rate gets increased. And that's not how bits work. Potatoswatter (talk) 00:02, 15 December 2007 (UTC)

And what makes you think its pros are sooooooooo much bigger than its cons? I made a list of pros and cons of both CPUs that you deleted because you think I'm trolling you. I am most certainly not trolling you. You're the one who is trolling. My post was perfectly unopinionated, and you came here and deleted it because you think your opinion is always the best, while I only displayed mathematically proven facts, and even gave you an overlooked fact about the Sega license programming quality-control policy, which stated that every game developed on the Sega Genesis had to be cycle-counted or else no release. I read that in an old Nintendo Power magazine from 1995. Thankfully I have my pro and con list saved on my computer. Here it is:

65816 PRO: it can access memory every cycle

68000 CON: it can access memory only every four cycles

Here it seems like the 65816 goes 4 times faster, but I'll go on.

65816 CON: it only has an 8 bit data bus

68000 PRO: it has a data bus 16 bits wide

Okay, now the 68000 is twice as much as a fourth of a 65816, which means it's half as fast.

65816 CON: it runs at 3.5 MHz in the SNES

68000 PRO: it runs at 7 MHz in the Sega

Now the 68000 is running twice as fast as half as fast as the 65816, which means they are both running at the same speed now.

65816 PRO: even though the real bandwidth is the same, this chip can use its accessed bits more wisely. Such as:

-instructions are 8 bits long

-data words can be 8 or 16 bits long

-addresses can be 8, 16, or 24 bits long.

It takes a much shorter time to load instructions, data, and addresses, because they can use fewer bits than on the 68000.

68000 CON: it just can't get as much information out of its bits as the 65816 can.

-instructions take up 16 bits

-data words take up 16 bits

-addresses take up 32 bits.

Because they use more bits, they take more time to load.

Ouch, the 68000 just isn't very good at utilizing its bytes like the 65816 does, but there is still one more thing to talk about:

65816 CON: has an accumulator, two other multi-purpose registers, a stack pointer, a few page/bank select registers, a program counter, and THAT'S IT?

68000 PRO: it has 8 multipurpose registers, 7 address registers, a stack pointer, and a program counter.

The 68000 has more registers and the 65816 has better bandwidth. End of story —Preceding unsigned comment added by 75.58.58.238 (talkcontribs)

It wasn't Potatoswatter who removed your post, it was me. Not because it was opinionated, which is fine, but because of its overly confrontational tone and use of labels such as "stupid idiots" and "freakin idiots" to describe users who disagree with you and "crap" to describe their opinions. Please see WP:CIVIL and WP:NPA. Miremare 00:48, 17 December 2007 (UTC)[reply]

I'd like to correct the misunderstanding that the 65816 uses microcode. One of the reasons it can access the bus every cycle lies in the fact that it does not utilize microcode. The PLA performs both state tracking and state decoding using a fairly sizable AND/OR matrix. -- Samuel A. Falvo II, 2010 July 11. —Preceding unsigned comment added by 173.11.86.21 (talk) 16:35, 11 July 2010 (UTC)[reply]

Optimized code

Maybe it would be an idea to add more information about assembly instructions that achieved the same result but faster? E.g., if I remember correctly, sub.l d0,d0 was much faster than clr.l d0, but I don't know this for sure! —Preceding unsigned comment added by 80.213.173.129 (talk) 16:52, 23 November 2009 (UTC)[reply]
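For what it's worth, here is a sketch of the usual register-clearing idioms with the cycle counts as I remember them from the 68000 timing tables (someone should verify against the user's manual before putting any of this in the article):

    clr.l   d0      ; 6 cycles; on the original 68000, CLR also does a
                    ; read before the write, which hurts for memory operands
    moveq   #0,d0   ; 4 cycles; the classic optimization for data registers
    sub.l   d0,d0   ; 8 cycles; a register-to-register SUB.L, so not faster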

Counting general registers

Currently the infobox says 8 general registers, which is imho fairly misleading. The address registers, while not omnipotent, are still much more "general" than the "general registers" of most RISC CPUs. They can be used as a source or destination operand in complex arithmetic expressions and can load/store values from memory with few restrictions (many fewer than a typical RISC general register). Opinions? Richiez (talk) 09:57, 6 February 2011 (UTC)[reply]

I agree that D0-D7 and A0-A7 are general purpose registers by any sane definition of the term.
Right now the page reads...
gpr = 8 × 32-bit + 7 address registers also usable for most operations + stack pointer.


Before it was simply...
gpr = 8 × 32-bit


I think it should be...
sr/ccr = 1 × 16-bit
ssp = 1 × 32-bit
msp = 1 × 32-bit (68020 and higher only)
gpr(total) = 16 × 32-bit, consisting of:
gpr/data = 8 × 32-bit
gpr/address = 7 × 32-bit
gpr/usp = 1 × 32-bit


It has been a while since I programmed a 68000, but I believe the register count is as follows:
8 x 32-bit registers, D0-D7, usable as data or general purpose registers
7 x 32-bit registers, A0-A6, usable as address or general purpose registers
1 x 32-bit register, A7, usable as the user stack pointer or a general purpose register
1 x 32-bit Interrupt Stack Pointer, AKA System Stack Pointer (not usable as a general register and not accessible in user mode)
(68020 and higher) 1 x 32-bit Master Stack Pointer (not usable as a general register and not accessible in user mode)
1 x 32-bit (only 24 bits are used to generate RAM addresses) Program Counter
1 x 16-bit Status Register / Condition Code Register (16 bits accessible in supervisor mode, lower 8 bits accessible in user mode)
Corrections/comments are welcome. In particular, did any of the 68000 parts use memory management to make use of the top bits of the program counter?

BTW, http://en.wikibooks.org/wiki/68000_Assembly is available under the Creative Commons Attribution-ShareAlike License, so we can lift anything we want (with attribution) from it and use it here. Guy Macon 20:07, 6 February 2011 (UTC)[reply]

My change was just a quick fix to the infobox; I later noticed that the 68k family page has something else again... it does not seem in any way uniform across the various architectures. So it should at least be informative - and consistent with M68k. Not sure what exactly you mean by memory management in the top bits of the PC; many systems of the time abused those bits for purposes which turned out to be incompatible with later changes. Otherwise your list seems correct, but I am not sure it fits into the infobox. Richiez (talk) 00:42, 7 February 2011 (UTC)[reply]

All CPU's are 8 bit

"8 × 32-bit + 7 address registers also usable for most operations + stack pointer"

It can't be. There should be 8 32-bit address registers and 7 32-bit data registers. To be honest, though, I think all datasheets after the very first ones are nonsense; only those very first ones are correct, in my opinion. More and more nonsense gets put into newer and newer CPU manuals (like cache, which doesn't exist; no wonder the 486 has 8k of cache, which could mean 8 registers with k equal to 1 - i.e. lying). There is a very important point: if there is an 8-bit data bus, then you can use an address bus of 16 bits or 32 bits and it will work without any changes to the instructions at all! That is why I think all CPUs must be 8-bit (have an 8-bit data bus). A coprocessor like the Intel 8087 paired with an Intel 8086 CPU has an 80-bit or 64-bit data bus to memory. So there is a strong possibility that the same memory module (of 16 chips) has half its chips for the CPU and half for the coprocessor, and that the CPU and coprocessor also communicate over an 8-bit data bus. Only after the data is loaded into the coprocessor registers (and then maybe into coprocessor RAM) is it in 80-bit or 64-bit format. CPU registers are 16 bits on most CPUs like the Intel 8080, but the data bus is 8 bits (two cycles are needed to load data into a CPU register).
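For example, loading the 16-bit HL pair on the 8080 works exactly that way:

 LXI H,1234h   ; 3-byte instruction: opcode, then low byte, then high byte, each fetched over the 8-bit bus

(LXI H takes 10 clock cycles in total, per the standard 8080 tables.)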


Here is a good proposal for what cache can be (page 73 in Acrobat Reader, or 61 as printed; 5.6 ZERO PAGE ADDRESSING). This is about the 6502 CPU, which (or something similar) is used in the NES. The 6502 has a 16-bit address bus and an 8-bit data bus. To address memory, the lower 8 bits of the address are sent (after the opcode) through the data bus, and in the third cycle (the first cycle being for the opcode) the upper 8 bits are sent. So if those upper bits are 00000000, memory is accessed faster (only two cycles are needed instead of 3). Memory thus has 256 pages, each page holding 256 8-bit locations. The page accessed with high byte 00000000 can, in my understanding, act as a cache, because it is recommended to use that first page for frequently used variables or constants. From this it is not hard to see that if you have memory with bigger pages, you get faster instruction or program execution. Maybe this is also the meaning of DDR SDRAM (DDR, double data rate [for the first page]).
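This is the standard 6502 zero-page trade-off (a sketch; the example addresses are made up):

 LDA $12     ; zero page: 2-byte instruction, 3 cycles
 LDA $0312   ; absolute:  3-byte instruction, 4 cycles

The saving is one operand byte fewer to fetch, hence one cycle fewer.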
Even a 64-bit CPU like the Core i7 has a 16-bit segment and a 64-bit offset: 16 bits to choose the memory (RAM) page and 64 bits for the address within the chosen page. And if the page address is 0000000000000000, then it is also 1 cycle faster. Maybe this is what cache is about. Or maybe the cache consists of RAM pages on the [CPU] chip? Maybe instead of the first RAM pages, the CPU puts all the data into caches, and when the page address is too big, it is not cache on the CPU chip but external RAM.
Another interesting thing: 6502 addresses 0-21845 are used for RAM, addresses 21845-43690 for I/O, and addresses 43690-65536 for ROM. — Preceding unsigned comment added by Paraboloid01 (talkcontribs) 21:00, 15 September 2012 (UTC)[reply]
Please note that this discussion page is for discussing improvements to the article. It is not a general forum for discussing the subject. 86.130.168.254 (talk) 12:22, 17 September 2012 (UTC)[reply]
You see, I counted the number of wires going to each of the 16 RAM chips (DDR2-800 [400 MHz] in my case). The number of wires is, hard to tell, at most 31-40 (but that is unrealistic, because I counted almost half of them from the other side as different wires, so 40 is a super-maximum guess) and at minimum about 16-20 wires. To implement a design with a 64-bit data bus and a 64-bit address bus, EACH of the 16 chips would need 64 wires for the address bus and 4 wires for the data bus (4 × 16 = 64). My computer has a 64-bit CPU and the DDR2 is likewise made for a 64-bit CPU. The Intel datasheet also says that instead of a 16-bit segment selector and a 32-bit offset, there is a 16-bit segment selector and a 64-bit offset to address memory [in 64-bit mode]. So it appears that in 64-bit mode more cycles are needed to address memory than in 32-bit mode (as on the Pentium, Pentium II, Pentium III, Pentium 4). So maybe everything is OK and you just need to waste more cycles on initialization in 64-bit mode than in 32-bit mode, but that is not what should follow from the Intel datasheet. BTW, old RAM chips (for the Pentium II) have more than 36 wires each (actually I only counted more than 36 pins on the old RAM chips, but the number of wires looks consistent) on each of the 16 chips (for 32-bit RAM that is 32 wires for address and 4 wires for data per chip of 16; actually a combination like 4 [RAM] chips with 32 wires for address and 16 wires for data would also work). So maybe there really is an 8-bit data bus and an 8-bit address bus even on the newest RAM (the RAM address being chosen by sending 8 bits twice through the data bus, addressing 2^(8+8) = 65536 bytes). Maybe now Bill Gates' phrase "nobody will need more than 64 KB of memory" doesn't sound so funny? — Preceding unsigned comment added by Paraboloid01 (talkcontribs) 17:08, 9 October 2012 (UTC)[reply]
One explanation I found is that on a 32-bit CPU you can store 2^32 = 4294967296 8-bit integers, or 4294967296 16-bit integers, or 4294967296 32-bit integers, or 4294967296 64-bit integers. No matter what you are storing (8-bit integers or 64-bit integers), there are still only 4294967296 places (I am not counting the use of the 16-bit segment selector). So without the segment selector, a 32-bit CPU can store 4294967296 (addresses) × 64 (bits) = 274877906944 bits of data, or 274877906944 / 8 = 34359738368 bytes = 32 GB (32 × 1024 × 1024 × 1024 bytes). I have only a 1 GB DDR2 RAM module (so 16 chips of 64 MB each). 1 GB = 8 Gb, and 8192 (Mb) / 64 (bit) = ~128 million addresses, and 2^27 = 134217728 addresses. So it turns out that whether your CPU is 32-bit or 64-bit, 27 wires are enough. That gives the combination of 27 wires for the address bus and 4 wires for the data bus per chip (each of the 16 chips MUST have the 27 address wires), i.e. 27 + 4 = 31 wires, not counting the GND (-) wire. Simply put, maybe more address wires are just not needed if the memory doesn't have that many addresses (I mean, if the memory has not 2^32 = 4294967296 addresses but 2^27, as with my 1 GB RAM module). This 31-wire theory looks very convincing, but I feel there are only about 16-24 wires per chip. Maybe I am wrong, or maybe there really is just an 8-bit CPU, or 3 cycles with a 10-bit directory, 10 bits of something in the directory, and 12 bits of something with the page, 10 + 10 + 12 = 32; only I don't know how to explain that for 64-bit RAM addressing. Maybe it is not about segment and offset, but then why don't they make up their minds about what they are trying to say about linear-to-physical addresses, and then again about the segment selector and offset? And how do they keep compatibility with older CPUs like the 8086/8080, if they are moving away from the 16-bit segment selector and 32-bit offset, versus the 8-bit segment selector and 8-bit offset of the 8080, or the 16-bit segment selector and 16-bit offset of the 286? — Preceding unsigned comment added by Paraboloid01 (talkcontribs) 19:02, 9 October 2012 (UTC)[reply]
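Under the 64-bits-per-address assumption, the 27-wire arithmetic itself does check out:

 1 GB = 2^30 bytes = 2^33 bits
 2^33 bits ÷ 64 bits per location = 2^27 locations
 2^27 locations → 27 address lines

Real DRAM, though, multiplexes the row and column halves of the address over the same pins, which is why far fewer address wires are visible on the chips than a flat decode would need.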
BTW the 286 theory, with a 16-bit address passed twice and a 16-bit data bus, looks quite convincing. Then it can address 2^32 addresses, or 2^32 × 64 = 274877906944 bits = 34359738368 bytes = 32 GB. The biggest memory (RAM) module now is 8 GB. And then there is 1 data wire per chip to form the 16-bit data bus, so (16 + 1) = 17 wires (18 with the minus wire, or 19 with "+" and "-" power supply) per each of the 16 RAM chips is convincing.