Template talk:CPU technologies
WikiProject Computing / Hardware (Rated Template-class, Top-importance)
SoC and Microcontroller
incorrect category names
"Microarchitecture" should be "architecture", and "pipelining" should be "parallelism". Micro-architecture refers to the electronic circuitry, not the logical organization; architecture is the word for the logical organization. And most of the items in the "pipelining" category have nothing to do with pipelines, but are rather parallelism/concurrency features, most of which are orthogonal to pipelines (orthogonal meaning they can be used with or without pipelines, and pipelines can be used with or without them). Kevin Baastalk 16:54, 24 February 2008 (UTC)
I think this template has grown too much as of late, branching out into highly peripheral topics, like software, packaging and examples within subcategories. I don't think this should be a be-all, end-all template encompassing everything relating to CPU technologies. It's quite a large topic and this template is not benefiting from this "feature creep". Keep the larger issues, skip the peripheral topics and the subtopics. -- Henriok (talk) 19:49, 11 September 2008 (UTC)
I am a bit confused about the categorization of entries in this template after it was edited by User:Ramu50, so I reverted it. It may have been confusing before, but it has become more confusing afterwards. There are too many "I don't even know where that came from(s)" to list here as justification, but here are two of them:
- MCM (multi chip module) and DCM (dual chip module) categorized in microarchitecture? Microarchitecture? These are packaging styles! Rilak (talk) 09:01, 16 September 2008 (UTC)
Notes about my edits
Rilak, seriously, if you don't understand how certain components work, don't revert instantly. ALUs and FPUs are subunits of processors, but they can be standalone too; a lot of Cyrix processors are FPU-based. Even a GPU (GPGPU) is a floating-point processor.
I put MCM and DCM packaging under multiprocessing because I think most people probably already know these are multicore processing technologies, and the architecture of multiprocessing, multitasking, scheduling and NUMA would already be familiar. But I guess it was probably a bad idea.
I put SSD there because, about three months ago, I saw that Intel might be implementing an SSD-based cache in the NUMA architecture in one of their roadmap PDFs, I believe. (Note: the picture only showed a concept; whether it is planned or not is not known to me). --Ramu50 (talk) 16:33, 18 September 2008 (UTC)
- Firstly, these Cyrix "FPU-based processors" are floating-point coprocessors, meaning that they require a host processor.
- Secondly, GPUs are not considered to be standalone processors and they are not just floating point processors, they contain a lot more hardware for the purpose of generating images, etc.
- Thirdly, I fail to see how a packaging technology (or style) such as MCM and DCM is a multiprocessing technology when it does not even process data in the first place! Such processing is done on the silicon dies, containing the actual processor(s), that are attached to the MCM or DCM. It would be like saying that the motherboard or PCB is a multiprocessing technology because it contains multiple processor sockets containing MCMs containing multiple dies. MCMs and DCMs might be used by multicore processors, but that does not mean that in order for a processor to be multicore, it must use an MCM or a DCM. Furthermore, I find it difficult to see how these packaging technologies (or styles) can be classified as a "multiprocessing" technology when they are used in applications other than computing, such as power components.
- Finally, one expects this template to include only general technologies and techniques. These SSDs were only mentioned in a future road map, by one vendor, so I fail to see any justification for inclusion. There are asynchronous CPU designs, some experimental implementations and a few commercial microprocessors where some elements can be considered to be asynchronous, but they are not mentioned in the template because they are not a particularly general technology, even though such designs can be considered to be more common than an SSD-based cache in a future, nonexistent product. The template might as well include every single conceivable technology used by CPUs, regardless of whether it is theoretical, experimental, etc. Organic logic gates anyone? Rilak (talk) 09:11, 19 September 2008 (UTC)
It is totally not true that FPU processors require a host processor; give a citation for that. A GPU itself is made up of SIMT (Single Instruction, Multiple Thread) SMs (shader processors) and doesn't require a host processor. They just have decoders and other units separated from them to achieve a SIMD array. Secondly, in case you didn't know, MCM and DCM packaging can limit integrated circuit packaging, thus disallowing physical cache coherency if wished.
A motherboard doesn't limit anything and is not part of any component of a computer; it is just a packaging technology, so don't use that example. Placing MCM and DCM there is just a reference.
The SSD point is not true; AMD, Intel, and Sun have all mentioned possibly implementing SSD caches before, but Intel was the only one that presented a conceptual diagram, in the Xeon roadmap.
Get real, idiot. OLEDs are logic devices, and they are not part of processors; anyone can implement them in a RAM if they want to. They have been implemented on keyboards; who are you to mock the technologies. --Ramu50 (talk) 22:46, 7 October 2008 (UTC)
- Firstly, this template discusses CPUs, as evidenced by the title of the template: Template:CPU technologies. Whether a GPU is standalone or made from FPUs is completely irrelevant.
- Secondly, MCMs and DCMs are packaging technologies. If you think otherwise, please provide multiple citations from scholarly papers or university-level or equivalent textbooks from reputable journals, publishers and authors.
- Thirdly, "SSD caches" is a "possible" future technology, as you yourself have mentioned. There is no need to mention nonexistent technologies as I have said so before, which was seconded by Henriok.
- Lastly, I never mentioned OLED (by which I assume you mean Organic Light Emitting Diode) and please cease your personal attacks or administrative action will be sought in accordance with WP:CIVIL.
- If you disagree with any of the points made about inclusion of material in this template, but not the point about ceasing personal attacks on other editors including myself, I suggest you start an RfC (WP:RfC) to resolve this instead of arguing with and attacking everyone you don't agree with. Rilak (talk) 04:46, 8 October 2008 (UTC)
I think it would be beneficial if this template stayed on-topic, so I have identified entries that I believe should be removed, instead of removing them straight away, because of the concerns of one editor. These entries are:
- Move "Microcontroller" and "SoC" to "Types" as they are not components. A "microcontroller" is a simple CPU for simple applications such as controlling a toaster. CPUs are not constructed out of microcontrollers so I fail to see how it can be categorized in "Components". A "SoC" is also a type of CPU containing the majority of the system either on-die or on-chip, it is not a component, CPUs are not constructed out of SoCs.
- Remove "FPGA", "ASIC" and "Logic Device" (which links to "PAL") as these are semiconductor technologies and integrated circuit types. CPUs might be built from these, but you don't see "Silicon Die" listed in this template, even though it is more obvious than these entries. "FPGA", "ASIC" and "PAL" are not strictly CPU technologies anyway, and neither is "Silicon Die", if anyone reads this and thinks it should be added.
- Move "ASIP" to "Types" as it is a type of processor, the term stands for "Application-Specific Instruction-set Processor" anyway so it should have been very obvious. You don't see CPUs built from ASIPs do you?
- Remove "Multiprocessing" as how a technique or an architecture (depending on how you look at it) can be categorized as a "component" is beyond any comprehension.
- Remove "MCM" and "DCM" as these are packaging technologies used by many things other than CPUs. You don't see entries such as DIP, PGA or Flip-chip here do you? That's because they are also packaging technologies used by many other things than CPUs.
- Remove "APM", "ACPI" and "DPMS". These are all used by computers, and while they might (I didn't bother to check through the five hundred pages or so of specifications for these standards) be used by microprocessors, it is irrelevant anyway, as they were designed for computer power management rather than exclusively or 90% for microprocessors. Why "DPMS" is included is actually beyond comprehension, as it stands for "VESA Display Power Management Signaling". In other words, it is for computer displays, a subject that is very distant from CPUs (unless you think the display is the CPU. If you did, the boxy thing beside the display is also not the CPU).
- Remove the entire "Parallelism|Types" section, as it consists of "Distributed computing", "Grid computing" and "Cloud computing", which are all system architectures.
- Remove "Task parallelism" as it is a programming concept rather than a CPU technology.
- Move "Parallelism|Threads" into "Parallelism|Level", as "thread-level parallelism" is a level of parallelism and the entries in "Parallelism|Threads" are implementations or sub-types of "thread-level parallelism". You don't see "pipelining" as its own category, as it is an implementation to exploit "instruction-level parallelism", which is an entry in "Parallelism|Level" as "Instruction".
- Remove "Parallelism|Logic" and its only entry, "Bitwise operation", as it is a logic operation and its link to parallelism is either non-existent or extremely weak.
After these entries have been moved to better categories or removed, I will once again review the template to ensure that everything is where it should be and that anything that should not be in the template will be nominated for removal, until this template is right. What does everyone think of this proposal?
With all this removal of entries, I think it should be balanced with the adding of entries. Why not add entries for "things" that CPUs and microprocessors are actually "made" of into the "Components" section such as Adder (electronics), Adder-subtracter, Binary multiplier and Multiplication ALU?
Finally, I think the template should be renamed. It is clear that the title has potential for confusion as it covers a topic that is too vast and it encourages adding every single technology invented that is used by CPUs regardless of whether it is theoretical, experimental, rare, one-of-a-kind, etc. Rilak (talk) 08:21, 21 September 2008 (UTC)
So architecture doesn't have 128-bit, because 128-bit is currently only used in graphics card stream processors.
DMA is explained below. The graphics processing unit is named GPGPU because it has some capability of processing audio due to its architectural processing characteristics (aka sampling).
Decoders refers to the CPU/FPU instruction decoders, not the complex multimedia codec decoders that are only present in TV tuners. Encoders refers to the multimedia encoders for encryption. --Ramu50 (talk) 20:29, 14 October 2008 (UTC)
- I think this is a completely specious point. This is another example of feature creep. It is not the purpose of a cross-reference template like this to include everything that anyone ever heard of that can possibly be related to the template topic. I think that anyone coming into this template hoping to learn about CPU technologies and clicking on the "Lambda calculus" link would end up more confused than before, particularly as that article does not mention CPUs or processors in any way and does not even include this template. And no, the right answer is not to edit the "Lambda calculus" article to "fix" these problems (they are not problems). Jeh (talk) 03:56, 16 October 2008 (UTC)
I had already contributed to the Larrabee article before you were even involved in this template. So whether I want to add that info or not is none of your business to be concerned with. I am not currently adding it back; rather, I am making a draft template. If there are too many lists, I will try to collapse them into relevant topics such as supercomputing, military design, etc. --Ramu50 (talk) 20:19, 20 October 2008 (UTC)
- Status: Discussion ended, resolved on talk page. Final decision: not adding.
So, after much thought on DMA and NUMA, I think DMA does belong to CPU technologies. There are some OS features, such as prefetching and TurboCache, that utilize the chipset's memory but are not controlled by the OS, but rather by the CPU / ASICs through the assistance of the northbridge and southbridge, and I think not all CPU components must reside within the CPU. For example, L3 MDRAM doesn't reside in the CPU in some cases.
GMA, which utilizes the system memory through MMIO mapping, can be considered an FPU technology. Note that even the CPU decoder architecture query uses LIFO and FIFO sometimes. I don't know how to explain it, but I think you'll understand better visually by looking at the IBM Power Architecture decoders. --Ramu50 (talk) 20:29, 14 October 2008 (UTC)
- Ramu50, I think that statements such as: "I think DMA does belong to CPU technologies" is in conflict with WP:OR.
- To quote from WP:OR: "Wikipedia does not publish original research or original thought. This includes unpublished facts, arguments, speculation, and ideas; and any unpublished analysis or synthesis of published material that serves to advance a position. This means that Wikipedia is not the place to publish your own opinions, experiences, or arguments." (Emphasis is original).
- Unless there is a reliable source, or multiple reliable sources, that state explicitly that DMA belongs in this category, then it should be removed, as multiple editors have concerns with the categorization. This applies to every other contested categorization as well. Rilak (talk) 05:39, 15 October 2008 (UTC)
DMA is an essential feature of all modern computers, as it allows devices to transfer data without subjecting the CPU to a heavy overhead. Otherwise, the CPU would have to copy each piece of data from the source to the destination, making the CPU unavailable for other tasks.
This statement (from the DMA article) shows the technology is designed for the CPU. Also, the cache coherency problems stated in the article prove that it is a parallel computing architecture concern, and parallel computing is a mechanism of CPU design. --Ramu50 (talk) 16:47, 15 October 2008 (UTC)
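The contrast drawn in the quoted DMA passage above can be sketched in C. This is an illustrative toy, not any real controller: the register struct, addresses, and function names are invented for the example.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical memory-mapped DMA controller registers; the layout is
 * invented for illustration and does not describe any real chipset. */
struct dma_regs {
    uint32_t src;    /* physical source address      */
    uint32_t dst;    /* physical destination address */
    uint32_t count;  /* number of bytes to transfer  */
    uint32_t ctrl;   /* bit 0 = start transfer       */
};

/* Programmed I/O: the CPU itself copies every word, and is busy
 * for the whole duration of the transfer. */
void pio_copy(uint32_t *dst, const uint32_t *src, size_t words)
{
    for (size_t i = 0; i < words; i++)
        dst[i] = src[i];
}

/* DMA: the CPU only writes a small descriptor into the controller's
 * registers; the controller then moves the data on its own while the
 * CPU is free to run other code. */
void dma_start(volatile struct dma_regs *regs,
               uint32_t src, uint32_t dst, uint32_t count)
{
    regs->src   = src;
    regs->dst   = dst;
    regs->count = count;
    regs->ctrl  = 1u;  /* kick off the transfer */
}
```

The point of contention in this thread is visible in the sketch itself: `dma_start` runs on the CPU, but the copy it requests does not.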
- "essential feature of all modern computers" is not equivalent to "part of the CPU". Similarly, provisions for cache coherency between CPU and DMA access to memory are not part of the CPU proper, they are part of the memory controller. Note that even if the memory controller is on the same die as the CPU (as it is in the AMD hypertransport architecture) it is still not part of the "central processing unit". Jeh (talk) 03:14, 16 October 2008 (UTC)
- Please note that in the older PC architecture the "system DMA" controllers are in a chip (Intel 8237) on the motherboard, apart from the CPU. In the modern PC architecture the "system DMA" controllers are in the chipset, and the DMA "engine" for busmaster DMA is on each PCI or PCI-E option that implements it (just as it was on older minicomputer buses like DEC's Unibus). In fact, in no computer design that I'm familiar with (and I'm familiar with quite a few, going back to the HP 2100, PDP-11, and VAX) is the DMA logic a "CPU technology", that is, actually part of the CPU. There may be provision or allowance for the existence of DMA somewhere in the CPU, but that doesn't make DMA part of "CPU technologies". Failure to understand this is failure to understand the boundary of what is called the "CPU". Jeh (talk) 03:35, 16 October 2008 (UTC)
- Give a specific reference, please. For example, in the 970MP User Guide version 2.3, dated March 7 2008, figure 1-1, you will find the 970MP's block diagram. There is no "DMA" block there. Nor in figure 1-2, showing the interconnection of two CPUs; still no "DMA" block. It's true that the description of the bus transactions (section 8.4) includes one constraint related to DMA, and a bus transaction that can be used by DMA, but there is no hint that the CPU implements the DMA. In fact the very definition of DMA is that logic outside the CPU is moving data between an I/O device and memory. If the CPU performed the data movement we'd call it programmed I/O! Jeh (talk) 03:15, 17 October 2008 (UTC)
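As a concrete illustration of the "system DMA" programming mentioned above, here is a hedged sketch of the values a driver computes before writing them to an 8-bit channel of a PC/AT-style ISA DMA controller and its page register. The struct and function names are invented; the encoding follows the classic convention (16-bit offset, count-minus-one, page register carrying physical address bits 16-23, so a transfer cannot cross a 64 KB page).

```c
#include <stdint.h>

/* Register values a driver computes before programming one 8-bit
 * channel of a PC/AT-style ISA DMA controller plus its page register.
 * Names are invented for illustration; the bit layout is the classic
 * PC/AT convention. */
struct isa_dma_prog {
    uint8_t  page;    /* page register: address bits 16..23 */
    uint16_t offset;  /* address bits 0..15                 */
    uint16_t count;   /* transfer length minus one          */
};

struct isa_dma_prog isa_dma_encode(uint32_t phys, uint32_t len)
{
    struct isa_dma_prog p;
    p.page   = (uint8_t)(phys >> 16);     /* which 64 KB page     */
    p.offset = (uint16_t)(phys & 0xFFFFu); /* offset within page  */
    p.count  = (uint16_t)(len - 1u);       /* controller counts from N-1 */
    return p;
}
```

Note that nothing here executes on the CPU during the transfer itself; the CPU's only role is computing and writing these values.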
Sorry, I typed it wrong; I meant IBM Cell, not IBM Power. Cell (microprocessor)#Overview
To achieve the high performance needed for mathematically intensive tasks, such as decoding/encoding MPEG streams, generating or transforming three-dimensional data, or undertaking Fourier analysis of data, the Cell processor marries the SPEs and the PPE via EIB to give access, via fully cache coherent DMA (direct memory access), to both main memory and to other external data storage.
Also, you should give evidence for why CPU technologies have to reside within the CPU; no one claims that, as the L3 cache can remain outside the CPU die. --Ramu50 (talk) 03:18, 17 October 2008 (UTC)
- You seem to be missing Jeh's point... DMA is a computer architecture, bus, component technology. It is not a CPU technology - because it specifically involves other components doing direct memory access without going through the CPU to do it. By definition it's excluded... Georgewilliamherbert (talk) 03:22, 17 October 2008 (UTC)
Read this http://www.ibm.com/developerworks/power/library/pa-celldmas/ By the way, don't tell me that the MFC (memory flow controller) isn't part of the CPU, because the Intel Larrabee has a partitioned cache, and information can be migrated from any other field of study they wish; in this case it is from the OS. --Ramu50 (talk) 03:43, 17 October 2008 (UTC)
- Funny, I'm looking at that same page and I have the opposite conclusion. I will explain.
- Yes, in the Cell architecture there is a DMA engine in each SPE. That doesn't mean that the DMA engine is part of the SPE's processor, the SPU. In fact, making the DMA engine separate logic from the SPU means that the DMA engine can be operating in parallel with the SPU... which is part of the whole point of DMA.
- Look at Figure 2 here. The SPE's processor is the SPU, the "Synergistic Processing Unit". The DMA component is in that block to the lower left, providing a bunch of DMA channels to the SPU. Note also the description in the intro: "Each of the SPEs has its own DMA engine that can take multiple commands from the PowerPC and the SPE." This makes it quite clear that the DMA engine is separate from the SPU. It's outside of the SPU (the processor, the thing that "runs programs" in the SPE), but still part of the SPE. This is exactly analogous to a system DMA controller in the PC architecture, except that the SPU relies on its DMA controller for general memory access.
- Look also at Figure 4, "DMA data flow". It shows clearly that the MFC DMA engine is a component separate from the SPU.
- Yes, in this case the DMA logic is pretty tightly coupled to the SPU, but it still is logically separate logic, very separate from what we think of as a CPU. You don't run opcodes in the DMA controller the way you do in the SPU (CPU).
- At most this is an edge case. My position is that someone looking for "CPU technologies" is not going to be edified by finding "DMA" in this template. DMA is part of overall computer architecture, yes, but not part of "CPU technologies." On the other hand, learning about caches is very much part of learning about CPU technologies, even though not every cache is part of the actual CPU part. Jeh (talk) 03:49, 17 October 2008 (UTC)
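For reference, the separation described in this exchange is visible in how SPU code uses the MFC: the SPU issues a DMA command (e.g. via the `mfc_get` intrinsic in IBM's SDK) and the MFC's own DMA engine performs the transfer. A minimal sketch of the documented MFC constraint on transfer sizes (1, 2, 4, or 8 bytes, or a multiple of 16 bytes up to 16 KB); the function name is invented, and this is a checker only, not the transfer itself.

```c
#include <stdint.h>

/* Validity check for a Cell MFC DMA transfer size, per the documented
 * constraint: 1, 2, 4, or 8 bytes, or a multiple of 16 bytes, with a
 * 16 KB maximum per command.  Invented helper name; the real command
 * would be issued to the MFC (e.g. mfc_get), which does the transfer
 * with its own DMA engine while the SPU keeps running. */
int mfc_dma_size_ok(uint32_t size)
{
    if (size == 1u || size == 2u || size == 4u || size == 8u)
        return 1;
    return size != 0u && size <= 16384u && (size % 16u) == 0u;
}
```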
So what if they use the Synergistic Processing Unit and PPE? That doesn't mean they aren't coprocessors. That being said, they are still packaged on the same die; just because they use MCM doesn't mean they aren't part of the CPU. --Ramu50 (talk) 03:58, 17 October 2008 (UTC)
- According to that IBM page you linked the MFC (Memory Flow Controller) is where the DMA engine is (note "Memory Flow Controllers (MFCs) with DMA engines"). Now note on this block diagram, also from IBM: The MFCs are not within the boundary of the SPU. This shows pretty clearly that they're not part of the SPU. If you still disagree, please provide authoritative references, not merely your conclusions based on your own beliefs and logic (which so far are WP:OR). MCM, Intel Larrabee, etc., is similarly irrelevant.
- btw, I don't see how "Intel Larrabee has a partition cache and information can be migrate it from any other field of study they wish, in this case it is from OS" in any way proves anything about the MFC being in the SPU. Details of the Intel Larrabee really don't prove anything about the IBM Cell architecture. As for the IBM Cell, the MFC is in the SPE, but not in the SPU. The block diagram proves it.
- As I said, I admit this is an edge case, because these DMA functions are really essential to the SPEs' working even if no I/O is happening (the SPEs can't access system memory without DMA!). But edge cases are not what a template like this should be covering. IBM System/360 included instructions for packed decimal (BCD) arithmetic, does that mean this template should include a link to Packed decimal? Of course not. Jeh (talk) 04:21, 17 October 2008 (UTC)
You are the one that keeps saying the MFC resides in the SPU, which I never said. I said they are within the same die package as the CPU. Any coprocessor mechanism of a CPU can be considered a CPU technology. Intel Larrabee is just an example; I mean that anything that is implemented in the CPU can be considered part of CPU technologies.
I am not trying to edge anything. But the fact is that CPU technologies are already changing dramatically by incorporating technologies from other fields of science, and they should be inclusive, because many companies have obviously been unable to achieve 5.0 GHz easily like the PowerPC does. I am not saying we should favor PowerPC technologies, but we should try to be more open-minded about recognizing any technology that has been successful thus far.
Side note (just for reference): isn't it obvious Intel's quad core failed miserably? That is why they copied AMD's quad core topological architecture, and Larrabee is using the P45 core instead of the Core microarchitecture, which was successful. --Ramu50 (talk) 04:07, 19 October 2008 (UTC)
- I'm saying MFC does not reside in the SPU - please do not misquote me (and please do not go back and change your text now that your error has been pointed out, as you did before). You are insisting that DMA is a "CPU technology". Well, the SPU is the SPE's CPU. You said
I refer to that anything that is implanted in the CPU can be considered as part of CPU technologies
- The fact that the DMA engine is in the MFC, which is not "implanted in the SPE's CPU," belies your position.
- That it is on the same die is irrelevant. There are one-chip computers with I/O bus interfaces on the CPU die. Heck, there are one-chip computers with serial interfaces, UART, RS-232 level shifters and all, on the same die. (And I think these have sold in much greater quantity than IBM Cell processors.) Does that mean we include "serial ports" here as a "CPU technology"? Nonsense. Jeh (talk) 19:00, 19 October 2008 (UTC)
- This "if it's on a die, it's a CPU technology" nonsense is even more nonsensical if you consider the fact that CPUs have been built from discrete components, e.g. 74-series TTLs. Jeh, would you be willing to support a revert to the previous version of the template? There are more major issues with this template than DMA, as evidenced by the many pages of debate above this current one. Rilak (talk) 06:08, 20 October 2008 (UTC)
- Done. I reverted to my version dated 1 October. If there are improvements to be made (I may have demolished a few due to the daunting task of sorting through the entries), they should be made now before they get caught up in the next round of "xyz is a CPU technology". Rilak (talk) 07:07, 20 October 2008 (UTC)
DMA is part of CPU technologies. Contrary to a traditional memory controller, DMA is the only memory technology that uses x86 instructions on a multiprocessor System-on-Chip; therefore it satisfies the definition of a coprocessor. Coprocessors are meant to handle an instruction set architecture (depending on what type of architecture; obviously the instructions will vary from one to another, e.g. CISC, CISC-RISC (x86), VLIW, etc.).
Article: Scratchpad RAM
It can be considered as similar to an L1 cache in that it is the memory next closest to the ALU's after the internal registers, with explicit instructions to move data from and to main memory, often using DMA-based data transfer.
The move refers to MOV (x86 instruction).
In addition, other memory implementations such as ECC (cyclic redundancy checks), EPP and XMP are not CPU technologies. XMP and EPP are both concerned with memory-specific enhancement (e.g. reducing latencies) and contribute absolutely nothing in assisting the CPU's processing. By processing I mean the execution of a program as used in computer science (Process (computing)). ECC CRC is a programming methodology that can be used in HLGL or binary implementations in an instruction set architecture; therefore it is not a CPU technology. It is an instruction set architecture.
The mechanisms of NUMA and COMA and other types of memory controllers are also not part of CPU technologies, because they are more concerned with the "topological design" and parallel computing design. Also, to be more clear, they are concerned with the microarchitecture of the memory architecture.
- Your "evidence" is specious.
- 1) You are not supposed to use one WP article as a citation for another.
- 2) "The move refers to MOV (x86 instruction)." - you are wrong about this. When DMA does a "move" of data from one place to another the CPU is not involved. The CPU of course is involved in programming the DMA controller (start address, length, possibly a chain table or similar) but the actual moving of data is absolutely not done by the CPU's "MOV" instructions or equivalent. If it were this would be the direct opposite of DMA. Have you ever actually programmed a DMA controller? I suspect not. If you had you would not be making such silly claims.
- 2a) Since no x86 processor I'm aware of actually has scratchpad RAM, how an x86 MOV instruction could be involved in accessing scratchpad RAM is a mystery to me.
- 3) Off-topic from DMA, but: ECC CRC is not just "a programming methodologies". ECC CRC is very often implemented in hardware.
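To illustrate the point that CRC is not merely "a programming methodology": the same shift-and-XOR recurrence can be run as a software loop or wired up as a linear-feedback shift register in hardware. A minimal bitwise CRC-32 (the reflected IEEE 802.3 polynomial) as a software loop:

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32 using the reflected IEEE 802.3 polynomial 0xEDB88320.
 * The per-bit step computed here is exactly what a hardware LFSR
 * performs one clock at a time, which is why the same check can live
 * in either software or silicon. */
uint32_t crc32_sw(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;  /* standard initial value */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
    }
    return ~crc;  /* standard final inversion */
}
```

The well-known check value for this variant is crc32("123456789") = 0xCBF43926.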
- 4) Also off-topic from DMA, but: NUMA is very much a "CPU technology", as the CPU's interface to memory has to be designed for it (or else there has to be some pretty extensive glue logic). It isn't just a choice made by the chipset maker, unless the chipset has that glue in it.
- 5) Simply "providing evidence" is not sufficient grounds for your continued editing against consensus. Even if your evidence was not specious (as it most certainly is) you do not simply post it to the talk page and then go do what you want to the article (or to the template in this case). Even BRD only allows for one initial "bold" edit. If others revert your "bold" edit you are then supposed to engage in discussion, and if discussion goes against you, you do not simply restore your edits against consensus. Doing so violates WP:DIS, in three out of four points (tendentious, verifiability, and rejecting community input). Jeh (talk) 06:50, 21 October 2008 (UTC)
If I am wrong, you should give evidence. In my reply I never said they require x86 (do you mind reading more "carefully"); I clearly said the mechanism of the DMA engine using x86 instructions shows a clear mechanism of a coprocessor. Whether you have programmed a DMA controller or not, so what; the fact is your reply so far shows your inability to provide a citation from anywhere, and Wikipedia never said you can't quote from other articles.
3) Off-topic from DMA, but: ECC CRC is not just "a programming methodologies". ECC CRC is very often implemented in hardware.
CRC, cyclic redundancy check, is a mechanism of HLGL (also known as looping in HLGL programming); just because it is implemented or migrated otherwise doesn't prove these are CPU technologies.
For NUMA and COMA I am not going to argue on whether they are CPU technologies or not; some people consider the memory controller standalone from the CPU, because in the early designs some of the memory controllers were considered part of the chipset technologies. Anyhow, aside from that I don't think there is further need of getting off-topic.
Specious or not, why the hell should I care? Does Wikipedia state that only one person's ideas or your ideas matter, while all other Wikipedians have to be followers and fools who follow you? Obviously, so you might as well shut the fuck up, stop getting off-topic, and stop your unnecessary talk-page posts.
Also, WP:BRD is a suggestion contributed by Wikipedia and not a necessity, so don't even try to use those resources to back up what you claim is correct, because this is a place where people agree on ideas, not a place where people "promote" their ideas by "interpreting things" in the way they want to. --Ramu50 (talk) 16:32, 21 October 2008 (UTC)
Whatever methodologies you think will work
- "If I am wrong, you should give evidence" - You are the one challenging consensus - therefore you are the one who needs to provide evidence. So far your "evidence" for your position has been mostly specious, marginal at best. I have asked you to provide block diagrams of CPUs of which the DMA controller is an integral part of the CPU (whether it's on the same die is irrelevant), or could in some other way be legitimately considered part of "CPU technologies," and you have provided none. In fact, the examples you have provided show the opposite.
- If you have any training in logic (reasoning) you should appreciate that it is impossible to prove a universal negative, e.g. "DMA is not part of any CPU", or "DMA is not considered a CPU technology". Since few technical writers would ever consider the possibility that someone would consider DMA to be a CPU technology, there are not going to be many explicit statements disabusing this notion. However that does not give you license to put into the template anything for which people cannot find countering evidence.
- The very definition of DMA is that it is a mechanism usually used to allow I/O devices to do data transfers in or out of memory without involving the CPU. Indeed, the whole point of DMA is to unburden the CPU from such transfers. You have yet to counter this point. The best you did was to bring up Cell, but Cell's use of DMA to do transfers between system RAM and the SPE's RAM, etc., is very definitely an exception to the rule, and even there the DMA engines are separate from the "CPUs" (the SPUs). It doesn't justify putting "DMA" in the template. It's an edge case, and nav templates like this do not generally include edge cases.
- You write:
"This sentence of yours obviously show little insights to your so-called professional acclaimed skills of CPU. Larrabee has a scratchpad RAM in case you didn't know"
2a) Since no x86 processor I'm aware of actually has scratchpad RAM, how an x86 MOV instruction could be involved in accessing scratchpad RAM is a mystery to me.
- The tone of this comment ("your so-called skills," "in case you didn't know") reveals your attitude for all to see, and it is not a pretty sight. You do not seem interested in working cooperatively with other editors here, and certainly not in learning when you have things to learn. Instead you seem more interested in proving other people wrong and, as in your very early edits to talk:AT Attachment, explaining everything to all the other poor confused editors: "This wil answer more of your questions in the discussion", "So I finally understand why is so much confusion and unanswer question and I have made some updates", etc. Need I remind you that that text of yours introduced over two dozen points, all unreferenced and all of which were wrong? Need I remind you that you introduced this unreferenced screed of yours with the challenging statement that anybody who disagreed with you needed to give proof, while you had given no proof whatsoever?
- Eventually (after I had spent many hours patiently trying to explain things to you) you decided to back off from your tendentious editing on those points; you even wrote something that could be construed as an admission that you were wrong.
- However you did not learn from this lesson. You continued to tendentiously edit the same article, this time pushing your ideas about "support" of SSDs. Eventually you backed off from that as well, but only after countless more hours of editors' time (mine) were wasted.
- As I see from reviewing your edit history, you have repeated this process on many other articles. You continue to use personal attacks, as you just did above, rather than reliable sources. When RS's are offered to counter your ideas you simply ignore them, or say they don't apply, or engage in more personal attacks. When your sources are pointed out as irrelevant, erroneous, or unreliable, you simply continue citing them. None of this behavior is indicative of a cooperative spirit.
- As I said earlier, I consider this behavior highly disruptive. You have wasted countless hours of editors' time, and the net result, as far as I can tell, is a net loss to Wikipedia.
- You should consider that when so many of your changes are challenged by so many different editors, this might not be a case of many editors being "biased on you", as you put it. It might be the case that you are often wrong. At least in the opinions you express here.
- As for the technical point you raised: well obviously I've never programmed for Larrabee: it hasn't been released yet! However 1) it's a graphics processor, not a general-purpose CPU; 2) even though it does have scratchpad RAM, there's no evidence I can find that it uses DMA for transfer between scratchpad and memory. Indeed, here is someone claiming that it doesn't support DMA to or from scratchpad at all (see the first comment). (I wouldn't call that a RS, but I don't need any more than that here; the point that Larrabee uses DMA for accessing its scratchpad is yours to prove, not mine to disprove.) 3) You are still wrong when you assert
"I clearly said the mechanism of DMA engine using x86 instructions shows a clear mechanism of a coprocessor."
- because there is no such thing as "a DMA engine using x86 instructions". DMA engines explicitly do not use CPU instructions to perform their transfers; that is the whole point of DMA. Your claim that
"The move refers to MOV (x86 instruction)."
- was purely your conjecture, and it was WRONG. Of course (as I said before) there are CPU instructions involved in setting up the DMA engine and telling it to start, but that doesn't mean the actual DMA transfer is using the CPU's instructions.
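The division of labor described above can be illustrated with a toy model. This is a sketch only: the class, register names, and structure are hypothetical and do not correspond to any real controller's interface. The point it shows is that the CPU's only role is a few setup writes and a "go"; the copy loop stands in for work the DMA hardware does on its own, not CPU instructions.

```python
# Toy model of DMA (illustrative only; names are hypothetical, not a real API).
class DMAEngine:
    def __init__(self, memory):
        self.memory = memory          # shared "system RAM" as a list
        self.src = self.dst = self.count = 0

    def program(self, src, dst, count):
        # CPU-side setup: a handful of register writes
        self.src, self.dst, self.count = src, dst, count

    def start(self):
        # Everything below happens inside the engine, concurrently with the
        # CPU; no CPU instructions move this data.
        for i in range(self.count):
            self.memory[self.dst + i] = self.memory[self.src + i]

ram = list(range(16)) + [0] * 16
engine = DMAEngine(ram)
engine.program(src=0, dst=16, count=16)   # CPU programs the transfer...
engine.start()                            # ...the engine does the copying
```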
- Re citing WP, WP:PSTS says "Wikipedia itself is a tertiary source", but also "Wikipedia articles should rely mainly on published reliable secondary sources and, to a lesser extent, on tertiary sources. All interpretive claims, analyses, or synthetic claims about primary sources must be referenced to a secondary source" (emphasis mine).
- Re WP:BRD, you are correct that it is not even at the "guideline" level. On the other hand, WP:Consensus is an "official Wikipedia policy" and you are not following that either. Note that the "flowchart" on WP:CON is not much different from the one on WP:BRD. You made a bold edit to the page - fine so far. It was reverted. The next stage, if you disagree with the revert, is supposed to be "discuss," or as it says on WP:CON, "seek a compromise." But you are not engaging in proper discussion to seek compromise, you are simply raising one erroneous point after another. Of course, there isn't much of a "compromise" between "include DMA" and "don't include DMA", but that's how some things are.
- You are also continuing to use personal attacks, which are also against official Wikipedia policy.
- Bottom line: Consensus here is clearly against your changes and your so-called "evidence" has not changed consensus. Nor is your attitude as expressed here, and in your many edits against consensus, in any way encouraging of a belief that you are interested in building consensus. Or any sort of civil participation in WP for that matter.
- I'm sorry, but it's gotten so that whenever I see your name on an edit in my "watched pages" list I think "oh, no, now what's he done?" and I can't imagine that I'm alone in this. Think about it: Is that really the Wikipedian you want to be? Jeh (talk) 20:55, 21 October 2008 (UTC)
These are very definitely technologies used in CPUs. Mux'ing is commonly used to allow one bus to carry several different signals. For example, the same set of pins out of the CPU can carry "address" bits at one moment and "data" bits at another. This is widely used in many CPU buses. It's also used inside CPUs, for the same reason. Just because you have found the concept associated with ADCs and DACs doesn't mean that's the only thing to which it applies. Removing this from the template "because it's used with DACs" is another example of "a little knowledge can be a dangerous thing." Jeh (talk) 18:25, 22 October 2008 (UTC)
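The address/data multiplexing described above can be sketched as a toy model (the phase names and function are hypothetical, purely for illustration): the same "pins" carry an address in one phase and data in the next, with the phase telling the receiver how to interpret them.

```python
# Toy sketch of a time-multiplexed bus (names are illustrative only).
ADDR_PHASE, DATA_PHASE = 0, 1

def bus_write(memory, address, value):
    # Phase 1: the pins carry the address; the receiver latches it.
    pins, phase = address, ADDR_PHASE
    latched_addr = pins if phase == ADDR_PHASE else None
    # Phase 2: the very same pins now carry the data.
    pins, phase = value, DATA_PHASE
    if phase == DATA_PHASE:
        memory[latched_addr] = pins

mem = {}
bus_write(mem, 0x40, 0xAB)   # one set of "pins", two meanings over time
```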
- I was wondering whether we should create another template, because technologies like Multiplexing, Multipliers, etc. are more electrical-engineering-centric than CPU-specific. --Ramu50 (talk) 18:37, 22 October 2008 (UTC)
- What do you think CPUs are made of, if not logic elements? "CPU technologies" are all about EE, EE at the "digital logic design" level at any rate. Now if we had some links to articles like Transmission line theory, which is used in e.g. chip layout, I'd agree that that's too far into the EE side of things. Jeh (talk) 18:52, 22 October 2008 (UTC)
I really don't see why we should expand Parallel Computing on this template. There is no need: not every single CPU is built on those technologies, and we shouldn't make it so that only the dominating technologies count. --Ramu50 (talk) 18:30, 22 October 2008 (UTC)
- The template does not say "parallel computing", just "parallelism" and there is a great deal of parallelism (pipelining, etc.) happening in even the cheapest CPUs these days. This is not a matter of "only the dominating technologies counts." As for Flynn, Flynn describes a relationship between computing elements and work streams and this very definitely is implemented at the CPU level (other levels too, of course). I could see an argument that Flynn is maybe a little too esoteric for the casual learner, but I would also argue that it's a good look at formal models, which the casual learner will likely not have encountered. Jeh (talk) 18:44, 22 October 2008 (UTC)
Flynn's taxonomy of computational theories and ideas is only one choice of methodologies that can be implemented in a CPU according to a systematic design; the study of that particular science in itself has no relationship to the CPU. Not everybody who works on a processor chooses to believe a CPU must be designed that way, nor should this template push that view. That is also why I decided to remove LIFO and FIFO in designing the near-final version. --Ramu50 (talk) 01:43, 23 October 2008 (UTC)
- No, you misunderstand the paradigm. You can work on one or multiple instruction streams, and you can work on one or multiple data sets at a time. That gives four possibilities and there aren't any others. Every CPU or multi-CPU complex must fit into one of these. Flynn's taxonomy is a useful model for analyzing and describing any CPU. Jeh (talk) 04:34, 23 October 2008 (UTC)
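The four possibilities described above can be sketched as a simple classification function (a sketch only, not tied to any particular source's wording):

```python
# Flynn's taxonomy: single/multiple instruction streams x single/multiple
# data streams gives exactly four categories.
def flynn(instruction_streams, data_streams):
    i = "S" if instruction_streams == 1 else "M"
    d = "S" if data_streams == 1 else "M"
    return f"{i}I{d}D"

# A classic scalar CPU, a SIMD/vector unit, and a multiprocessor:
flynn(1, 1)   # "SISD"
flynn(1, 4)   # "SIMD"
flynn(8, 8)   # "MIMD"
```

Every CPU or multi-CPU complex lands in one of the four cells, which is what makes the taxonomy useful as an analytical model rather than a design methodology.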
- And by the way... for a long time you were arguing that DMA should be included in this list of "CPU technologies" even though you had found exactly one example (IBM Cell) of a CPU that used DMA for memory-to-memory transfer. In typical designs DMA is an add-on at the I/O bus or "frontside bus" level, and it exists only for the benefit of I/O devices. Yet you were insisting that DMA be included as a "CPU technology" based on the one very marginal example of IBM Cell.
- Now here you're arguing that "not every CPU" has any parallel processing technology in it so the parallel processing category should not be so expanded.
- Why didn't you apply that criterion to the question of DMA?
- More to the point: My god, man - are you going to challenge every last term in the template this way? Making us spend an hour or four on each writing here on the talk page until you either change your mind or give up... only to move on to another ill-considered position, requiring more time from other editors to refute? Jeh (talk) 08:27, 23 October 2008 (UTC)
I only decided to remove it because the template is already very messy after the numerous additions of articles. Before even editing the article, I could already tell the template might become overloaded after a while; it's just a matter of time. Anyhow, stop getting off-topic. --Ramu50 (talk) 22:21, 23 October 2008 (UTC)
- 1) Nobody but you thinks the template "might be overloading". Consensus is against you.
- 2) The fact that you seem insistent on making changes without discussion, then dragging editors through lengthy discussions on points you quite evidently know very little about, is very much on topic. Your arguments re. DMA and parallel processing are completely inconsistent with each other, one arguing for an "all-inclusive" approach and the other for excluding anything you don't think is used everywhere. This is extremely telling: It tells me that you aren't interested in the merits of the argument, you just like to change things.
- Not every template you look at should be changed to fit your personal view. It is one thing to add new entries (say for new products) but when you want to make major organizational changes, as you often have, you should discuss them on the template talk page first, and if you get no response there, take it to WP:RFC. In my opinion templates are part of Wikipedia's "backbone" and organizational changes to them should not be made so cavalierly. Jeh (talk) 22:53, 23 October 2008 (UTC)
First, if I was going to be all-inclusive I would have kept some of the articles in the experimental template. Also, if such a technicality as multiplexing should be added, then why did you keep removing bitwise operation and x86, since they are CPU technologies?
Regarding Parallel Computing, you guys are making up synthesis in articles without evidence. There was no evidence in the Distributed Computing or Grid Computing references that claims they are a form of Parallel Computing. Also, if you agree that things such as Flynn's Taxonomy and Register Renaming are Parallel Computing only, and not just one of many types of methodologies, then according to the article Register renaming#Details: tag-indexed register file, LIFO and FIFO should be included, as they are one of the concerns of the design architecture. Why did you remove them? You guys are making up a lot of synthetical crap as you go along the way. You think everybody is that dumb to believe you. --Ramu50 (talk) 19:12, 24 October 2008 (UTC)
- Register renaming is absolutely not "parallel computing only". Flynn's taxonomy is not a "methodology" - that's what I said - it is a categorization scheme. I can't make any sense of the rest, and I have other things to do today. Suffice it to say that if you continue to make changes to this template against consensus I have no doubt that your changes will be both reverted and reported as further evidence of your tendentious editing and WP:POINT violations.
- So now your massive re-write was just an "experimental template"? If you want to make "experimental templates" preparatory to a discussion on their merits, please do so in your own talkspace, not to the live template.
- As I said to you a long time ago, you are in a very deep hole. My advice is that you stop digging. Jeh (talk) 20:10, 24 October 2008 (UTC)
- Register renaming is for parallel computing only? We are making up things as we go?
- May I suggest:
- Dezsö Sima, Budapest Polytechnic. "The Design Space of Register Renaming Techniques", IEEE Micro, September-October 2000, pp. 70-83.
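To illustrate why renaming is useful with or without parallel execution, here is a toy model (names and structure are purely illustrative, not any real renamer or the scheme from the survey above): each write to an architectural register is mapped to a fresh physical register, so false (WAW/WAR) dependencies between independent writes disappear.

```python
# Toy sketch of register renaming (illustrative names only).
class Renamer:
    def __init__(self, num_phys):
        self.free = list(range(num_phys))  # free physical registers
        self.map = {}                      # architectural -> physical

    def rename_dest(self, arch_reg):
        phys = self.free.pop(0)            # allocate a fresh physical register
        self.map[arch_reg] = phys
        return phys

    def rename_src(self, arch_reg):
        return self.map[arch_reg]          # reads see the current mapping

r = Renamer(num_phys=8)
p0 = r.rename_dest("eax")     # first write to eax
src = r.rename_src("eax")     # a later read sees that same physical register
p1 = r.rename_dest("eax")     # an independent re-write gets a fresh one
# p0 != p1: the two writes no longer conflict, so they can be scheduled
# independently, which helps in-order pipelines as well as parallel ones.
```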
I greatly expanded the bit sizes (a few list articles would help?) and simplified the row headings. I left the old structure of subcats, but that should probably be removed. More details on the Russian computers would help, anyone? Widefox (talk) 13:51, 17 August 2012 (UTC)