Talk:Non-uniform memory access
This is the talk page for discussing improvements to the Non-uniform memory access article.
This is not a forum for general discussion of the article's subject.
WikiProject Computing (Rated Start-class)
Why No Sun / Fireplane?
This architecture predates Intel/AMD NUMA by quite a bit and is still very common in many datacenters. This article almost makes NUMA sound like a new idea or one that Intel/AMD invented recently.
For that matter, why is Intel/AMD the only mention here anyway? There are many other, larger scale, proven examples of NUMA. Opteron/HT is a tiny baby in comparison. It's silly that we're even talking about it.
[http://www.sun.com/servers/x64/x4500/arch-wp.pdf Example of Opteron/HT architecture]
[http://docs.sun.com/source/817-4136-13/1_Introduction.html#98130 REAL NUMA]
[http://www.repton.co.uk/newsletter/repton_pages/docs/v490_v890_wp.pdf Sun Fireplane diagrams]
"Current implementations of ccNUMA systems are multi processor systems based on the AMD Opteron processor." Were you born yesterday?
Opteron integrated memory management versus a new chipset to handle memory more efficiently.
The Opteron has a good idea, but I don't think they took it to the level they could have. With multiple procs, there are still CPU cycles spent communicating with ALL the other processors whenever a proc needs to use memory that's being held by a different proc. The dedicated memory for each CPU takes care of a lot of these snoops, but not all of them. This is why the Opterons STILL require a dedicated HyperTransport link for inter-CPU communication. Wasted CPU cycles are minimized, yes, but not to the level they could be.
Why not just rearchitect the chipset to maintain a table of which proc has what memory, to virtually eliminate CPU snoops altogether? A one-stop memory shop. This would eliminate the need for L3 cache altogether, and possibly eliminate the northbridge chip as well.
The article says:
- Current implementations of ccNUMA systems are multi processor systems based on the AMD Opteron processor. Earlier ccNUMA approaches were systems based on the Alpha processor EV7 of Digital Equipment Corporation (DEC).
What about the MIPS and Itanium processors used in SGI ccNUMA systems? Ericfluger 12:11, 16 October 2006 (UTC)
- AFAIK SGI actually invented ccNUMA. This was the base of their SN series, starting with the Origin 200 and Origin 2000 (aka SN-0) in 1996. They used MIPS R10000 CPUs; the 200 had only 4 of them, the 2000 had up to 512. The Origin 3000 (aka SN-1 or SN-MIPS) and 3900 completed the MIPS-based ccNUMA systems. Then came the Altix (aka SN-IA) with Itanium2 processors. Actually, the Silicon Graphics page has most of this information, and this corresponds to talks that I had with former SGI engineers. But since I don't have references handy, I won't edit the page. jschrod 11:27, 7 December 2006 (UTC)
Sandra Memory Bandwidth Benchmarks
I don't know if this should really go into the article, but even with the highly superior Core 2 processors today, AMD with its NUMA memory capability still outperforms Intel in memory-bandwidth benchmarks. For example: http://www.gamepc.com/labs/view_content.asp?id=o2000&page=6 —The preceding unsigned comment was added by DonPMitchell (talk • contribs) 22:07, 26 February 2007 (UTC).
Cache coherence needed for NUMA?
I would take issue with the statement "Although simpler to design and build, non-cache-coherent NUMA systems become prohibitively complex to program in the standard von Neumann architecture programming model" The ICL 2900 Series implemented NUMA in its Series 39 incarnation and it was very successful - and commercial programmers found it quite friendly to program. Semaphore instructions were introduced in the original 2900s and were always needed when using shared memory so there wasn't a pile of packages hanging around like with Unix trying to do tricks with normal reads and writes. Series 39 nodes could be up to 500 meters apart and were connected using fibre optic bundles. -Dmcq 14:27, 4 April 2007 (UTC)
The article very correctly states that Burroughs was a NUMA pioneer, but Sperry-Univac bought Burroughs (called it a merger; the result was Unisys) in the 1980's. Hard to understand how 'Burroughs' was doing ANYTHING in the 1990s - Burroughs did not EXIST in 1990. —The preceding unsigned comment was added by 126.96.36.199 (talk) 08:41, 15 April 2007 (UTC).
I think a history section would be good. I can't find much about the history, and there are few other comments on this page about early NUMA systems. One early example I would point to is the ICL Series 39, which came out in 1985 and had cache-coherent NUMA on nodes 1000 meters apart for resilience purposes, though later versions reduced that distance. There is a picture of a building with a hole right down the middle where the Americans bombed one of these systems early in the Gulf War, but I heard it continued working without incident. The writes to shared areas and semaphore operations were sent automatically as necessary along fibre optics connecting the nodes. Dmcq (talk) 10:22, 2 May 2014 (UTC)
Diagram purporting to show processor => memory bandwidth varying
Doesn't AFAICS show that at all. Each processor -> memory channel is exactly the same width. Am I missing something? — Preceding unsigned comment added by 188.8.131.52 (talk) 10:33, 6 November 2014 (UTC)
- Sorry, I'm a bit confused regarding what actually you're asking about? Any chances, please, to elaborate it a bit further? — Dsimic (talk | contribs) 06:23, 7 November 2014 (UTC)
Support in Windows 7
The article says there's support in Windows 7. However, the link is to MSDN, aimed at software developers, and the access is via C++ code or equivalent. Is there any access for the ordinary user to either get or set NUMA parameters in Windows 7? My question is prompted by the following message on my Windows 7 Pro system: "
HPMPI:CPWB(3076):(2716) Warning, hardware (host FMHS-CFD-2) contains numa-like packaging of sockets/cores (4/12) but the Operating System does not recognize this topology as numa (2) " —DIV (184.108.40.206 (talk) 23:25, 26 October 2015 (UTC))