Talk:Cell (microprocessor)

From Wikipedia, the free encyclopedia
This article is of interest to the following WikiProjects:

WikiProject Computing (Rated B-class, High-importance)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
WikiProject Computer science (Rated B-class, Mid-importance)
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
WikiProject Brands (not yet rated for quality or importance)
This article is within the scope of WikiProject Brands, a collaborative effort to improve the coverage of Brands on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
To-do list for Cell (microprocessor):

Here are some tasks awaiting attention:
  • Article requests : Cell BE Security Architecture
  • Expand : Power consumption
  • Verify : update information on Cells - they were expected last year (2008)

Differences between Cell / Multi-CPU Systems[edit]

It would be interesting to read what differences there are between those, both from a coder and a consumer point of view. Intel is gradually introducing multi-core CPUs with more and more cores, so aren't these becoming competing concepts?

We should stay away from points of view and qualitative speculation on Wikipedia. However... I can say that the consumer is only interested in CPUs that can run Windows, and Microsoft won't make Windows run on Cell. It has nothing to do with technology and everything to do with politics and marketing. Intel's processors and Cell aren't competing since they have totally different targets. -- Henriok 08:35, 9 February 2007 (UTC)
They may not be competing in the consumer desktop space, but they certainly are in others. The use of PC-like devices versus PS3s and set-top boxes for home media applications is one obvious example. The other one, which I am more directly familiar with, is in the high-performance computing space. I just reviewed a paper that a labmate is submitting to a conference that directly compares performance between the two, with a focus on getting the best performance of each without having to manually manage the SPE's local stores. (talk) 06:07, 13 April 2009 (UTC)

Trusted Computing / DRM[edit]

I'm extremely surprised that this article makes no mention of the Trusted Computing / DRM system built into the Cell processor. Some people think it is a good thing and some people think it is bad, but either way it is extremely noteworthy. Just to cite a single link, IBM has a technical document on it here. IBM themselves explicitly discuss Digital Rights Management there, which should forestall any controversy over applying the term DRM. I may try to add this to the article myself in the future, but I don't have time right now... and to be honest I know I'd have to work pretty hard to produce a suitably neutral-POV writeup. I'm not here to grind an axe, I came here LOOKING for information and was befuddled by its absence. Alsee 11:22, 31 October 2006 (UTC)

You can run AES and other encryption technology on the CPU, but this has nothing to do with hardcoded DRM support in the CPU itself, so it's non-existent fantasia talk. Markthemac 03:08, 5 April 2007 (UTC)

I believe he is speaking of the full spectrum of overarching, interacting, hardware-based security features which are actually an integral part of the workings of the entire processor. They're not really trusted computing, nor are they DRM... they can be used as such, but their main design intent was as general security features to take the place of software applications (e.g. anti-virus, anti-spyware, firewall, etc.). Among the most central aspects is the 'Secure Processing Vault' which, put simply, allows any number of SPEs to enter a hardware-based run-time isolation mode of sorts, whereby an SPE is able to disengage itself from the EIB (and subsequently the entire system) during runtime. This feature allows an SPE to run its calculations almost entirely without interference from the outside world (including the operating system), e.g. making alteration of program code during runtime and in memory almost entirely impossible.
However, that isn't the only feature of the security architecture; as I stated previously, there is an entire overarching, interacting spectrum of security features and protection from almost every possible angle. They all work together to form one cohesive fortress, and if any one of them is beaten it is not end-game. Many of these features are so heavily integrated into the processor's workings that they make remote hacking nearly impossible, and if you did attempt to break them locally you would probably end up bricking the entire processor, thus making the hardware useless for any mal-purpose. These powerful security features are the primary reason why government agencies and militaries worldwide are investing so heavily in the Cell processor. I too am quite surprised that absolutely nothing has been made of these features here... I guess to most people this is just a regular old processor :'( and not an entire next-generation architecture which I hope will influence the direction of the industry as a whole.
Obviously, I can see how these explanations (the first especially) may present plenty of confusion and headaches, since I don't have the time to explain them more thoroughly; as such you should read the full IBM white-paper on it here:
I really do hope that someone will come along and write something about these features, as they really are among the key benefits of the architecture. There is so much missing from this article, about this and many other things, that it's absurd.
These features, combined with the Cell's built-in networking features to be used in the planned worldwide Cell distributed computing network, will make for some very interesting implications... possibly allowing for intelligent threat-level assessment on a worldwide scale (e.g. bricking an illegally modified or rogue device as soon as it goes online). However, that part is all just speculation ;). There is indeed no proof anywhere in the public space of this; but you know, there are a lot of things about the full capabilities of this unique processor architecture that have yet to be publicised. All of its features are mere side effects of the true design intent; the goal around which its entire development was based.
The future of computing lies beyond the box. ;)
Enjoy yourselves! :)
( 04:23, 14 October 2007 (UTC))

I created this section more than a year ago, and I am astounded that the article still makes no mention of this issue. I explicitly linked a PDF in my post to forestall an inevitable tin-foil-hat/fantasia accusation, like the one Markthemac made. I see that IBM has taken down the PDF that I linked; it now yields an error page. Instead I'll post a Google link that should survive any attempt to whitewash away references to the EXPLICIT DRM support designed into the Cell: "Cell Broadband Engine Support for Privacy Security and Digital Rights Management". Right there in the title of IBM's own publication is an explicit statement of the Cell's DRM support. I may add a DRM section to the main Cell article myself, but it's very frustrating because I am only half-familiar with the technical implementation in the Cell and I have very exacting expectations and standards on technical issues. I only half-grasp the ultimate DRM implications of the specific crypto keys that ARE in fact embedded in the chip and the various crypto mechanisms that ARE in fact embedded in the chip. I've seen these crypto keys and crypto mechanisms documented in other IBM technical papers. I'd have to do quite a bit more research on the technical design before I'd be comfortable enough in my own expertise to write anything more substantial than a general explanation that the Cell carries DRM hardware with who-knows-what DRM implications. Alsee 18:59, 4 December 2007 (UTC)

As the unidentified author above your comment tried to point out, I suspect that there is a confusion of terminology. The CBEA provides for secure execution of encrypted code with hardware-enforced process isolation and hardware-supported code authentication. Here are some details IBM published in an archival, peer-reviewed journal. This should be available in a "technical library near you". It is not at all hidden. (Various articles in vol. 51, no. 5 of the IBM JR&D discuss these features. The Shimizu, Hoffstee, Liberty article gives the most details.) In less technical language, this allows programmers to be sure that their programs cannot be tampered with. If these programs manage encrypted data, then this data cannot be tampered with (if the programs are designed properly, of course).
This, however, is not Digital Rights Management per se, which is probably Markthemac's point. These features could be used to create a digital rights management system if that were desired. The historic business model for a games console involves selling hardware at a loss (and many web sites have done teardowns and analyses of the PS3 which suggest that the console is sold at a loss) and making up for that loss with sales of software. In order to support this business model, Sony (and Microsoft & Nintendo as well) must have absolute control over the software market so that the flow of royalties is guaranteed. Also, having uncontrolled software could interfere with various "family friendly" and ESRB-related features of the console, but this is speculation on my part.
In fact, in Microsoft's case (a bit off-topic for this article) their spokesman, Major Nelson, has said explicitly that the strict control Microsoft has over the software (IIRC he even said DRM) is what enables them to get content providers to agree to their video marketplace... content providers can be assured that only Microsoft's software can access those files and that said software will enforce the terms of the contract Microsoft signed.
Anyway, certainly the CBEA security features should be part of any complete discussion of the Cell processor. Using the term, DRM, would probably be unnecessarily inflammatory. Perhaps the link I provide above would enable someone to write such a section. --Philhower (talk) 22:30, 18 December 2007 (UTC) (as an IBM employee who played a small part in the Cell processor implementation, I believe that directly editing the article may violate my terms of employment)
"Using the term, DRM, would probably be unnecessarily inflammatory." As I cited, it was IBM itself that applied the term DRM. You are accusing IBM itself of being "unnecessarily inflammatory" about its own product. As far as the Wikipedia page using the term DRM goes, it cannot possibly be inflammatory for the article to accurately repeat a term applied by the very producer of the product. Alsee (talk) 08:25, 31 December 2007 (UTC)

What DRM are you talking about, application DRM or media-based DRM (i.e. music DRM)? If you mean DRM in general, then NO! Because the PS3 can rip DRM-based iTunes music files to its HDD. DRM ripping software for the PC is rare, but the PS3 can rip the files perfectly out of the box (never mind that it renames your music files). If you would like to know more, please visit the discussion section of the PlayStation 3 article on Wiki; I have a discussion going on that topic. DevonTheDude (talk) 01:23, 4 March 2008 (UTC)

DevonTheDude, this is about very technical points of the hardware innards of the Cell relating to DRM. This is (mostly) unconnected to whether or not the PS3 can rip iTunes files. The point here is that the Cell has hardware features designed to prevent "unauthorized" DRM-reading software, and, when you *are* able to play DRM files, to more thoroughly lock your files against you and lock them into the approved DRM software. Alsee (talk) 03:18, 28 March 2008 (UTC)

Open Source Hardware[edit]

I have read in many places that Cell will be open-source hardware:

  • Is it only the interface specification to the hardware which is open?
  • Or is the whole hardware design (including Verilog source code etc.) publicly available?
  • Or is the hardware design even licensed under a licence which fulfils the four freedoms of the free software/open source definition?

If anybody knows something about it => please add it!

No links to one of the many places? I find it very hard to believe. -- Henriok (talk) 14:23, 18 February 2011 (UTC)

Major Rework Proposed[edit]

In anticipation of this, I first moved the existing talk page to Archive-1.

At this point I don't have any fixed ideas about how to best proceed. I'm merely aware that the existing article has a number of defects that need to be somehow addressed, and I have certain intentions to add new content that won't help matters while the article remains in its present form.

I don't wish to tackle all the issues all at once. First, a general survey of what I see as the big fish.

citation system[edit]

The good side is that there are lots of citations, references and links. The bad side is that the article has sprouted no end of unsourced statements. In my own contributions I wasn't careful to expose my citations; I tended to hide them inline as HTML comments until I could attack this problem more systematically.

I've looked at all the Wikipedia citation methods, and none of them are issue free. My own opinion is that the new cite.php method which inlines the citations is a bad direction. I feel the overall framework could be salvaged with a version 2 rewrite, more along the lines of one of the bioinformatics citation systems I once saw, which it already partially resembles.

The existing ref|note system works fine for shorter articles, but not so great for articles of the present length, and the numbering system sometimes conflicts with other things such as footnotes, if those are employed.

Given a choice, for the more technical material, where I'm primarily citing IBM design engineers, I would lean toward the Harvard referencing style, which I find the most easy to verify and maintain while reading the text in article format; I also feel it handles a large body of references much better than the ref|note system.

One possibility is that this article should be split, leaving the main article to keep the existing style while freeing the more technical sub-articles to adopt Harvard style.


comprehension[edit]

Cell is a complex topic. I tend to feel a core article is required that tries to remain light on the jargon (EIB, coherent interconnect, DMA controller, etc.) while presenting a serviceable overview.

Note that the Cell microprocessor is conceptually distinct from the Cell BE architecture. This follows the model of the x86 architecture as distinct from derivative devices such as the Intel 80486 or Pentium. A first cut at improving comprehensibility would be to slice off some of the technical description of Cell as an architecture, since that is not what most casual readers are intending to pursue: they mostly wish to find out about Cell the commercial phenomenon.

I sense a third distinct article will be required to cover Cell software development methodologies. An example of this is the "porting VMX to SPU" text that I added to the main article somewhat prematurely. There's so much more that could be said on this subject, especially pertaining to the unnatural supervisory relationship between the PPE and SPE cores, the coherent DMA controllers, and how to cope with that mess.


I already opened this subject under comprehension because I feel that is the primary driving force in shaping the main article. Finding natural cleavage points to simplify life for the expositors (myself among them) is secondary.

From my present vantage point, I would probably begin by introducing a split into these three titles:

I chose those titles so as not to confuse people looking for cell biology.

I'll leave it off here for tonight. Does anyone have any observations about whether this will work or not work? More sections? Fewer sections? Different emphasis? Exactly where to cut? Please make your views known. MaxEnt 07:54, 7 June 2006 (UTC)

controversy of the day[edit]

Not even worth mentioning since it's just a case of stupidity or willful ignorance on the part of some hasty reporters. This has nothing to do with the Cell microprocessor or architecture. -- uberpenguin @ 2006-06-08 00:07Z

I came here for exactly the same reason. As far as I know, those numbers were presented by Sony. How exactly they managed to botch their measurements beats me.
I am not really concerned about the slower triangle transform rate (NV4x is not triangle limited by using 2 tris per pixel) but the memory access performance is something awful.
What's the real deal with this issue?
MaxDZ8 talk 06:40, 19 July 2006 (UTC)

The "Local Memory" in the Sony slides is the RSX's video memory; the bandwidth given is the bandwidth for reading back from the video memory into the Cell. AFAIK that wasn't even possible at all with the Emotion Engine, and isn't something that is useful in computer games. If you want to make a screenshot, you can just write to main memory. --Silvestre Zabala 19:46, 19 July 2006 (UTC)

(Sorry for the late reply; I don't really understand how I missed this.)
It's still something quite ugly if you want to GPGPU something and stream it back to the CPU for further processing. It looks quite useful to me for next-gen games, unless they have very high GPGPU capabilities, and this would mean the RSX would be USM 4+ (or they just have a very good shader compiler). MaxDZ8 talk 08:03, 19 September 2006 (UTC)

English usage of "belies"[edit]

I don't quite understand the intent of this phrase in the Overview: "The name [Cell BE] belies its intended use, namely as a component in current and future digital distribution systems;"

The American Heritage dictionary defines "belies" as follows:

  1. To give a false representation to; misrepresent.
  2. To show to be false; contradict.

What is the contradiction between the concept of "cell" and the concept of "digital distribution"? I wonder if the author of this phrase actually has something like "underlies" or "gives a hint about" in mind. At any rate, I think the statement as it is will not parse for many non-native speakers of English, and should be rephrased to not use the word "belies".

Fbkintanar 03:25, 20 July 2006 (UTC) Cebu City, PH


"Cell" may also lead to imagery of a jail/prison cell: something that would promote separation and individuality, rather than something promoting cooperative work aimed at media dispersion. anon 1:55pm 29, December 2009 (UTC) —Preceding unsigned comment added by (talk)

How about "The name of the Cell implies the smallest structural part of a larger organism, as a component in current and future digital distribution systems;" I don't want to change this article as it's really good considering the complexity and points of view involved. --Notagoodname 21:32, 3 September 2006 (UTC)

belies, for some reason seems to be an unusually common victim of malapropism Varka (talk) 23:03, 6 July 2008 (UTC)

subarticle navbar mockup[edit]

First, I familiarized myself with Wikipedia:Summary style and considered some other issues in choosing good names, and also found one about how to break up a long article into sub-articles.

Then I started to mock up a possible approach. Created category:Cell BE architecture and then modeled a navbar after the one in Isaac Newton, and mocked up some text in the podlet articles.

The sw-dev podlet could at some point split in a separate Linux on Cell article, but I think it would be best for now to keep all the development concerns in one place until it gets large enough, if that happens, to spill over.

I'll be starting soon to move large blocks of text out of the main article into the corresponding podlets. Any suggestions about where to draw the scalpel are welcome. MaxEnt 23:29, 9 June 2006 (UTC)

fat reduction and restructure[edit]

The main purpose thus far was to reduce the overall size of the main article and align it better with the topic designations for the subarticles. Much remains to be done. MaxEnt 06:41, 10 June 2006 (UTC)


I was tracking down other WP articles that might have pertinent content.

Added a small screed to the MIMD article of the sort that got me in a little trouble here for being too colorful. I'm determined to educate, even at the cost of having to say something. Based on what I wrote there, expect the MIMD references in the present article to diminish and go into the west, as Galadriel once spoke. MaxEnt 09:11, 10 June 2006 (UTC)

Should the "Other MIMD-on-a-chip processors include" part of the "See Also" section of the "Cell microprocessor" article be moved to MIMD? Leave behind only a link, something like
* "The Cell microprocessor is one of several MIMD-on-a-chip processors"
 ? -- 04:28, 24 September 2006 (UTC)

Heavy copyedits history and overview[edit]

It's hard to dial-in to the major fault lines without cleaning up the text as it already exists. Note that the MIMD passage that bothered me so much was actually in the stream processing article, not here, as I realised again later. MaxEnt 00:39, 12 June 2006 (UTC)

I created a somewhat odd inline footnote in the history section about the often-confused distinction between cores and threads. I think it works well to get that out in the open in a passage most readers are likely to at least skim. Appreciating the highly threaded nature of the Cell is a matter of core (ouch) comprehension for the rest of the article. MaxEnt 00:39, 12 June 2006 (UTC)

Copyedit lede[edit]

I liked the content I introduced into the lede a week ago, but the wording was a little heavy and the resulting text too blocky to read comfortably. I split some sentences and paragraphs, added more intralinks, and fixed a bad parallel sentence construct by introducing the word favors.

After responding to some initial criticism that my stronger vocabulary (modest, ground breaking, prowess) was too POV, I no longer feel there is any imbalance here. My initial use of ground breaking was indeed excessive, but now that I've demoted and recast it as breaks ground and anchored the use to obtained many patents it no longer functions as a free-radical to convey an adulatory impression. Likewise I feel that my depiction of the PowerPC core as modest is properly balanced by describing the floating point capability as prowess. The people who hate Cell (Carmack) point at the crappy PPE; the people who adore Cell (supercomputing centers) point at the staggering floating-point subsystem; these two words together effectively flag and bookend a wide spectrum of opinion. Likewise exotic is an effective flag for the jaded architect who only wants to know "what does Cell have that I've never seen before?" If anyone still thinks the lede is too POV, you'll have to speak up again. MaxEnt 02:16, 12 June 2006 (UTC)

major edit extended through the weekend[edit]

It's going a little slower than I hoped, as I wasn't able to find as many hours over the last couple of days as intended. Some of the material is being reworked in my personal MediaWiki (the conehead has landed) and won't be visible here until I'm ready to merge the new material. User:W.marsh came along and removed my major-edit pylons 48 hours before my reserved period expired, I guess because he equates churn with progress, and there wasn't much churn visible here for a couple of days. If anyone wants to become involved with this restructuring effort I'm more than happy to negotiate sub-tasks, but please don't remove the work-in-progress pylons unless you plan to pitch in; they aren't impeding anyone who wishes to contribute, just those who wish to contribute without checking in beforehand with those of us who are already active. MaxEnt 09:35, 15 June 2006 (UTC)


It seems I have to make frequent reports or the killer T cells come along and abscond with my pylons.

Content for software development[edit]

Today I'm trying to work up some content for the software development podlet. I'm busy extracting glossary terms from this series of articles:

No new content has landed yet. Just one grouse about why this process takes so long. IBM regards the PPE and SPE processors as two distinct processor types, but refuses to provide consistent formal names. The PPE ISA is usually termed a PowerPC ISA, or more precisely a PowerPC 970 compatible ISA, while the SPE ISA has no formal name at all. MaxEnt 12:25, 15 June 2006 (UTC)

From GreenbDS -- we've worked to clean up the Power branding so it's more consistent. If you can update this page based on we appreciate it. The SPU ISA is still just the "SPU ISA". Since SPUs never stand alone, it probably does not pay to have some sort of branding associated with that aside from the Cell branding itself. But your grouse is gratefully accepted!

Some wrong numbers?[edit]

There is a text "At 3.2 GHz, each SPE gives a theoretical 256 GFLOPS of single precision performance. The PPE's VMX128 (AltiVec) unit is fully pipelined for double precision floating point and can complete two double precision operations per clock cycle, which translates to 6.4 GFLOPS at 3.2 GHz; or eight single precision operations per clock cycle, which translates to 256 GFLOPS at 3.2 GHz[7]." Well, this would give you 2.3 TFLOPS, while the Cell spec says 256 GFLOPS total... Also, 3.2 x 8 = 25.6, not 256.

From what I see, this is actually an error on IBM's page. It says: "fully pipelined DP floating-point support in the PPE's VMX" - since when does VMX support DP floats?

From GreenbDS -- please see The 200+ GFLOPS is accurate for the whole chip, not for "each SPU."
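The arithmetic behind GreenbDS's correction is easy to check. Each SPE's SIMD unit is 4 lanes of single precision and can retire a fused multiply-add per cycle (2 flops per lane), i.e. 8 flops per cycle; a rough sketch using the commonly published figures for the 3.2 GHz part:

```python
# Sanity-check the Cell's published single-precision peak figures.
clock_hz = 3.2e9        # 3.2 GHz part
flops_per_cycle = 8     # 4-wide SIMD x fused multiply-add (2 flops per lane)
spe_count = 8           # SPEs on the die

gflops_per_spe = clock_hz * flops_per_cycle / 1e9
gflops_all_spes = gflops_per_spe * spe_count

print(gflops_per_spe)   # 25.6 -- not 256, as the quoted text claims
print(gflops_all_spes)  # 204.8 -- consistent with the "200+ GFLOPS" chip-level figure
```

So the "256 GFLOPS" in the quoted passage only makes sense as a (rounded-up) whole-chip number, not a per-SPE one.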


Which VMX on the VMX page should this link to? -- 22:56, 18 September 2006 (UTC)

AltiVec -- mattb @ 2006-09-19T00:34Z

Presently this is described as VMX128 which is incorrect for Cell.

All of the technical documents I've seen have referred to it as VMX128, a VMX unit with 128 bit wide vector registers and some extra odds and ends. Do you have a source to back up your assertion? -- mattb @ 2006-10-10T17:17Z
VMX128 is VMX with 128 registers (as implemented on Xenon), vs. "standard" VMX which has only 32 vector registers (which is what the PPE implements). Both have 128 bit wide registers.
You're correct. -- mattb @ 2006-10-16T15:29Z
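For scale, the difference between the two register files is purely in register count, since both are 128 bits wide. A back-of-the-envelope comparison (my arithmetic, not from the thread):

```python
# Architected vector register file sizes: standard VMX (Cell PPE)
# versus VMX128 (as implemented on the Xbox 360's Xenon).
reg_width_bits = 128

vmx_regs = 32       # standard VMX, as on the Cell PPE
vmx128_regs = 128   # VMX128, as on Xenon

vmx_bytes = vmx_regs * reg_width_bits // 8        # 512 bytes
vmx128_bytes = vmx128_regs * reg_width_bits // 8  # 2048 bytes (2 KB)
```

That 4x difference in architected vector state is the whole distinction; the lane width and most of the instruction set are shared.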

IBM Blade[edit]

Disclaimer: I work for IBM.

IBM has announced that general availability of an IBM dual-Cell blade (called "QS20") will commence on September 29. Here are the specs:

Also -- in terms of the lengthy list of links on the main page -- let me encourage you to point people to a portal to many of them:

I apologize that I am new to posting comments here. For tracking purposes, please call me "GreenbDS".

branch prediction[edit]

"The SPU branch architecture does not include dynamic branch prediction, but instead relies on compiler-generated branch prediction using "prepare-to-branch" instructions to redirect instruction prefetch to branch targets". [1].

also "Two SIMD instructions can be issued per cycle: one compute instruction and one memory operation."

Also, what is the power consumption? The Berkeley article estimates 40 watts. -- 17:19, 20 October 2006 (UTC)
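To see why compiler-generated "prepare-to-branch" hints matter so much on a core with no dynamic predictor, here is a toy throughput model. The ~18-cycle flush penalty is the figure commonly quoted for the SPU; the branch frequency and hint accuracy below are assumed values for illustration only:

```python
# Toy model: effective cycles-per-instruction on an in-order core with no
# dynamic branch prediction, where compiler "prepare-to-branch" hints
# resolve some fraction of branches without a pipeline flush.
base_cpi = 0.5       # dual issue: up to 2 instructions/cycle (1 compute + 1 memory)
branch_freq = 0.10   # assumed: 1 in 10 instructions is a branch
flush_penalty = 18   # cycles lost on an unhinted/mispredicted branch (commonly quoted)

def effective_cpi(hint_accuracy):
    """Average cycles per instruction given a fraction of correctly hinted branches."""
    missed = branch_freq * (1 - hint_accuracy)
    return base_cpi + missed * flush_penalty

print(effective_cpi(0.0))   # no hints at all: throughput collapses
print(effective_cpi(0.9))   # 90% of branches correctly hinted
```

Under these assumed numbers, unhinted code runs several times slower than well-hinted code, which is why SPU compilers and hand-tuned kernels work so hard to eliminate or hint branches.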


IBM have mentioned they may/will be making 'Cells' with optimised double-precision instructions on the SPUs. "JK: We used that first tape out to get the initial software up and running. There were modifications we did to the chip over time. The design center is still active and participating. Our roadmap shows we are continuing down the cost reduction path. We have a 65 nanometer part. We are continuing the cost reductions. We have another vector where we are going after more performance. We have talked about enhanced double-precision chips. Architecturally we have double precision but we will fully exploit that capability from a performance point of view. That will be useful in high-performance computing and open another set of markets" from

Perhaps in the future it will be necessary to distinguish between the Cell used in the PS3 and other Cells - hope this is useful.

400 million[edit]

What's with the first paragraph? It states that the project cost has been $400 million but gives no citation for this.

I added this link. -- Henriok 10:18, 12 December 2006 (UTC)

About the Successors of Cell[edit]

Wouldn't it be better to add a section about this? While it's wonderful to know how good it is right now, I just can't see any future research on this architecture, like I see with all the others. Wouldn't it be nice to add a section about this? —The preceding unsigned comment was added by MTd2 (talkcontribs) 13:01, 12 December 2006 (UTC).


To whom it may concern. says 65nm cells have been made. 16:33, 3 January 2007 (UTC)

Commercialization Section: Expansion Request[edit]

Please expand the following paragraph within the Commercialization section:

This Cell configuration will have one Power processing element (PPE) on the core, with eight physical SPEs in silicon. In the PlayStation 3 one SPE is locked-out during the test process—a practice which helps to improve manufacturing yields—leaving seven SPEs operational in PS3 software.

How does this process improve manufacturing yields? In what way is it 'locked out'?--RedPoptarts 23:41, 4 January 2007 (UTC)

When choosing CPUs for the PS3, Sony can allow one SPU to be broken. They can choose second-rate processors, something that neither Mercury nor IBM does when they choose Cell CPUs for their products. I think that the disabling of SPUs is done inside the processor or the chipset; there are features that can detect and disable faulty circuitry. Either that or a combination with the operating system. I think that the operating system locks out or reserves an SPU for itself, leaving just 6 SPUs to the game developers. I have read this somewhere but a quick googling revealed nothing. This might need some more digging. Either way, this article needs some rewriting. As it stands now, it reads almost as if Cell were a future product, but it's been commercialized for about a year by now. -- Henriok 08:22, 5 January 2007 (UTC)
I've been told that adding an extra spare, then substituting that spare for any row that gets hit by a hardware defect, has long been used in DRAM manufacturing and ECC memory.
This improves yields -- producing more usable chips per wafer -- because in addition to the "normal" yield of perfect chips, the yield now also includes chips with a (correctable) defect.
But I've never heard of this strategy -- "locking out" a section of the chip -- being used for any CPU chip (other than the Intel 80486SX). Do you have any references? Or is this just speculation? -- 16:07, 8 May 2007 (UTC)
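The yield benefit of tolerating one dead SPE is easy to quantify with a toy binomial model. Assume each SPE independently survives fabrication with some probability; the 90% used below is an assumed value for illustration, not a published defect rate:

```python
from math import comb

def yield_with_spares(p_spe, total=8, required=7):
    """Probability that at least `required` of `total` SPEs are defect-free,
    assuming independent per-SPE survival probability `p_spe`."""
    return sum(comb(total, k) * p_spe**k * (1 - p_spe)**(total - k)
               for k in range(required, total + 1))

p = 0.90  # assumed per-SPE survival probability, for illustration
perfect = yield_with_spares(p, required=8)   # all 8 SPEs must work
tolerant = yield_with_spares(p, required=7)  # PS3 case: any 7 of 8 suffice
print(round(perfect, 3), round(tolerant, 3))
```

Under this assumption, accepting dies with any 7 working SPEs nearly doubles the fraction of usable chips per wafer (roughly 0.43 vs 0.81), which is the sense in which "locking out" one SPE improves manufacturing yields. This model ignores defects outside the SPEs, which affect both cases equally.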

Cell using the Windows Vista?[edit]

Hi everyone, I want to know if the Cell processor would be capable of working with Windows Vista or future Microsoft operating systems one day. Do you think that would be possible if the Cell is heavily commercialized and if it's used in new servers? I believe this can happen, but I'd like to know your opinion. And one more thing, how has Yellow Dog Linux been working on the PS3? Is that a nice OS to work with?

There's no technical reason why Microsoft shouldn't be able to engineer Vista to work on computers based on Cell processors, but there are certainly a lot of political and market reasons why that won't ever happen. In short: Microsoft won't spend resources backing a direct competitor's product (the Sony PlayStation 3), and they won't spend resources backing the diverse niche products where the Cell is going to end up. Cell is bound for set-top boxes, embedded computers, supercomputers and game systems, where Windows has virtually no presence at all. I don't see Cell processors breaking into Microsoft's core market of desktop and server computers anytime soon, or at all.
Yellow Dog Linux is certainly running on PS3s, and running well. It's as nice to work with as any Linux, I'd say, but for traditional home and work use I'd recommend a cheap traditional x86 box with some other Linux distro like Ubuntu. The Cell processor shines when you can put the SPUs to use; for surfing, mailing, word processing and the like, you'll only use 10% of the system's capacity. If you're doing 3D rendering, scientific computation of dynamic flows, or gene splicing, you might want to try out a PS3 cluster with YDL though. -- Henriok 11:16, 5 February 2007 (UTC)
Do you think it would be possible to hack a version of Windows XP or Vista to make it work on a PlayStation 3? Check my YouTube videos! Just put raidentheninja in the search bar on YouTube. 18:01, 24 March 2007 (UTC)
No, that would be completely impossible for someone without access to the entire source code of Windows, and even then it'd be a pretty enormous undertaking, porting an entire operating system to a platform it wasn't designed to run on. -- Henriok 19:44, 24 March 2007 (UTC)
Actually, it probably wouldn't be that hard for MS to get NT working on the Cell. Remember that Windows NT4 supports the PowerPC. The Xbox 360 uses a (highly derived) NT5 kernel, I believe, so they could probably get the Xbox 360 OS working on the Cell too. The PPE is fairly similar to a PPC, I believe, so it wouldn't be that hard, I presume. Nil Einne 20:46, 28 October 2007 (UTC)
The PPE _is_ PowerPC, and it's almost identical to the three cores in the Xbox 360 processor. However, there's a lot more to it than porting the kernel, some APIs and a select few drivers. After that, no application will run. MS would have to use the same technology that Apple and IBM have used for binary emulation, or else not a single application will run unless recompiled. Way too much hassle. And it would make absolutely NO business sense to do it. Why on earth would MS back their biggest competitor in the gaming market? Supporting the PlayStation 3 would be insane for MS. Using Cell-based accelerator boards in Windows is another matter completely, and is already done. Mercury is making them, and has been making them for quite a while. They cost an arm and a leg though.

I agree with the above statements. It would be insane for Microsoft to give something to Sony; not a smart business decision, unless they sold them only what they knew was not the best they had. It makes me feel the same way about the DVD vs. Blu-ray debate. Microsoft is going to, what, buy a Sony Blu-ray drive? Ha. It's also why I don't count the PS3 having a Blu-ray drive as a premium: it costs Sony next to nothing to use one. It might have cost $200 to buy a Sony Blu-ray drive at retail, but it doesn't cost Sony $200 to put one in their PS3; the roughly $15 it cost to manufacture was MORE than recovered with the sale of a PS3. They already had them just sitting there! So when people said "oh well, it's worth the higher cost because it has a Blu-ray player", I laughed. Sony was (I believe) the first company to release a Blu-ray player in the US market, and together with Pioneer (or Panasonic, I can't remember) designed the first HD optical disc drive using the newly developed blue-violet 405 nm laser.[1] Sony had BIG involvement with everything Blu-ray while they were making the PS3; in terms of return on investment, they would have been stupid not to put one in. But if Microsoft had put in a Blu-ray drive, they would have had to sell the 360 for double the difference in cost between the PS3 and the 360 (so $500 instead of $300). A hypothetical "Xbox 360 Blu-Ray Edition" (which obviously doesn't exist) would have cost $500 against Sony selling a PS3 for $400 with a Blu-ray drive, because Microsoft doesn't make Blu-ray optical drives: they would have had to buy them, so they would have had to pay more, charge more, make less, and sell less. If Sony did NOT make Blu-ray drives, the PS3 wouldn't have one either, because it would have either run the company's sales into the ground or forced them to charge even more for it.
Just like the Xbox would not have Live or Media Center if Microsoft hadn't already designed and built them. It's just good business to be able to make EVERYTHING in-house. The PS3 was a good candidate for the Cell; the Xbox is not. So after all that (I do apologize for going somewhat off topic, but I needed to explain the backdrop for my statements), I will say, however, that Cell processors in non-PS3 applications wouldn't be victims of the scenario described above. If Sony decided to start using the Cell CPU in their computers, I say go for it, but I wouldn't expect to see out-of-the-box Microsoft Windows products on that PC. Sony doesn't make computer OSes, so it would actually SAVE money and generate more profit to buy an established OS from a company that started out making, and basically only makes, OSes; it would cost more to design their own from scratch, especially for Cell. They could just as easily use Linux or whatever they want. Sony teamed up to get Cell into their gaming product; they didn't make it all themselves. There are tons of PC manufacturers: they could design their own OS or their own CPU, but they all use what's available because it works. The thing is that we need to differentiate the PS3's Cell from Cell processors that could be sold to any company that wants to make a PC with it. For the longest time Apple used only Apple parts, while PCs used a menagerie of stuff: you could say "I want a computer!" and they would say, you like Intel with that? Or AMD? ATI or Nvidia? And you could mix and match to your heart's content: do you want Windows? Linux? Etc. But you couldn't build an Intel-based PC with Nvidia and Asus parts and put OS X on it (though Apple, I recall, has started using Intel now). That's how they differentiate themselves: product/brand exclusivity. Which is why Microsoft would not put a Blu-ray drive in the Xbox if they had to buy it from Sony.
And it's why Sony went with the Cell: everything was from the ground up with it. It's why iPods use iTunes and APPLE computers always come with Mac OS.[2] To summarize: the Xbox and PS3 are THE consoles. I would only see Microsoft letting Sony use genuine "Windows for PS3" if Sony agreed to put free Sony Blu-ray drives in the new 360s, hahaha. (talk) 06:57, 20 March 2011 (UTC)Mike

And that being said, look at Microsoft. They have problems with their core platform, x86: two different versions for 32- and 64-bit, problems with drivers, applications, and all sorts of crap. What would it look like bringing some exotic hardware into that mix? It wouldn't look pretty at all. Just like their Itanium support: it didn't go well, there are hardly any apps, and no one is using it. -- Henriok 21:35, 28 October 2007 (UTC)
Mercury Cell Accelerator Boards are complete systems by themselves; they run a Linux OS and are in that respect no different from a standalone Cell-based system that could be accessed through a network interface from a Windows platform. --Dwarfpower 11:43, 29 October 2007 (UTC)

The only Windows that exists for PPC, and could possibly be ported to newer PPC hardware, is Windows NT 4. Apart from the Windows-Intel alliance, there are no other reasons why a PPC computer should not run Windows. But the lack of Microsoft support has even made Apple go from PPC to Intel (besides raw CPU power per price; however, newer PPC CPUs tend to change that). —Preceding unsigned comment added by (talk) 17:21, 2 September 2009 (UTC)

The big problem there, though, is the lack of good computing hardware: nobody makes Cell-capable motherboards for real general-purpose systems, and unless it could be done cheaply, and there was enough incentive to support porting large quantities of software to the platform (not really feasible unless the majority of software goes the F/OSS route and proprietary software is abandoned), no such thing is likely to be built. This "chicken and egg" problem is what's kept us "hooked on x86" ever since it became a major player. It's kind of sad, too, since we might have even more performance from our computers if x86 had been abandoned for something better and that something better had been developed to a similar degree. mike4ty4 (talk) 08:30, 23 November 2009 (UTC)

New pic[edit]

Peter Hofstee, chief architect of Cell, came to talk to my research group today. I got a picture of him and put it on commons - Image:Peter Hofstee.jpg Raul654 22:46, 25 April 2007 (UTC)

Clean up, make up to date[edit]

Is it just me, or does someone else think that this article should be cleaned up quite a bit and brought up to date? I think it should be split into multiple parts; there seems to be a work in progress to that effect, but it appears abandoned. Is there anyone who really cares, and might we work together on this? -- Henriok 17:33, 27 April 2007 (UTC)

The cell microprocessor's nine cores?[edit]

I have seen information stating that the Cell microprocessor has nine separate cores, each processing element being a core. Is this information true? Of all the sections stating that the Cell microprocessor has nine cores, none are cited. Can someone please cite those sections? Thank you, TheN1Armyguy. 16:30, 28 April 2007 (UTC)

It has a PowerPC as its main processor, and eight SPEs (although, IIRC, you can only use seven of them; the eighth serves as a spare in case the fab has a failure, which is how IBM gets higher yields). Raul654 16:31, 28 April 2007 (UTC)

I wasn't asking that. I was asking if each separate processing element is a core. That would mean this thing has 9 cores. I was asking this about the Sony PlayStation 3, but I'm sure it is the same there. Please answer the question, and/or cite the info. TheN1Armyguy. 17:11, 28 April 2007 (UTC)

Raul did answer your question. What exactly do you mean by "core"? The question you're asking isn't altogether clear. Read the article's description of SPEs to decide whether they fit the definition you have in mind. What's more, as Raul pointed out, the eighth SPE doesn't really "count" since it is permanently disabled and is merely (at this time) a yield-increasing feature. -- mattb 18:12, 28 April 2007 (UTC)

In "Commercialization", paragraph 2, it states that the Cell processor provides 9 independent threads of execution. That would mean the Cell has 9 cores and can do nine-way multithreading, which seems like way too much.

"Note that the relationship between cores and threads is a common source of confusion. The PPE core is dual threaded and manifests in software as two independent threads of execution while each active SPE manifests as a single thread. In the PlayStation 3 configuration as described by Sony, the Cell processor provides nine independent threads of execution."

So: does each processing element count as a core? (Example: the Microsoft Xbox 360's CPU has three cores, i.e. a triple-core CPU, which makes it able to run three threads because each core can do a single thread.) TheN1Armyguy. 18:52, 28 April 2007 (UTC)

A core is a functional unit in a processor capable of independent execution. By that definition, yes, each SPE is a separate core, as is the PowerPC, giving nine total cores in all. In principle, there is no reason why each of these cannot execute its own thread. Does that answer your question? Raul654 21:40, 28 April 2007 (UTC)
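The nine-thread figure discussed in this thread follows directly from the counts given above; a one-line arithmetic sketch:

```python
# Thread arithmetic for the PlayStation 3 Cell configuration,
# per the figures quoted in this discussion.
ppe_threads = 2                            # the PPE is dual-threaded
physical_spes = 8                          # on the die
locked_out = 1                             # disabled to improve yield
active_spes = physical_spes - locked_out   # 7 usable, one thread each

total_threads = ppe_threads + active_spes
print(total_threads)  # 9
```

(Of those nine, PS3 game developers reportedly see fewer, since one SPE thread is reserved by the OS, but that is a platform policy, not a property of the chip.)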

Yes it does, thank you. But think about how much that would cost; it seems like thousands of dollars. Then look at the PlayStation 3: $600. That raises another question: wouldn't this cost thousands of dollars? The Cell having 9 cores seems possible, but it would cost a huge amount of money. Can you or someone else please cite the information in paragraph 2 of the "Commercialization" section? Thank you. TheN1Armyguy. 02:26, 29 April 2007 (UTC)

No, adding multiple cores does not dramatically increase the price, for many reasons, some more complicated than others. (The reasons are described better in my PhD thesis, but that's not ready for publication yet.) You have to realize that the cost of the silicon is only a tiny, tiny fraction of the total price of the processor. The cost of the processor is dominated by (a) total fabrication costs (a state-of-the-art chip fabrication plant capable of producing chips with transistors dozens to hundreds of nanometers wide costs around a billion dollars) and (b) verification costs (it takes an army of computer engineers to verify Intel's stuff). Adding multiple identical functional units is relatively cheap, because once you verify one of them, you've verified all of them. My research group is working on a chip with 160 cores. Raul654 02:36, 29 April 2007 (UTC)

Okay, all that's left now is a source. Amazingly, it's hard to find on any Sony or IBM websites. Btw, I'm only 14, but I love my PlayStation 3. TheN1Armyguy. 03:45, 29 April 2007 (UTC)

Look at the picture of the Cell and just count. Or you can follow the link to this page to get the necessary quote. IBM has a tremendous amount of documentation for the Cell at this site, and they also have some interesting reading about their Cell-based QS20 blade. -- Henriok 11:00, 29 April 2007 (UTC)

I also saw that the PPE itself can do dual threading, while the SPEs can each do single threading. If you don't count the SPE that is disabled, and the one that is used for the OS, that gives the Cell B.E. 8 threads, not 9. So why does it say it can do 9? Also, since each element is a core, how come I've heard that the whole Cell B.E. runs at 3.2 GHz? If the PPE runs at 3.2 GHz and it has 8 other cores, then how come it says the whole thing runs at 3.2 GHz? What happened to the other SPEs?

I want to know if the whole thing runs at 3.2 GHz, or if every core runs at 3.2 GHz.

I also want to know if it can do 8 or 9 threads at a time.

TheN1Armyguy. 16:39, 29 April 2007 (UTC)

The "one SPE reserved by the OS" is PlayStation 3-specific; it has nothing to do with any limitation or restriction of the microprocessor itself. The entire Cell microprocessor AND all of its individual parts run at X GHz (ignoring any clock-gating trickery, which I don't think Cell utilizes). This is the clock frequency, a signal used to coordinate data transfer throughout the microprocessor. I get the feeling that you may be getting hung up on one-dimensional performance metrics like thread count and clock speed. I'd advise against using these in direct comparisons of microprocessor capability, since capability is a much more complicated function of both hardware and software construction.
Incidentally, for your future reference, questions like this should be directed to the reference desk rather than asked on article talk pages. Article talk pages are for discussion of the article itself, not the subject of the article. Please keep this in mind for the future. -- mattb 17:05, 29 April 2007 (UTC)

Okay thanks, I'll try there. TheN1Armyguy. 17:12, 29 April 2007 (UTC)


The bit about the initial budget, "four-year period beginning March 2001 on a budget reported by IBM as approaching US$400 million", is surely wrong. $100 million/year would account for about 6 people and no hardware, prototypes, test labs, etc. Should that be $4 billion? quota

That does look quite low, and following the citation through, it only states that $400 million was put toward its development by Sony (or, as the article specifically states, Ken Kutaragi), giving no other information on IBM's and Toshiba's financial contributions.
Although I can't find anything else about the cost of the development of the chip anywhere else. (talk) 06:32, 20 April 2008 (UTC)

HDTV usage[edit]

"Reportedly, Toshiba is considering producing HDTVs using Cell. They have already presented a system to decode 48 MPEG-2 streams simultaneously on a 1920×1080 screen."

It should be added that each MPEG-2 stream was SDTV (according to the first cited source). They were scaled down even further and shown on an HDTV screen, but the video streams themselves weren't HD. This is an important detail; don't make people think the Cell has been used to decode forty-eight 1920×1080 video streams ;)

<nitpicker> Also, "1920×1080" links to 1080i. Was it an interlaced display or a progressive display? In the latter case it should be 1080p ;) </nitpicker> 20:18, 29 September 2007 (UTC)

System Memory (response to clarify me request)[edit]

I can't find a good source giving the size of the processor's system memory (i.e., how many real address bits are implemented). But if you look here: (Cell BE Initialization guide), it mentions that supported system memory configurations range from 64 MB to 64 GB, which may help clarify the situation. —Preceding unsigned comment added by Philhower (talkcontribs) 19:55, 28 November 2007 (UTC)

I can't seem to find anywhere whether the RAM is hardwired on all implementations of the CBEA or only on PS3s, what RAM is compatible and how, etc. This is of note for those of us PS3 owners who might want to turn our PS3s into high-end Linux PCs, or those who want to maximize the potential of some other CBEA-based PC. P.S. I think that the new "successor" (its name escapes me atm) deserves its own article (with links). Varka (talk) 23:38, 6 July 2008 (UTC)

The RAM in the PS3 is soldered to the motherboard; there are no slots. The same goes for all other devices that use the Cell BE processor that I know of. This is one of the new features of the PowerXCell, which has an on-board DDR2 controller able to use 1-32 GB of slotted RAM. -- Henriok (talk) 07:29, 7 July 2008 (UTC)
Thanks, this answers all of my questions, although I still feel that WP needs more info on PowerXCell 8i units. The highlights say "Increases main memory capacity up to 16 GB" though. Varka (talk) 23:59, 12 July 2008 (UTC)

16 GB isn't the limit; it's due to cost. Markthemac (talk) 00:26, 5 August 2009 (UTC)

Multi-core compatibility[edit]

Do applications which can take advantage of Intel Core 2 Duo processors (or AMD Opteron) need additional coding to be able to utilise the multiple cores of the Cell processor? This information would be a useful addition to the article. (talk) 13:10, 2 January 2008 (UTC)

I think it would be inappropriate to delve into this matter in this article. This is basic computer science, and the appropriate article is porting. Cell/PowerPC processors and AMD/Intel/x86 processors are two completely different processor architectures to begin with. So no, there is exactly zero compatibility of applications between the platforms, multi-core or not. An application that's moved from one platform to another will not run, period.
Every project that moves to Cell needs special treatment. First, make it PowerPC, and that's a big issue in its own right. Second, adapt the code that's going to run in parallel to the highly specialized SPUs, which have their own instruction set architecture. Then fix, tune, adapt, rinse, repeat. So yes, it will need quite some additional coding to move. -- Henriok (talk) 13:41, 2 January 2008 (UTC)

If you were using all 10 SIMD units (2x SIMD on the PPE and 8 on the SPEs) with 90% efficiency on the Cell, you wouldn't even want to dream of porting that (and expecting it to run on anything less than a 20- or 30-core x86). The Cell looks like it's designed for government use, especially in the field of DSPs for advanced satellite systems. (The Cell is very interesting for filtering satellite images on the fly; it can process huge amounts of raw image data.) Markthemac (talk) 00:37, 5 August 2009 (UTC)
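The lane counts being thrown around in this thread are just the 128-bit register width divided by the element size. A back-of-envelope sketch (assuming one 128-bit SIMD operation issued per cycle, which ignores real pipeline and dual-issue details):

```python
# Per-SPE SIMD element throughput at 3.2 GHz, assuming one 128-bit
# SIMD instruction retired per cycle (an upper-bound simplification).
clock_hz = 3.2e9
register_bits = 128

for element_bits in (8, 16, 32):
    lanes = register_bits // element_bits
    gops = clock_hz * lanes / 1e9
    print(f"{element_bits}-bit elements: {lanes} lanes, {gops:.1f} Gop/s")
```

This reproduces the figures argued over below: 16 lanes of 8-bit integers (51.2 Gop/s), 8 lanes of 16-bit (25.6 Gop/s), or 4 lanes of 32-bit (12.8 Gop/s) per SPE.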

POWER architecture[edit]

Does the Power Architecture link towards the beginning actually link to the wrong article? It links to the generic Power Architecture article rather than the IBM POWER architecture article.

...I feel like a geek just for asking this :( —Preceding unsigned comment added by (talk) 20:19, 31 January 2008 (UTC)

It links to the correct article.
Have you read the different articles, IBM POWER and Power Architecture? They should make clear why Cell is "Power Architecture" and not "POWER architecture". The capitalization of the terms is really important: "Power Architecture" is not the same thing as "POWER architecture". As far as IBM is concerned, POWER as an architecture died with POWER2 in the early 1990s, and the following line of processors is "POWER" only in name, since it's really been PowerPC in different flavours and extensions. In 2003-2004, IBM started using "Power Architecture" as a marketing name encompassing all that is POWER and PowerPC. And now there's only Power Architecture, and the technology is governed by the consortium. A few stragglers still use "PowerPC" as a name for the architecture for historic and/or lazy reasons. IBM, Freescale and other companies in an authoritative position haven't used "PowerPC" for some years now. And as I said, POWER as a microprocessor architecture died with POWER2, and its instructions were only included in the successors for backwards compatibility. The PPE in Cell was compliant with the PowerPC spec, version 2.03, and all succeeding Power Architecture specifications up to and beyond the current 2.05. It has never, ever been anything "POWER". For the same reason that "POWER" is dead, "PowerPC" is also dead; it's all "Power Architecture".
These facts are something that isn't well known, and IBM, Freescale and the other folks at are doing an abysmal job of conveying this to the public; they even have trouble making their own people use the correct terms. Marketing just isn't their thing. It was Apple's thing, and when they left the camp there was an immense marketing vacuum that has yet to be filled. -- Henriok (talk) 20:58, 31 January 2008 (UTC)

Cell B.E. Chip Shrinkage (90nm to 65nm to 45nm)[edit]

Found info on the Cell processor shrinking down to 45 nm; it will reduce power consumption by 40% compared to the older 65 nm chips. "Production is due to start in the summer so expect them on the shelves in time for Christmas (08')." There is also a handy graph detailing the differences in power use between the 90 nm, 65 nm and 45 nm chips. I got all this info from here:

OpenGL Implementation[edit]

Gallium3D is not the replacement of Mesa3D; it is, I quote from the official Mesa3D site, "the codename for the new Mesa device driver architecture". The current implementation is far from an OpenGL implementation since, I again quote, "As of February 2008 the driver supports smooth/flat shaded triangle rendering with Z testing and simple texture mapping". This information has nothing, IMHO, to do on the Cell processor page. You can have it added to the Gallium3D page, where it might belong, but not here. This statement (not signed) was by User:Dwarfpower.

Why should the fact that it's only a development version keep this out of the article? It is related to Cell processor usage in open-source software. Cell will be used by IBM in open-source solutions for supercomputers; it's not unrelated... (talk) 23:39, 5 April 2008 (UTC) (Popolon)
A full OpenGL implementation making use of the SPEs to attain performance closer to a hardware implementation than a full software one, or at least a plan to implement such a thing, would be worth mentioning; but a mere partial software prototype is not worth mentioning, IMHO. A vi port to the Power architecture would be more significant. --Dwarfpower (talk) 14:39, 6 April 2008 (UTC)

Article Neutrality[edit]

I separated this discussion from the one below, as I'm more interested in purifying the text and removing weasel words than in introducing new critical viewpoints or discussing controversy. In particular, I edited a section in the intro where the author seemed to be bragging about patents covering the memory architecture ahead of actually describing what makes it unique. The issuance of patents on a new CPU architecture is no more noteworthy than saying that the CPU runs on electricity. (talk) 23:47, 4 February 2010 (UTC)

Software Developer Complaints[edit]

The article does not seem very neutral, to the point that it looks like IBM's marketing department wrote it. Also, there is no mention of the massive amount of industry criticism leveled at the architecture for being so difficult to develop software for ( Also along the same lines, where does it mention that the Cell BE architecture was so disappointing, general-computing-performance-wise, to Apple that they moved their Mac lines to Intel CPUs instead ( I would write a badly needed criticism section for the main article, but with multibillion-dollar vested interests I am sure it wouldn't last long (cough Sony, IBM). The fact is, the market speaks loud and clear, and this revolutionary architecture that was supposed to change the world has failed precisely because one of the design goals seems to have been to make it more expensive and time-consuming for developers to develop software for it. Even its supposed advantages are being done better and faster by GPUs these days. The Cell is very much like the Itanium market-wise: it looked great on paper, but in the real world it will die a long slow death and become irrelevant. How much do you want to bet that even Sony will move to a general-purpose architecture for its PS4? —Preceding unsigned comment added by (talk)

I'm all for a critique section, but I don't think the two examples you linked are representative of the complaints about Cell. First: using Apple as a metric for what The Market thinks is always a mistake, since Apple almost always blazes its own trail; but in this case Apple wanted what everyone else was using in their segment, a desktop/laptop chip and architecture. Apple dissed the whole PowerPC portfolio and what Freescale, IBM and others could deliver; their beef was not with Cell specifically. Secondly: the Activision CEO does not complain about Cell, nor even the PlayStation 3, but rather Sony's business model and pricing. Since there have been complaints about Cell technology-wise, it shouldn't be any problem finding such. The analogy to Itanium seemed apt. -- Henriok (talk) 09:45, 10 August 2009 (UTC)

clarify history[edit]

The history section is informative but a bit hard to follow.

I was looking for info on when Cell BE processors first started shipping to OEMs and when Cell BE products first started shipping, and after reading the history section I'm still not sure.

I suggest writing a simple timeline in list form and filling in the additional text. (talk) 14:27, 25 August 2009 (UTC)

Good suggestion. For OEMs: Mercury was the first that I know of that built Cell-based stuff, and the article says that that agreement with IBM was announced on June 28, 2005. This processor has always been free for IBM to sell to whomever they want, so I guess that IBM was selling to OEMs from day one. When they started shipping: somewhere in late 2005 or early 2006 probably, prototypes at least. -- Henriok (talk) 15:09, 25 August 2009 (UTC)

cell broadband is dead[edit]

I am too lazy to edit the main article, but as anyone who has ever had to program for the Cell Broadband Engine will happily tell you, this architecture has dead-ended. Sure, they will make PS3s for years to come, but IBM is no longer selling or developing this architecture outside of the PS3. Apple was smart to run away from this failed design. Here is the source article if anyone wants to pee on Sony's parade. (talk) 11:50, 5 July 2010 (UTC)

Not selling anymore? So what's this: ? (talk) 13:28, 24 September 2010 (UTC)
Nice, way to put up a 404 link. Really proves your point. Cell BE is dead; good riddance. Good idea, poor implementation; we will hardly miss you.


So does the PlayStation 3 Gravity Grid still exist? The latest PS3 firmware/bios removed linux support. (talk) 10:06, 15 January 2011 (UTC)

It seems as though the PS3GG should have an article of its own, considering its notability. (talk) 10:06, 15 January 2011 (UTC)

Not 25.6, but 12.8 GFLOPS, or 25 or even 50 billion integer operations[edit]

From article:

"Synergistic Processing Elements (SPE)

Each SPE is composed of a "Synergistic Processing Unit", SPU, and a "Memory Flow Controller", MFC (DMA, MMU, and bus interface).[3] An SPE is a RISC processor with 128-bit SIMD organization[4][5][6] for single and double precision instructions. With the current generation of the Cell, each SPE contains a 256 KiB embedded SRAM for instruction and data, called "Local Storage" (not to be mistaken for "Local Memory" in Sony's documents that refer to the VRAM) which is visible to the PPE and can be addressed directly by software. Each SPE can support up to 4 GiB of local store memory. The local store does not operate like a conventional CPU cache since it is neither transparent to software nor does it contain hardware structures that predict which data to load. The SPEs contain a 128-bit, 128-entry register file and measures 14.5 mm2 on a 90 nm process. An SPE can operate on sixteen 8-bit integers, eight 16-bit integers, four 32-bit integers, or four single-precision floating-point numbers in a single clock cycle, as well as a memory operation. Note that the SPU cannot directly access system memory; the 64-bit virtual memory addresses formed by the SPU must be passed from the SPU to the SPE memory flow controller (MFC) to set up a DMA operation within the system address space.

In one typical usage scenario, the system will load the SPEs with small programs (similar to threads), chaining the SPEs together to handle each step in a complex operation. For instance, a set-top box might load programs for reading a DVD, video and audio decoding, and display, and the data would be passed off from SPE to SPE until finally ending up on the TV. Another possibility is to partition the input data set and have several SPEs performing the same kind of operation in parallel. At 3.2 GHz, each SPE gives a theoretical 25.6 GFLOPS of single precision performance."

So you see, 3.2 GHz * 4 (32 bits *4=128bits) =12.8 GFLOPS for one SPE. Or 3.2 * 16 (16*8bit=128 bits)= 51.2 GIPS ( 51.2 Billion 8 bit integer operations per second). It seems only PPE capable 25.6 GFLOPs single precision (32 bits), because can do two operations in one cycle. BTW, Nintendo GameCube CPU can also do two operations in one cycle like PPE, what seems is not common for over IBM CPUs before Gamecube.
Since seems Xbox 360 have more like 3 PPE, then XBOX360 CPU performance is 25.6*3= 76.8 GFLOPS single precision. Sony PS3 CPU performance is 7*12.8+25.6=89.6+25.6=115.2 GFLOPS single precision. From article Xbox 360 hardware there is writen that xbox360 can do 9.6 billion dot products (dot product is 4 multiplications and 3 additions). So 9.6 billion dot products means 9.6*4=38.6 billion multiplications and 9.6*3=28.8 billion additions. But be aware of misunderstanding. 32 bits precision means 15 decimal digits. So actual number of dot products can be very big chance millions dot products (I would bet that 15 floating point operations is one 15 digits decimal number operation). — Preceding unsigned comment added by Versatranitsonlywaytofly (talkcontribs) 11:23, 14 June 2012 (UTC) Dot products used for bump mapping and normal mapping, which is currently hardest part in 3D (even harder than shadows) if not counting not widely used parallax mapping. Since, because of smart z-buffer technicues don't draw unvisible parts of scene, there not very likely that can be done more dot products than pixels on screen per one frame. At 60 FPS there is 640/60=10.67 millions dot products, which is 10.67 dot product for frame of resolution 1000*1000. Why would even someone need 3D graphics accelerator? For fast texturing and fast pixels shifting operations for blur and distortions, which are also bump maps?
You may ask: why is single precision (32-bit) faster than double precision (64-bit), if all the gigaFLOPS need to be divided by 15 anyway? For addition it should be equal, but for multiplication: to multiply two integers from 0 to 9 you need one operation; to multiply two integers from 0 to 99 you need 4 multiplications and 3 additions; from 0 to 999, 9 multiplications and 8 additions; from 0 to 9999, 16 multiplications and 15 additions. No wonder AMD GPUs do double precision 4 times slower than single precision: when the precision doubles, the number of operations grows with the square.
That's simply wrong: SPEs are 25.6 GFLOPS each (see IBM papers) -- (talk) 19:24, 12 November 2013 (UTC)

Post Mortem - pros/cons of architecture

I think it can be argued that the Cell's day has come and is currently sunsetting. Outside of the PS3, it hasn't taken off in any other mass market consumer product (no currently shipping home/office computer or other consumer electronic device uses it). As its architecture was so exotic in its day, I'd like to hear a little of the pros and cons of the chip now that we have a little historical perspective. -- (talk) 04:49, 17 November 2012 (UTC)

Apple rejected the Cell processor...

Whenever someone decides to properly rewrite this article, which should be in the context of why the Cell architecture essentially failed to survive outside of a few niche applications despite having mainstream wholesale adoption via PlayStation 3 (and to a significant extent the Xbox 360!), it should also be noted that IBM desperately tried to push it on Apple:

Had Apple decided to run with Cell instead of switching to Intel, this might have been a very, very different story. — Preceding unsigned comment added by (talk) 07:47, 30 November 2014 (UTC)

We can't speculate on what story that might have been for Apple, and there's not much of a confirmed rejection by Apple. There should probably be a note about Apple not using it, but not much besides that. Wikipedia is not a place for speculation, so with an eventual rewrite, the Apple angle should probably not be the focus of such an article. I am a big supporter of a rewrite though. -- Henriok (talk) 13:43, 1 December 2014 (UTC)

Why did Cell fail?

There isn't any post-PS3 history or post mortem on why it failed. -- (talk) 01:51, 22 March 2015 (UTC)

External links modified

Hello fellow Wikipedians,

I have just added archive links to one external link on Cell (microprocessor). Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true to let others know.

Question? Archived sources still need to be checked

Cheers.—cyberbot II (Talk to my owner) 05:37, 8 January 2016 (UTC)
