
Cell (processor)


Cell Broadband Engine
Designer: STI (Sony, Toshiba and IBM)
Bits: 64-bit
Introduced: November 2006
Version: PowerPC 2.02[1]
Design: RISC
Type: Load–store
Encoding: Fixed/Variable (Book E)
Branching: Condition code
Endianness: Big/Bi-endian

Cell, a shorthand for Cell Broadband Engine Architecture,[a] is a 64-bit multi-core microprocessor and microarchitecture that combines a general-purpose PowerPC core of modest performance with streamlined coprocessing elements[2] which greatly accelerate multimedia and vector processing applications, as well as many other forms of dedicated computation.[2]

It was developed by Sony, Toshiba, and IBM, an alliance known as "STI". The architectural design and first implementation were carried out at the STI Design Center in Austin, Texas, over a four-year period beginning March 2001, on a budget reported by Sony as approaching US$400 million.[3] The first major commercial application of Cell was in Sony's PlayStation 3 game console, released in 2006. In May 2008, the Cell-based IBM Roadrunner supercomputer became the first system on the TOP500 list to sustain 1.0 petaflops on the LINPACK benchmark.[4][5] Mercury Computer Systems also developed designs based on the Cell.

The Cell architecture includes a memory coherence architecture that emphasizes power efficiency, prioritizes bandwidth over low latency, and favors peak computational throughput over the simplicity of program code. For these reasons, Cell is widely regarded as a challenging environment for software development.[6] IBM provided a Linux-based development platform to help developers program for Cell chips.[7]

History

Cell BE as it appears in the PS3 on the motherboard
Peter Hofstee, one of the chief architects of the Cell microprocessor
Michael Gschwind, one of the chief architects of the Cell microprocessor

In mid-2000, Sony Computer Entertainment, Toshiba Corporation, and IBM formed an alliance known as "STI" to design and manufacture the processor.[8]

The STI Design Center opened in March 2001.[9] The Cell was designed over a period of four years, using enhanced versions of the design tools for the POWER4 processor. Over 400 engineers from the three companies worked together in Austin, with critical support from eleven of IBM's design centers.[9] During this period, IBM filed many patents pertaining to the Cell architecture, manufacturing process, and software environment. An early patent version of the Broadband Engine showed a chip package comprising four "Processing Elements", the patent's term for what is now known as the Power Processing Element (PPE). Each Processing Element would contain eight "Synergistic Processing Elements" (SPEs) on the chip. The package was intended to run at a clock speed of 4 GHz; with each of the 32 SPEs providing 32 gigaFLOPS of single-precision (FP32) performance (8 FLOPs per cycle per SPU: 4 from the floating-point unit plus 4 from the integer unit), the Broadband Engine was meant to have 1 teraFLOPS of raw computing power in theory.
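The arithmetic behind the 1 teraFLOPS target follows directly from those per-SPU figures:

\[
8\ \tfrac{\text{FLOPs}}{\text{cycle}} \times 4\ \text{GHz} = 32\ \text{GFLOPS per SPU},\qquad
32\ \text{SPUs} \times 32\ \text{GFLOPS} = 1024\ \text{GFLOPS} \approx 1\ \text{teraFLOPS}.
\]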

The design with 4 PPEs and 32 SPEs was never realized. Instead, Sony and IBM manufactured only a design with one PPE and eight SPEs. This smaller design, the Cell Broadband Engine, or Cell/BE, was fabricated using a 90 nm SOI process.[10]

In March 2007, IBM announced that the 65 nm version of the Cell/BE was in production at its plant in East Fishkill, New York (since sold to GlobalFoundries),[10][11] with Bandai Namco Entertainment using the Cell/BE processor for its System 357 arcade board as well as the subsequent System 369.

In February 2008, IBM announced that it would begin to fabricate Cell processors with the 45 nm process.[12]

In May 2008, IBM introduced the high-performance double-precision floating-point version of the Cell processor, the PowerXCell 8i,[13] at the 65 nm feature size.

In May 2008, an Opteron- and PowerXCell 8i-based supercomputer, the IBM Roadrunner system, became the world's first system to achieve one petaFLOPS, and remained the fastest computer in the world until the third quarter of 2009. The world's three most energy-efficient supercomputers, as ranked by the Green500 list, were likewise based on the PowerXCell 8i.

In August 2009, the 45 nm Cell processor was introduced in concert with Sony's PlayStation 3 Slim.[14]

By November 2009, IBM had discontinued the development of a Cell processor with 32 APUs[15][16] but was still developing other Cell products.[17]

Commercialization


On May 17, 2005, Sony Computer Entertainment confirmed some specifications of the Cell processor that would ship in the then-forthcoming PlayStation 3 console.[18][19][20] This Cell configuration has one PPE on the die, along with eight physical SPEs in silicon.[20] In the PlayStation 3, one SPE is locked out during the test process, a practice that helps to improve manufacturing yields, and another is reserved for the OS, leaving six SPEs free for use by game code.[21] The target clock frequency at introduction was 3.2 GHz.[19] The introductory design was fabricated using a 90 nm SOI process, with initial volume production slated for IBM's facility in East Fishkill, New York.[10]

The relationship between cores and threads is a common source of confusion. The PPE core is dual-threaded and manifests in software as two independent threads of execution, while each active SPE manifests as a single thread. In the PlayStation 3 configuration as described by Sony, the Cell processor thus provides nine independent threads of execution.

On June 28, 2005, IBM and Mercury Computer Systems announced a partnership agreement to build Cell-based computer systems for embedded applications such as medical imaging, industrial inspection, aerospace and defense, seismic processing, and telecommunications.[22] Mercury has since then released blades, conventional rack servers and PCI Express accelerator boards with Cell processors.[22]

In the fall of 2006, IBM released the QS20 blade module, which used two Cell BE processors to deliver tremendous performance in certain applications, reaching a peak of 410 gigaFLOPS of single-precision (FP32) performance per module. The QS22, based on the PowerXCell 8i processor, was used for the IBM Roadrunner supercomputer. Mercury and IBM use the fully utilized Cell processor with eight active SPEs. On April 8, 2008, Fixstars Corporation released a PCI Express accelerator board based on the PowerXCell 8i processor.[23]

Sony's high-performance media computing server ZEGO uses a 3.2 GHz Cell/B.E. processor.

Overview


The Cell Broadband Engine, or Cell as it is more commonly known, is a microprocessor intended as a hybrid of conventional desktop processors (such as the Athlon 64 and Core 2 families) and more specialized high-performance processors, such as NVIDIA and ATI graphics processors (GPUs). The longer name indicates its intended use, namely as a component in current and future online distribution systems; as such it may be utilized in high-definition displays and recording equipment, as well as HDTV systems. Additionally, the processor may be suited to digital imaging systems (medical, scientific, etc.) and physical simulation (e.g., scientific and structural engineering modeling). As used in the PlayStation 3, it has 250 million transistors.[24]

In a simple analysis, the Cell processor can be split into four components: external input and output structures; the main processor, called the Power Processing Element (PPE), a two-way simultaneous-multithreaded PowerPC 2.02 core;[25] eight fully functional co-processors called the Synergistic Processing Elements, or SPEs; and a specialized high-bandwidth circular data bus connecting the PPE, the input/output elements and the SPEs, called the Element Interconnect Bus or EIB.

To achieve the high performance needed for mathematically intensive tasks, such as decoding/encoding MPEG streams, generating or transforming three-dimensional data, or undertaking Fourier analysis of data, the Cell processor marries the SPEs and the PPE via the EIB to give both access, via fully cache-coherent DMA (direct memory access), to main memory and to other external data storage. To make the best of the EIB, and to overlap computation and data transfer, each of the nine processing elements (PPE and SPEs) is equipped with a DMA engine. Since an SPE's load/store instructions can access only its own local scratchpad memory, each SPE depends entirely on DMA to transfer data to and from the main memory and other SPEs' local memories. A DMA operation can transfer either a single block of up to 16 KB, or a list of 2 to 2048 such blocks. One of the major design decisions in the architecture of Cell is the use of DMA as a central means of intra-chip data transfer, with a view to enabling maximal asynchrony and concurrency in data processing inside the chip.[26]
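As an illustration of this DMA-centric model, the following is a minimal SPU-side sketch assuming the spu_mfcio.h intrinsics (mfc_get, mfc_put and the tag-status calls) from IBM's Cell SDK; the buffer names and the placeholder computation are purely hypothetical.

```c
/* Sketch: pull one 16 KB block from main memory into local store, process it,
 * and push the result back.  Assumes the spu_mfcio.h DMA intrinsics that ship
 * with IBM's Cell SDK. */
#include <spu_mfcio.h>
#include <stdint.h>

#define CHUNK 16384                      /* a single DMA transfer is at most 16 KB */

static volatile uint8_t in_buf[CHUNK]  __attribute__((aligned(128)));
static volatile uint8_t out_buf[CHUNK] __attribute__((aligned(128)));

void process_block(uint64_t src_ea, uint64_t dst_ea)
{
    const uint32_t tag = 3;              /* any DMA tag id in the range 0..31 */

    /* DMA the block from its effective (main memory) address into local store. */
    mfc_get(in_buf, src_ea, CHUNK, tag, 0, 0);

    /* Block until every transfer with this tag has completed. */
    mfc_write_tag_mask(1 << tag);
    mfc_read_tag_status_all();

    for (int i = 0; i < CHUNK; i++)      /* placeholder computation */
        out_buf[i] = in_buf[i] ^ 0xFF;

    /* DMA the result back to main memory and wait for completion. */
    mfc_put(out_buf, dst_ea, CHUNK, tag, 0, 0);
    mfc_write_tag_mask(1 << tag);
    mfc_read_tag_status_all();
}
```

Production code would typically double-buffer, issuing the next mfc_get while computing on the previous block, so that transfer and computation overlap as described above.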

The PPE, which is capable of running a conventional operating system, has control over the SPEs and can start, stop, interrupt, and schedule processes running on them. To this end, the PPE has additional instructions relating to the control of the SPEs. Unlike the SPEs, the PPE can read and write main memory and the local memories of the SPEs through the standard load/store instructions. Despite having Turing-complete architectures, the SPEs are not fully autonomous and require the PPE to prime them before they can do any useful work. Since most of the "horsepower" of the system comes from the synergistic processing elements, the use of DMA as the method of data transfer and the limited local memory footprint of each SPE pose a major challenge to software developers who wish to make the most of this horsepower, demanding careful hand-tuning of programs to extract maximal performance from this CPU.

The PPE and bus architecture includes various modes of operation giving different levels of memory protection, allowing areas of memory to be protected from access by specific processes running on the SPEs or the PPE.

Both the PPE and SPE are RISC architectures with a fixed-width 32-bit instruction format. The PPE contains a 64-bit general-purpose register set (GPR), a 64-bit floating-point register set (FPR), and a 128-bit AltiVec register set. The SPE contains 128-bit registers only. These can be used for scalar data types ranging from 8 bits to 64 bits in size or for SIMD computations on a variety of integer and floating-point formats. System memory addresses for both the PPE and SPE are expressed as 64-bit values, for a theoretical address range of 2⁶⁴ bytes (16 exabytes, or 16,777,216 terabytes). In practice, not all of these bits are implemented in hardware. Local store addresses internal to the SPU (Synergistic Processor Unit) are expressed as a 32-bit word. In documentation relating to Cell, a word is always taken to mean 32 bits, a doubleword 64 bits, and a quadword 128 bits.
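To illustrate how these 128-bit registers are used for SIMD work, here is a minimal SPU sketch assuming the spu_intrinsics.h header from IBM's Cell SDK; the function name and loop structure are illustrative only.

```c
/* Sketch of SIMD work on the SPU's 128-bit registers, assuming spu_intrinsics.h
 * from IBM's Cell SDK.  Each "vector float" occupies one 128-bit register and
 * holds four single-precision lanes. */
#include <spu_intrinsics.h>

/* y[i] = a * x[i] + y[i] over n_vectors quadwords, four floats per iteration. */
void saxpy_spu(float a, vector float *x, vector float *y, int n_vectors)
{
    vector float va = spu_splats(a);         /* broadcast the scalar into all 4 lanes */
    for (int i = 0; i < n_vectors; i++)
        y[i] = spu_madd(va, x[i], y[i]);     /* fused multiply-add: 8 FLOPs per vector */
}
```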

PowerXCell 8i


In 2008, IBM announced a revised variant of the Cell called the PowerXCell 8i,[27] available in QS22 Blade Servers from IBM. The PowerXCell is manufactured on a 65 nm process and adds support for up to 32 GB of slotted DDR2 memory, as well as dramatically improving double-precision floating-point performance on the SPEs, from a peak of about 12.8 GFLOPS to 102.4 GFLOPS total for eight SPEs, which, coincidentally, is the same peak performance as the NEC SX-9 vector processor released around the same time. The IBM Roadrunner supercomputer, the world's fastest during 2008–2009, consisted of 12,240 PowerXCell 8i processors, along with 6,562 AMD Opteron processors.[28] PowerXCell 8i-powered supercomputers also occupied all of the top six places on the Green500 list, as the supercomputers with the highest MFLOPS/watt ratios in the world.[29] Besides the QS22 and supercomputers, the PowerXCell processor is also available as an accelerator on a PCI Express card and is used as the core processor in the QPACE project.

Since the PowerXCell 8i removed the Rambus memory interface and added significantly larger DDR2 interfaces and enhanced SPEs, the chip layout had to be reworked, which resulted in both a larger chip die and a larger package.[30]

Architecture


While the Cell chip can have a number of different configurations, the basic configuration is a multi-core chip composed of one "Power Processor Element" ("PPE") (sometimes called "Processing Element", or "PE"), and multiple "Synergistic Processing Elements" ("SPE").[31] The PPE and SPEs are linked together by an internal high speed bus dubbed "Element Interconnect Bus" ("EIB").

Power Processor Element (PPE)


The PPE[32][33][34] is a PowerPC-based, dual-issue, in-order, two-way simultaneous-multithreaded CPU core with a 23-stage pipeline, acting as the controller for the eight SPEs, which handle most of the computational workload. The PPE has limited out-of-order execution capabilities; it can perform loads out of order and has delayed execution pipelines. The PPE works with conventional operating systems due to its similarity to other 64-bit PowerPC processors, while the SPEs are designed for vectorized floating-point code execution. The PPE contains a 32 KiB level 1 instruction cache, a 32 KiB level 1 data cache, and a 512 KiB level 2 cache. The cache line size is 128 bytes in all caches.[27]: 136–137, 141  Additionally, IBM included an AltiVec (VMX) unit[35] that is fully pipelined for single-precision floating point (AltiVec 1 does not support double-precision floating-point vectors), a 32-bit Fixed-Point Unit (FXU) with a 64-bit register file per thread, a Load and Store Unit (LSU), a 64-bit Floating-Point Unit (FPU), a Branch Unit (BRU), and a Branch Execution Unit (BXU).[32] The PPE consists of three main units: the Instruction Unit (IU), the Execution Unit (XU), and the Vector/Scalar Execution Unit (VSU). The IU contains the L1 instruction cache, branch prediction hardware, instruction buffers, and dependency-checking logic. The XU contains the integer execution units (FXU) and the load-store unit (LSU). The VSU contains all of the execution resources for the FPU and VMX. Each PPE can complete two double-precision operations per clock cycle using a scalar fused multiply-add instruction, which translates to 6.4 GFLOPS at 3.2 GHz, or eight single-precision operations per clock cycle with a vector fused multiply-add instruction, which translates to 25.6 GFLOPS at 3.2 GHz.[36]
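The quoted PPE throughput follows directly from the per-cycle operation counts at the 3.2 GHz clock:

\[
2\ \tfrac{\text{FLOPs}}{\text{cycle}} \times 3.2\ \text{GHz} = 6.4\ \text{GFLOPS (double precision)},\qquad
8\ \tfrac{\text{FLOPs}}{\text{cycle}} \times 3.2\ \text{GHz} = 25.6\ \text{GFLOPS (single precision)}.
\]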

Xenon in Xbox 360


The PPE was designed specifically for the Cell processor but during development, Microsoft approached IBM wanting a high-performance processor core for its Xbox 360. IBM complied and made the tri-core Xenon processor, based on a slightly modified version of the PPE with added VMX128 extensions.[37][38]

Synergistic Processing Element (SPE)


Each SPE is a dual-issue, in-order processor composed of a "Synergistic Processing Unit" (SPU)[39] and a "Memory Flow Controller" (MFC), which provides DMA, the MMU, and the bus interface. SPEs do not have any branch prediction hardware (hence there is a heavy burden on the compiler).[40] Each SPE has six execution units divided among odd and even pipelines. The SPU runs a specially developed instruction set (ISA) with 128-bit SIMD organization[35][2][41] for single- and double-precision instructions. In the current generation of the Cell, each SPE contains a 256 KiB embedded SRAM for instructions and data, called "Local Storage" (not to be confused with the "Local Memory" in Sony's documents, which refers to the VRAM), which is visible to the PPE and can be addressed directly by software. Each SPE can support up to 4 GiB of local store memory. The local store does not operate like a conventional CPU cache, since it is neither transparent to software nor does it contain hardware structures that predict which data to load. Each SPE contains a 128-bit, 128-entry register file and measures 14.5 mm² on a 90 nm process. An SPE can operate on sixteen 8-bit integers, eight 16-bit integers, four 32-bit integers, or four single-precision floating-point numbers in a single clock cycle, as well as perform a memory operation. Note that the SPU cannot directly access system memory; the 64-bit virtual memory addresses formed by the SPU must be passed from the SPU to the SPE's memory flow controller (MFC) to set up a DMA operation within the system address space.

In one typical usage scenario, the system will load the SPEs with small programs (similar to threads), chaining the SPEs together to handle each step in a complex operation. For instance, a set-top box might load programs for reading a DVD, video and audio decoding, and display, and the data would be passed from SPE to SPE until finally ending up on the TV. Another possibility is to partition the input data set and have several SPEs performing the same kind of operation in parallel. At 3.2 GHz, each SPE gives a theoretical 25.6 GFLOPS of single-precision performance.
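The 25.6 GFLOPS figure is the usual fused multiply-add accounting over the four single-precision lanes:

\[
4\ \text{lanes} \times 2\ \tfrac{\text{FLOPs}}{\text{lane}\cdot\text{cycle}} \times 3.2\ \text{GHz} = 25.6\ \text{GFLOPS per SPE},\qquad
8 \times 25.6\ \text{GFLOPS} = 204.8\ \text{GFLOPS for eight SPEs}.
\]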

Compared to its personal computer contemporaries, the relatively high overall floating-point performance of a Cell processor seemingly dwarfs the abilities of the SIMD unit in CPUs like the Pentium 4 and the Athlon 64. However, comparing only floating-point abilities of a system is a one-dimensional and application-specific metric. Unlike a Cell processor, such desktop CPUs are more suited to the general-purpose software usually run on personal computers. In addition to executing multiple instructions per clock, processors from Intel and AMD feature branch predictors. The Cell is designed to compensate for this with compiler assistance, in which prepare-to-branch instructions are created. For double-precision floating-point operations, as sometimes used in personal computers and often used in scientific computing, Cell performance drops by an order of magnitude, but still reaches 20.8 GFLOPS (1.8 GFLOPS per SPE, 6.4 GFLOPS per PPE). The PowerXCell 8i variant, which was specifically designed for double-precision, reaches 102.4 GFLOPS in double-precision calculations.[42]

Tests by IBM show that the SPEs can reach 98% of their theoretical peak performance running optimized parallel matrix multiplication.[36]

Toshiba has developed a co-processor powered by four SPEs, but no PPE, called the SpursEngine, designed to accelerate 3D and movie effects in consumer electronics.

Each SPE has a local memory of 256 KB.[43] In total, the SPEs have 2 MB of local memory.

Element Interconnect Bus (EIB)


The EIB is a communication bus internal to the Cell processor which connects the various on-chip system elements: the PPE processor, the memory controller (MIC), the eight SPE coprocessors, and two off-chip I/O interfaces, for a total of 12 participants in the PS3 (the number of SPEs can vary in industrial applications). The EIB also includes an arbitration unit which functions as a set of traffic lights. In some documents, IBM refers to EIB participants as 'units'.

The EIB is presently implemented as a circular ring consisting of four 16-byte-wide unidirectional channels which counter-rotate in pairs. When traffic patterns permit, each channel can convey up to three transactions concurrently. As the EIB runs at half the system clock rate, the effective channel rate is 16 bytes every two system clocks. At maximum concurrency, with three active transactions on each of the four rings, the peak instantaneous EIB bandwidth is 96 bytes per clock (12 concurrent transactions × 16 bytes wide / 2 system clocks per transfer). While this figure is often quoted in IBM literature, it is unrealistic to simply scale this number by processor clock speed; the arbitration unit imposes additional constraints.
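The peak figure works out as follows; scaled by the 3.2 GHz clock, it yields the "greater than 300 GB/s" number discussed under Bandwidth assessment below:

\[
\frac{12\ \text{transactions} \times 16\ \text{bytes}}{2\ \text{system clocks}} = 96\ \tfrac{\text{bytes}}{\text{clock}},\qquad
96\ \tfrac{\text{B}}{\text{clock}} \times 3.2\ \text{GHz} = 307.2\ \text{GB/s}.
\]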

IBM Senior Engineer David Krolak, EIB lead designer, explains the concurrency model:

A ring can start a new op every three cycles. Each transfer always takes eight beats. That was one of the simplifications we made, it's optimized for streaming a lot of data. If you do small ops, it does not work quite as well. If you think of eight-car trains running around this track, as long as the trains aren't running into each other, they can coexist on the track.[44]

Each participant on the EIB has one 16-byte read port and one 16-byte write port. The limit for a single participant is to read and write at a rate of 16 bytes per EIB clock (for simplicity, often regarded as 8 bytes per system clock). Each SPU processor contains a dedicated DMA management queue capable of scheduling long sequences of transactions to various endpoints without interfering with the SPU's ongoing computations; these DMA queues can also be managed locally or remotely, providing additional flexibility in the control model.

Data flows on an EIB channel stepwise around the ring. Since there are twelve participants, the total number of steps around the channel back to the point of origin is twelve. Six steps is the longest distance between any pair of participants. An EIB channel is not permitted to convey data requiring more than six steps; such data must take the shorter route around the circle in the other direction. The number of steps involved in sending the packet has very little impact on transfer latency: the clock speed driving the steps is very fast relative to other considerations. However, longer communication distances are detrimental to the overall performance of the EIB as they reduce available concurrency.

Despite IBM's original desire to implement the EIB as a more powerful cross-bar, the circular configuration they adopted to spare resources rarely represents a limiting factor on the performance of the Cell chip as a whole. In the worst case, the programmer must take extra care to schedule communication patterns where the EIB is able to function at high concurrency levels.

David Krolak explained:

Well, in the beginning, early in the development process, several people were pushing for a crossbar switch, and the way the bus is designed, you could actually pull out the EIB and put in a crossbar switch if you were willing to devote more silicon space on the chip to wiring. We had to find a balance between connectivity and area, and there just was not enough room to put a full crossbar switch in. So we came up with this ring structure which we think is very interesting. It fits within the area constraints and still has very impressive bandwidth.[44]

Bandwidth assessment


At 3.2 GHz, each channel flows at a rate of 25.6 GB/s. Viewing the EIB in isolation from the system elements it connects, achieving twelve concurrent transactions at this flow rate works out to an abstract EIB bandwidth of 307.2 GB/s. Based on this view many IBM publications depict available EIB bandwidth as "greater than 300 GB/s". This number reflects the peak instantaneous EIB bandwidth scaled by processor frequency.[45]

However, other technical restrictions are involved in the arbitration mechanism for packets accepted onto the bus. The IBM Systems Performance group explained:

Each unit on the EIB can simultaneously send and receive 16 bytes of data every bus cycle. The maximum data bandwidth of the entire EIB is limited by the maximum rate at which addresses are snooped across all units in the system, which is one per bus cycle. Since each snooped address request can potentially transfer up to 128 bytes, the theoretical peak data bandwidth on the EIB at 3.2 GHz is 128Bx1.6 GHz = 204.8 GB/s.[36]

This quote apparently represents the full extent of IBM's public disclosure of this mechanism and its impact. The EIB arbitration unit, the snooping mechanism, and interrupt generation on segment or page translation faults are not well described in the documentation set as yet made public by IBM.[citation needed]

In practice, effective EIB bandwidth can also be limited by the ring participants involved. While each of the nine processing cores can sustain 25.6 GB/s of reads and writes concurrently, the memory interface controller (MIC) is tied to a pair of XDR memory channels permitting a maximum flow of 25.6 GB/s for reads and writes combined, and the two I/O controllers are documented as supporting a peak combined input speed of 25.6 GB/s and a peak combined output speed of 35 GB/s.

To add further to the confusion, some older publications cite EIB bandwidth assuming a 4 GHz system clock. This reference frame results in an instantaneous EIB bandwidth figure of 384 GB/s and an arbitration-limited bandwidth figure of 256 GB/s.

All things considered, the theoretical 204.8 GB/s figure most often cited is the best one to bear in mind. The IBM Systems Performance group has demonstrated SPU-centric data flows achieving 197 GB/s on a Cell processor running at 3.2 GHz, so this number is a fair reflection of practice as well.[36]

Memory and I/O controllers


Cell contains a dual channel Rambus XIO macro which interfaces to Rambus XDR memory. The memory interface controller (MIC) is separate from the XIO macro and is designed by IBM. The XIO-XDR link runs at 3.2 Gbit/s per pin. Two 32-bit channels can provide a theoretical maximum of 25.6 GB/s.

The I/O interface, also a Rambus design, is known as FlexIO. The FlexIO interface is organized into 12 lanes, each lane being a unidirectional, 8-bit-wide, point-to-point path. Five of these lanes are inbound to Cell, while the remaining seven are outbound. This provides a theoretical peak bandwidth of 62.4 GB/s (36.4 GB/s outbound, 26 GB/s inbound) at 2.6 GHz. The FlexIO interface can be clocked independently, typically at 3.2 GHz. Four inbound and four outbound lanes support memory coherency.
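Both peak figures can be recovered from the pin counts and per-pin signalling rates; note that the 5.2 Gbit/s per FlexIO pin used below is inferred from the quoted totals (double data rate on the 2.6 GHz clock) rather than an explicitly published number:

\[
\text{XDR: } 2 \times 32\ \text{pins} \times 3.2\ \tfrac{\text{Gbit}}{\text{s}} = 204.8\ \tfrac{\text{Gbit}}{\text{s}} = 25.6\ \text{GB/s}
\]
\[
\text{FlexIO: } 7 \times 8\ \text{pins} \times 5.2\ \tfrac{\text{Gbit}}{\text{s}} = 36.4\ \text{GB/s outbound},\qquad
5 \times 8\ \text{pins} \times 5.2\ \tfrac{\text{Gbit}}{\text{s}} = 26\ \text{GB/s inbound}.
\]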

Applications


Video processing card


Some companies, such as Leadtek, have released PCI-E cards based upon the Cell to allow for "faster than real time" transcoding of H.264, MPEG-2 and MPEG-4 video.[46]

Blade server


On August 29, 2007, IBM announced the BladeCenter QS21. Generating a measured 1.05 giga–floating point operations per second (gigaFLOPS) per watt, with a peak performance of approximately 460 GFLOPS, it was one of the most power-efficient computing platforms of its time. A single BladeCenter chassis can achieve 6.4 tera–floating point operations per second (teraFLOPS), and over 25.8 teraFLOPS in a standard 42U rack.[47]

On May 13, 2008, IBM announced the BladeCenter QS22. The QS22 introduces the PowerXCell 8i processor with five times the double-precision floating point performance of the QS21, and the capacity for up to 32 GB of DDR2 memory on-blade.[48]

IBM has discontinued the Blade server line based on Cell processors as of January 12, 2012.[49]

PCI Express board


Several companies provide PCI-e boards utilising the IBM PowerXCell 8i. The performance is reported as 179.2 GFLOPS single-precision and 89.6 GFLOPS double-precision at 2.8 GHz.[50][51]

Console video games


Sony's PlayStation 3 video game console was the first production application of the Cell processor, clocked at 3.2 GHz and containing seven of the eight SPEs operational, a configuration that allowed Sony to increase the yield of processor manufacture. Only six of the seven SPEs are accessible to developers, as one is reserved by the OS.[21]

Home cinema

B-CAS cards in a Toshiba Cell Regza set-top box, based on the Cell Broadband Engine

Toshiba has produced HDTVs using Cell. They presented a system to decode 48 standard definition MPEG-2 streams simultaneously on a 1920×1080 screen.[52][53] This can enable a viewer to choose a channel based on dozens of thumbnail videos displayed simultaneously on the screen.

Laptop PCs


Toshiba produced a laptop, the Qosmio G55, released in 2008, that contains Cell technology embedded in it; its main CPU, however, is an Intel Core x86-based chip, as is common on Toshiba computers.[54]

Supercomputing


IBM's supercomputer, IBM Roadrunner, was a hybrid of general-purpose x86-64 Opteron and Cell processors. This system took the #1 spot on the June 2008 TOP500 list as the first supercomputer to run at petaFLOPS speeds, achieving a sustained 1.026 petaFLOPS using the standard LINPACK benchmark. IBM Roadrunner used the PowerXCell 8i version of the Cell processor, manufactured using 65 nm technology, whose enhanced SPUs can handle double-precision calculations in the 128-bit registers, reaching 102 GFLOPS of double-precision performance per chip.[55][56]

Cluster computing


Clusters of PlayStation 3 consoles are an attractive alternative to high-end systems based on Cell blades. The Innovative Computing Laboratory, a group led by Jack Dongarra in the Computer Science Department at the University of Tennessee, investigated such an application in depth.[57] Terrasoft Solutions sold 8-node and 32-node PS3 clusters with Yellow Dog Linux pre-installed, an implementation of Dongarra's research.

As first reported by Wired on October 17, 2007,[58] an interesting application of the PlayStation 3 in a cluster configuration was implemented by astrophysicist Gaurav Khanna, of the Physics Department at the University of Massachusetts Dartmouth, who replaced time used on supercomputers with a cluster of eight PlayStation 3s. Subsequently, the next generation of this machine, now called the PlayStation 3 Gravity Grid, uses a network of 16 machines and exploits the Cell processor for the intended application, which is binary black hole coalescence using perturbation theory. In particular, the cluster performs astrophysical simulations of large supermassive black holes capturing smaller compact objects and has generated numerical data that has been published multiple times in the relevant scientific research literature.[59] The Cell processor version used by the PlayStation 3 has a main CPU and six SPEs available to the user, giving the Gravity Grid machine a net of 16 general-purpose processors and 96 vector processors. The machine has a one-time cost of $9,000 to build and is adequate for black-hole simulations which would otherwise cost $6,000 per run on a conventional supercomputer. The black hole calculations are not memory-intensive and are highly localizable, and so are well-suited to this architecture. Khanna claims that the cluster's performance exceeds that of a 100+ Intel Xeon core based traditional Linux cluster on his simulations. The PS3 Gravity Grid gathered significant media attention through 2007,[60] 2008,[61][62] 2009,[63][64][65] and 2010.[66][67]

The Computational Biochemistry and Biophysics Lab at the Universitat Pompeu Fabra in Barcelona deployed a BOINC system called PS3GRID[68] in 2007 for collaborative computing based on the CellMD software, the first designed specifically for the Cell processor.

The United States Air Force Research Laboratory has deployed a PlayStation 3 cluster of over 1700 units, nicknamed the "Condor Cluster", for analyzing high-resolution satellite imagery. The Air Force claims the Condor Cluster would be the 33rd largest supercomputer in the world in terms of capacity.[69] The lab has opened up the supercomputer for use by universities for research.[70]

Distributed computing


With the help of the computing power of over half a million PlayStation 3 consoles, the distributed computing project Folding@home has been recognized by Guinness World Records as the most powerful distributed network in the world. The first record was achieved on September 16, 2007, as the project surpassed one petaFLOPS, which had never previously been attained by a distributed computing network. Additionally, the collective efforts enabled PS3 alone to reach the petaFLOPS mark on September 23, 2007. In comparison, the world's second-most powerful supercomputer at the time, IBM's Blue Gene/L, performed at around 478.2 teraFLOPS, which means Folding@home's computing power is approximately twice Blue Gene/L's (although the CPU interconnect in Blue Gene/L is more than one million times faster than the mean network speed in Folding@home). As of May 7, 2011, Folding@home runs at about 9.3 x86 petaFLOPS, with 1.6 petaFLOPS generated by 26,000 active PS3s alone.

Mainframes


IBM announced on April 25, 2007, that it would begin integrating its Cell Broadband Engine Architecture microprocessors into the company's System z line of mainframes.[71] This led to the so-called "gameframe".

Password cracking


The architecture of the processor makes it better suited to hardware-assisted cryptographic brute-force attack applications than conventional processors.[72]

Software engineering


Due to the flexible nature of the Cell, there are several possible ways to utilize its resources, not limited to the computing paradigms below:[73]

Job queue


The PPE maintains a job queue, schedules jobs in SPEs, and monitors progress. Each SPE runs a "mini kernel" whose role is to fetch a job, execute it, and synchronize with the PPE.
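A hedged sketch of the SPE side of this pattern, assuming the mailbox and DMA intrinsics from spu_mfcio.h in IBM's Cell SDK; the job-descriptor layout, the use of address 0 as a shutdown signal, and the assumption that descriptors live in the low 4 GB of the effective address space are illustrative choices, not part of any published mini kernel.

```c
/* Sketch of an SPE "mini kernel" loop: the PPE posts the effective address of a
 * job descriptor to the SPU's inbound mailbox; the SPU DMAs the descriptor into
 * local store, executes it, and reports completion through the outbound mailbox. */
#include <spu_mfcio.h>
#include <stdint.h>

struct job {                                   /* hypothetical descriptor layout */
    uint64_t src_ea, dst_ea;
    uint32_t size, opcode;
} __attribute__((aligned(128)));

static struct job job_ls;                      /* descriptor copy in local store */

int main(void)
{
    const uint32_t tag = 1;
    for (;;) {
        /* Block until the PPE writes a 32-bit message (here: a job address,
         * assumed to lie in the low 4 GB of the effective address space). */
        uint32_t job_ea = spu_read_in_mbox();
        if (job_ea == 0)                       /* 0 used as a "shut down" signal */
            break;

        /* Fetch the descriptor from main memory into local store. */
        mfc_get(&job_ls, job_ea, sizeof(job_ls), tag, 0, 0);
        mfc_write_tag_mask(1 << tag);
        mfc_read_tag_status_all();

        /* ... execute the job described by job_ls ... */

        spu_write_out_mbox(job_ea);            /* tell the PPE this job is done */
    }
    return 0;
}
```

On the PPE side, libspe2's spe_in_mbox_write and spe_out_mbox_read would post job addresses and collect completions.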

Self-multitasking of SPEs


The mini kernel and scheduling are distributed across the SPEs. Tasks are synchronized using mutexes or semaphores, as in a conventional operating system. Ready-to-run tasks wait in a queue for an SPE to execute them. The SPEs use shared memory for all tasks in this configuration.

Stream processing


Each SPE runs a distinct program. Data comes from an input stream and is sent to the SPEs. When an SPE finishes processing, the output data is sent to an output stream.

This provides a flexible and powerful architecture for stream processing, and allows explicit scheduling for each SPE separately. Other processors are also able to perform streaming tasks but are limited by the kernel loaded.

Open source software development


In 2005, patches enabling Cell support in the Linux kernel were submitted for inclusion by IBM developers.[74] Arnd Bergmann (one of the developers of the aforementioned patches) also described the Linux-based Cell architecture at LinuxTag 2005.[75] As of release 2.6.16 (March 20, 2006), the Linux kernel officially supports the Cell processor.[76]

Both PPE and SPEs are programmable in C/C++ using a common API provided by libraries.
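A minimal PPE-side sketch using the libspe2 API that ships with IBM's Cell SDK, loading and running a single SPE program; the executable name "spe_kernel" is hypothetical and error handling is kept to a bare minimum.

```c
/* Load an SPE executable and run it to completion in the calling thread. */
#include <libspe2.h>
#include <stdio.h>

int main(void)
{
    spe_program_handle_t *prog = spe_image_open("spe_kernel");
    if (!prog) { perror("spe_image_open"); return 1; }

    spe_context_ptr_t spe = spe_context_create(0, NULL);
    if (!spe) { perror("spe_context_create"); return 1; }

    if (spe_program_load(spe, prog)) { perror("spe_program_load"); return 1; }

    unsigned int entry = SPE_DEFAULT_ENTRY;
    spe_stop_info_t stop_info;
    /* Blocks until the SPE program stops. */
    if (spe_context_run(spe, &entry, 0, NULL, NULL, &stop_info) < 0)
        perror("spe_context_run");

    spe_context_destroy(spe);
    spe_image_close(prog);
    return 0;
}
```

In practice, one POSIX thread is usually created per SPE context, since spe_context_run blocks the calling thread until the SPE program stops.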

Fixstars Solutions provides Yellow Dog Linux for IBM and Mercury Cell-based systems, as well as for the PlayStation 3.[77] Terra Soft strategically partnered with Mercury to provide a Linux Board Support Package for Cell, and support and development of software applications on various other Cell platforms, including the IBM BladeCenter JS21 and Cell QS20, and Mercury Cell-based solutions.[78] Terra Soft also maintains the Y-HPC (High Performance Computing) Cluster Construction and Management Suite and Y-Bio gene sequencing tools. Y-Bio is built upon the RPM Linux standard for package management, and offers tools which help bioinformatics researchers conduct their work with greater efficiency.[79] IBM has developed a pseudo-filesystem for Linux coined "Spufs" that simplifies access to and use of the SPE resources. IBM is currently maintaining a Linux kernel and GDB ports, while Sony maintains the GNU toolchain (GCC, binutils).[80][81]

In November 2005, IBM released a "Cell Broadband Engine (CBE) Software Development Kit Version 1.0", consisting of a simulator and assorted tools, to its web site. Development versions of the latest kernel and tools for Fedora Core 4 are maintained at the Barcelona Supercomputing Center website.[82]

In August 2007, Mercury Computer Systems released a Software Development Kit for PlayStation 3 for High-Performance Computing.[83]

In November 2007, Fixstars Corporation released the new "CVCell" module, aiming to accelerate several important OpenCV APIs for Cell. In a series of software calculation tests, they recorded execution times on a 3.2 GHz Cell processor that were between 6 and 27 times faster than the same software on a 2.4 GHz Intel Core 2 Duo.[84]

In October 2009, IBM released an OpenCL driver for POWER6 and the Cell/B.E. This allows programs written in the cross-platform API to be easily run on the Cell processor.[85]


Illustrations of the different generations of Cell/B.E. processors and the PowerXCell 8i. The images are not to scale; all Cell/B.E. packages measure 42.5×42.5 mm, while the PowerXCell 8i package measures 47.5×47.5 mm.


Notes

  1. ^ CBEA in full or Cell BE in part.

References

  1. ^ "PowerPC Architecture Book, Version 2.02". IBM. November 16, 2005. Archived from the original on November 29, 2020.
  2. ^ a b c Gschwind, Michael; Hofstee, H. Peter; Flachs, Brian; Hopkins, Martin; Watanabe, Yukio; Yamazaki, Takeshi (March–April 2006). "Synergistic Processing in Cell's Multicore Architecture" (PDF). IEEE Micro. 26 (2). IEEE: 10–24. doi:10.1109/MM.2006.41. S2CID 17834015.
  3. ^ "Cell Designer talks about PS3 and IBM Cell Processors". Archived from the original on August 21, 2006. Retrieved March 22, 2007.
  4. ^ Gaudin, Sharon (June 9, 2008). "IBM's Roadrunner smashes 4-minute mile of supercomputing". Computerworld. Archived from the original on December 24, 2008. Retrieved June 10, 2008.
  5. ^ Fildes, Jonathan (June 9, 2008). "Supercomputer sets petaflop pace". BBC News. Retrieved June 9, 2008.
  6. ^ Shankland, Stephen (February 22, 2006). "Octopiler seeks to arm Cell programmers". CNET. Retrieved March 22, 2007.
  7. ^ "Cell Broadband Engine Software Development Kit Version 1.0". LWN. November 10, 2005. Retrieved March 22, 2007.
  8. ^ Krewell, Kevin (February 14, 2005). "Cell Moves Into the Limelight". Microprocessor Report.
  9. ^ a b "Introduction to the Cell multiprocessor". IBM Journal of Research and Development. August 7, 2005. Archived from the original on February 28, 2007. Retrieved March 22, 2007.
  10. ^ a b c "IBM Produces Cell Processor Using New Fabrication Technology". X-bit labs. Archived from the original on March 15, 2007. Retrieved March 12, 2007.
  11. ^ "65nm CELL processor production started". PlayStation Universe. January 30, 2007. Archived from the original on February 2, 2007. Retrieved May 18, 2007.
  12. ^ Stokes, Jon (February 7, 2008). "IBM shrinks Cell to 45nm. Cheaper PS3s will follow". Arstechnica.com. Retrieved September 19, 2012.
  13. ^ "IBM Offers Higher Performance Computing Outside the Lab". IBM. Retrieved May 15, 2008.
  14. ^ "Sony answers our questions about the new PlayStation 3". Ars Technica. August 18, 2009. Retrieved August 19, 2009.
  15. ^ "Will Roadrunner Be the Cell's Last Hurrah?". October 27, 2009. Archived from the original on October 31, 2009.
  16. ^ "SC09: IBM lässt Cell-Prozessor auslaufen". HeiseOnline. November 20, 2009. Retrieved November 21, 2009.
  17. ^ "IBM have not stopped Cell processor development". DriverHeaven.net. November 23, 2009. Archived from the original on November 25, 2009. Retrieved November 24, 2009.
  18. ^ Becker, David (February 7, 2005). "PlayStation 3 chip has split personality". CNET. Retrieved May 18, 2007.
  19. ^ a b Thurrott, Paul (May 17, 2005). "Sony Ups the Ante with PlayStation 3". WindowsITPro. Archived from the original on September 30, 2007. Retrieved March 22, 2007.
  20. ^ a b Roper, Chris (May 17, 2005). "E3 2005: Cell Processor Technology Demos". IGN. Retrieved March 22, 2007.
  21. ^ a b Martin Linklater. "Optimizing Cell Core". Game Developer Magazine, April 2007. pp. 15–18. To increase fabrication yields, Sony ships PlayStation 3 Cell processors with only seven working SPEs. And from those seven, one SPE will be used by the operating system for various tasks, This leaves six SPEs and 1 PPE for game programmers to use.
  22. ^ a b "Mercury Wins IBM PartnerWorld Beacon Award". Supercomputing Online. April 12, 2007. Retrieved May 18, 2007.[dead link]
  23. ^ "Fixstars Releases Accelerator Board Featuring the PowerXCell 8i". Fixstars Corporation. April 8, 2008. Archived from the original on January 5, 2009. Retrieved August 18, 2008.
  24. ^ "A Glimpse Inside The Cell Processor". Gamasutra. July 13, 2006. Retrieved June 19, 2019.
  25. ^ Koranne, Sandeep (July 15, 2009). "Chapter 2 - The Power Processing Element (PPE)". Practical Computing on the Cell Broadband Engine. Springer Science+Business Media. p. 17. doi:10.1007/978-1-4419-0308-2_2. ISBN 978-1-4419-0307-5.
  26. ^ Gschwind, Michael (2006). "Chip multiprocessing and the cell broadband engine". Proceedings of the 3rd conference on Computing frontiers - CF '06. ACM. pp. 1–8. doi:10.1145/1128022.1128023. ISBN 1595933026. S2CID 14226551. Retrieved June 29, 2008.
  27. ^ a b Cell Broadband Engine Programming Handbook Including the PowerXCell 8i Processor (PDF). Version 1.11. IBM. May 12, 2008. Archived from the original (PDF) on March 11, 2018. Retrieved March 10, 2018.
  28. ^ "IBM announces PowerXCell 8i, QS22 blade server". Beyond3D. May 2008. Archived from the original on June 16, 2008. Retrieved June 10, 2008.
  29. ^ "The Green500 List - November 2009". Archived from the original on February 23, 2011.
  30. ^ "Packaging the Cell Broadband Engine Microprocessor for Supercomputer Applications" (PDF). Archived from the original (PDF) on January 4, 2014. Retrieved January 4, 2014.
  31. ^ "Cell Microprocessor Briefing". IBM, Sony Computer Entertainment Inc., Toshiba Corp. February 7, 2005.
  32. ^ a b Kim, Hyesoon (Spring 2011). "CS4803DGC Design and Programming of Game Console" (PDF).
  33. ^ Koranne, Sandeep (2009). Practical Computing on the Cell Broadband Engine. Springer Science+Business Media. p. 19. ISBN 9781441903082.
  34. ^ Hofstee, H. Peter (2005). "All About the Cell Processor" (PDF). Archived from the original (PDF) on September 6, 2011.
  35. ^ a b "Power Efficient Processor Design and the Cell Processor" (PDF). IBM. February 16, 2005. Archived from the original (PDF) on April 26, 2005. Retrieved June 12, 2005.
  36. ^ a b c d Chen, Thomas; Raghavan, Ram; Dale, Jason; Iwata, Eiji (November 29, 2005). "Cell Broadband Engine Architecture and its first implementation". IBM developerWorks. Archived from the original on October 27, 2012. Retrieved September 9, 2012.
  37. ^ Alexander, Leigh (January 16, 2009). "Processing The Truth: An Interview With David Shippy". Gamasutra.
  38. ^ Last, Jonathan V. (December 30, 2008). "Playing the Fool". Wall Street Journal.
  39. ^ SPU Application Binary Interface Specification (PDF). July 18, 2008. Archived from the original (PDF) on November 18, 2014. Retrieved January 24, 2015.
  40. ^ "IBM Research - Cell". IBM. Archived from the original on June 14, 2005. Retrieved June 11, 2005.
  41. ^ "A novel SIMD architecture for the Cell heterogeneous chip-multiprocessor" (PDF). Hot Chips 17. August 15, 2005. Archived from the original (PDF) on July 9, 2008. Retrieved January 1, 2006.
  42. ^ "Cell successor with turbo mode - PowerXCell 8i". PPCNux. November 2007. Archived from the original on January 10, 2009. Retrieved June 10, 2008.
  43. ^ "Supporting OpenMP on Cell" (PDF). IBM T. J Watson Research. Archived from the original (PDF) on January 8, 2019.
  44. ^ a b "Meet the experts: David Krolak on the Cell Broadband Engine EIB bus". IBM. December 6, 2005. Retrieved March 18, 2007.
  45. ^ "Cell Multiprocessor Communication Network: Built for Speed" (PDF). IEEE. Archived from the original (PDF) on January 7, 2007. Retrieved March 22, 2007.
  46. ^ "Leadtek PxVC1100 MPEG-2/H.264 Transcoding Card". November 12, 2009.
  47. ^ "IBM Doubles Down on Cell Blade" (Press release). Armonk, New York: IBM. August 29, 2007. Retrieved July 19, 2017.
  48. ^ "IBM Offers High Performance Computing Outside the Lab" (Press release). Armonk, New York: IBM. May 13, 2008. Retrieved July 19, 2017.
  49. ^ Morgan, Timothy Prickett (June 28, 2011). "IBM to snuff last Cell blade server". The Register. Retrieved July 19, 2017.
  50. ^ "Fixstars Press Release". Archived from the original on January 5, 2009. Retrieved August 18, 2008.
  51. ^ "Cell-based coprocessor card runs Linux". Archived from the original on May 2, 2009.
  52. ^ "Toshiba Demonstrates Cell Microprocessor Simultaneously Decoding 48 MPEG-2 Streams". Tech-On!. April 25, 2005.
  53. ^ "Winner: Multimedia Monster". IEEE Spectrum. January 1, 2006. Archived from the original on January 18, 2006. Retrieved January 22, 2006.
  54. ^ Eaton, Kit (July 15, 2008). "Toshiba Qosmio G55 is First Laptop With Cell Processor Aboard". Gizmodo. Retrieved November 22, 2024.
  55. ^ "Beyond a Single Cell" (PDF). Los Alamos National Laboratory. Archived from the original (PDF) on July 8, 2009. Retrieved April 6, 2017.
  56. ^ Williams, Samuel; Shalf, John; Oliker, Leonid; Husbands, Parry; Kamil, Shoaib; Yelick, Katherine (2005). "The Potential of the Cell Processor for Scientific Computing". ACM Computing Frontiers. Retrieved April 6, 2017.
  57. ^ "SCOP3: A Rough Guide to Scientific Computing On the PlayStation 3" (PDF). Computer Science Department, University of Tennessee. Archived from the original (PDF) on October 15, 2008. Retrieved May 8, 2007.
  58. ^ Gardiner, Bryan (October 17, 2007). "Astrophysicist Replaces Supercomputer with Eight PlayStation 3s". Wired. Retrieved October 17, 2007.
  59. ^ "PS3 Gravity Grid". Gaurav Khanna, Associate Professor, College of Engineering, University of Massachusetts Dartmouth.
  60. ^ "PS3 cluster creates homemade, cheaper supercomputer". October 24, 2007.
  61. ^ Highfield, Roger (February 17, 2008). "Why scientists love games consoles". The Daily Telegraph. London. Archived from the original on September 6, 2009.
  62. ^ Peckham, Matt (December 23, 2008). "Nothing Escapes the Pull of a PlayStation 3, Not Even a Black Hole". The Washington Post.
  63. ^ Malik, Tariq (January 28, 2009). "Playstation 3 Consoles Tackle Black Hole Vibrations". Space.com.
  64. ^ Lyden, Jacki (February 21, 2009). "Playstation 3: A Discount Supercomputer?". NPR.
  65. ^ Wallich, Paul (April 1, 2009). "The Supercomputer Goes Personal". IEEE Spectrum.
  66. ^ "The PlayStation powered super-computer". BBC News. September 4, 2010.
  67. ^ Farrell, John (November 12, 2010). "Black Holes and Quantum Loops: More Than Just a Game". Forbes.
  68. ^ "PS3GRID.net".
  69. ^ "Defense Department discusses new Sony PlayStation supercomputer". November 30, 2010.
  70. ^ "PlayStation 3 Clusters Providing Low-Cost Supercomputing to Universities". Archived from the original on May 14, 2013.
  71. ^ "IBM Mainframes Go 3-D". eWeek. April 26, 2007. Retrieved May 18, 2007.
  72. ^ "PlayStation speeds password probe". BBC News. November 30, 2007. Retrieved January 17, 2011.
  73. ^ "CELL: A New Platform for Digital Entertainment". Sony Computer Entertainment Inc. March 9, 2005. Archived from the original on October 28, 2005.
  74. ^ Bergmann, Arnd (June 21, 2005). "ppc64: Introduce Cell/BPA platform, v3". Retrieved March 22, 2007.
  75. ^ "The Cell Processor Programming Model". LinuxTag 2005. Archived from the original on November 18, 2005. Retrieved June 11, 2005.
  76. ^ Shankland, Stephen (March 21, 2006). "Linux gets built-in Cell processor support". CNET. Retrieved March 22, 2007.
  77. ^ "Terra Soft to Provide Linux for PLAYSTATION3". Archived from the original on March 30, 2009.
  78. ^ Terra Soft - Linux for Cell, PlayStation PS3, QS20, QS21, QS22, IBM System p, Mercury Cell, and Apple PowerPC Archived February 23, 2007, at the Wayback Machine
  79. ^ "Y-Bio". August 31, 2007. Archived from the original on September 2, 2007.
  80. ^ "Arnd Bergmann on Cell". IBM developerWorks. June 25, 2005.
  81. ^ An Open Source Environment for Cell Broadband Engine System Software, IEEE Computer, 40(6), June 2007, https://ieeexplore.ieee.org/document/4249810
  82. ^ "Linux on Cell BE-based Systems". Barcelona Supercomputing Center. Archived from the original on March 8, 2007. Retrieved March 22, 2007.
  83. ^ "Mercury Computer Systems Releases Software Development Kit for PLAYSTATION(R)3 for High-Performance Computing" (Press release). Mercury Computer Systems. August 3, 2007. Archived from the original on August 18, 2007.
  84. ^ ""CVCell" - Module developed by Fixstars that accelerates OpenCV Library for the Cell/B.E. processor". Fixstars Corporation. November 28, 2007. Archived from the original on July 17, 2010. Retrieved December 12, 2008.
  85. ^ "IBM Releases OpenCL Drivers for POWER6 and Cell/B.E." The Khronos Group. September 2, 2023.