Multi-core processor: Difference between revisions

From Wikipedia, the free encyclopedia
Revision as of 19:33, 20 February 2007

Diagram of an Intel Core 2 dual core processor, with CPU-local Level 1 caches, and a shared, on-die Level 2 cache.
AMD X2 3600 dual core processor

A multi-core microprocessor is one that combines two or more independent processors into a single package, often a single integrated circuit (IC). A dual-core device contains two independent microprocessors. In general, multi-core microprocessors allow a computing device to exhibit some form of thread-level parallelism (TLP) without including multiple microprocessors in separate physical packages. This form of TLP is often known as chip-level multiprocessing.

Terminology

There is some discrepancy in how the terms "multi-core" and "dual-core" are defined. Most commonly they refer to some sort of central processing unit (CPU), but they are sometimes also applied to DSPs and SoCs. Additionally, some use these terms only for multi-core microprocessors manufactured on the same integrated circuit die, preferring to call separate microprocessor dies in the same package by another name, such as "multi-chip module", "double core", or even "twin core". This article uses both "multi-core" and "dual-core" to refer to CPUs manufactured on the same integrated circuit, unless otherwise noted.


The first step in the multi-core revolution

Intel® Pentium® processor Extreme Edition die

In April 2005, Intel announced the Intel® Pentium® processor Extreme Edition, featuring an Intel® dual-core processor, which can provide immediate advantages for people looking to buy systems that boost multitasking computing power and improve the throughput of multithreaded applications. An Intel dual-core processor consists of two complete execution cores in one physical processor (right), both running at the same frequency. Both cores share the same packaging and the same interface with the chipset/memory. Overall, an Intel dual-core processor offers a way of delivering more capabilities while balancing energy-efficient performance, and is the first step in the multi-core processor future.

An Intel dual-core processor-based PC will enable new computing experiences as it delivers value by providing additional computing resources that expand the PC's capabilities in the form of higher throughput and simultaneous computing. Imagine that a dual-core processor is like a four-lane highway—it can handle up to twice as many cars as its two-lane predecessor without making each car drive twice as fast. Similarly, with an Intel dual-core processor-based PC, people can perform multiple tasks such as downloading music and gaming simultaneously.

When combined with Hyper-Threading Technology¹ (HT Technology), the Intel dual-core processor is the next step in the evolution of high-performance computing. Intel dual-core products supporting Hyper-Threading Technology can process four software threads simultaneously by more efficiently using resources that otherwise may sit idle.

By introducing its first dual-core processor for desktop PCs, Intel continues its commitment and investment in PC innovation as enthusiasts run ever-more demanding applications. A new Intel dual-core processor-based PC gives people the flexibility and performance to handle robust content creation or intense gaming while simultaneously managing background tasks such as virus scanning and downloading. Cutting-edge gamers can play the latest titles and experience ultra-realistic effects and gameplay. Entertainment enthusiasts will be able to create and improve digital content while encoding other content in the background.

The new Intel® Core™ Duo processors have ushered in a new era in processor architecture design in which multi-core processors become the standard for delivering greater performance, improved performance per watt, and new capabilities across Intel's desktop, mobile, and server platforms. The Intel dual-core products also represent a vital first step on the road to realizing Platform 2015, Intel's vision for the future of computing and the evolving processor and platform architectures that support it.

Development motivation

While CMOS manufacturing technology continues to improve, reducing the size of single gates, the physical limits of semiconductor-based microelectronics have become a major design concern. These physical limitations can cause significant heat dissipation and data synchronization problems. The demand for more complex and capable microprocessors leads CPU designers to use various methods of increasing performance. Some instruction-level parallelism (ILP) methods, such as superscalar pipelining, are suitable for many applications but are inefficient for others that tend to contain difficult-to-predict code. Many applications are better suited to thread-level parallelism (TLP) methods, and using multiple independent CPUs is one common way to increase a system's overall TLP. A combination of increased available die space due to refined manufacturing processes and the demand for increased TLP is the logic behind the creation of multi-core CPUs.

Commercial incentives

Several business motives drive the development of dual-core architectures. Since symmetric multiprocessing (SMP) designs have long been implemented using discrete CPUs, the issues of implementing the architecture and supporting it in software are well known. Additionally, utilizing a proven processing core design (e.g. Freescale's e700 core) without architectural changes reduces design risk significantly. Finally, the connotation of the terminology "dual-core" (and other multiples) lends itself to marketing efforts.

Additionally, for general-purpose processors, much of the motivation for multi-core processors comes from the increasing difficulty of improving processor performance by raising the operating frequency (frequency scaling). To continue delivering regular performance improvements for general-purpose processors, manufacturers such as Intel and AMD have turned to multi-core designs, sacrificing lower manufacturing costs in exchange for higher performance in some applications and systems.

Multi-core architectures are being developed, but so are the alternatives. An especially strong contender for established markets is to integrate more peripheral functions into the chip.

Advantages

  • The proximity of multiple CPU cores on the same die has the advantage that the cache coherency circuitry can operate at a much higher clock rate than is possible if the signals have to travel off-chip, so combining equivalent CPUs on a single die significantly improves the performance of cache snoop (see bus snooping) operations. Put simply, because signals between different CPUs travel shorter distances, those signals degrade less. These higher-quality signals allow more data to be sent in a given time period, since individual signals can be shorter and do not need to be repeated as often.
  • Assuming that the die can fit into the package, physically, the multi-core CPU designs require much less Printed Circuit Board (PCB) space than multi-chip SMP designs.
  • A dual-core processor uses slightly less power than two coupled single-core processors, principally because less power is needed to drive signals external to the chip and because the smaller silicon process geometry allows the cores to operate at lower voltages; the shorter on-die signal paths also reduce latency. Furthermore, the cores share some circuitry, such as the L2 cache and the interface to the front-side bus (FSB).
  • In terms of competing technologies for the available silicon die area, multi-core design can make use of proven CPU core library designs and produce a product with lower risk of design error than devising a new wider core design. Also, adding more cache suffers from diminishing returns.

Disadvantages

  • In addition to operating system (OS) support, adjustments to existing software are required to maximize utilization of the computing resources provided by multi-core processors. Also, the ability of multi-core processors to increase application performance depends on the use of multiple threads within applications. For example, most current (2006) video games will run faster on a 3 GHz single-core processor than on a 2 GHz dual-core processor (of the same core architecture), despite the dual-core theoretically having more processing power, because they are incapable of efficiently using more than one core at a time [1] [2].
  • Integrating multiple cores onto a single chip drives production yields down, and such chips are more difficult to manage thermally than lower-density single-chip designs.
  • From an architectural point of view, ultimately, single CPU designs may make better use of the silicon surface area than multiprocessing cores, so a development commitment to this architecture may carry the risk of obsolescence.
  • Raw processing power is not the only constraint on system performance. Two processing cores sharing the same system bus and memory bandwidth limit the real-world performance advantage. If a single core is close to being memory-bandwidth limited, going dual-core might only give a 30% to 70% improvement. If memory bandwidth is not a problem, a 90% improvement can be expected. Conversely, an application that was limited by communication between two discrete CPUs could run more than twice as fast (over 100% improvement) on a single dual-core chip, since on-die communication is much faster.
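The rough speedup ranges above follow the pattern captured by Amdahl's law (the article does not name it; it is invoked here purely as an illustrative assumption): if only a fraction of a workload scales across cores, the rest caps the overall gain.

```python
def speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of the work scales."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A workload that is only 50% parallelizable (e.g. heavily
# memory-bound) gains little from a second core, while a
# 95%-parallel workload approaches the ideal 2x.
print(round(speedup(0.5, 2), 2))   # 1.33
print(round(speedup(0.95, 2), 2))  # 1.9
```

The numbers are consistent with the 30%–90% range quoted above: the more of the work that is serialized behind a shared bottleneck, the smaller the benefit of the second core.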

Hardware trend

  • Multi-core to many-core: from dual-, quad-, eight-core to tens or even hundreds of cores.
  • Combined with simultaneous multithreading (SMT) or Hyper-Threading
  • Heterogeneous: special-purpose processor cores in addition to general-purpose cores, for higher efficiency in processing multimedia, recognition, and networking applications
  • Energy efficiency: focus on performance per watt with advanced fine-grain or ultra-fine-grain power management and dynamic voltage and frequency scaling (DVFS)
  • Hardware-assisted platform virtualization
  • Memory-on-chip

Software impact

Software benefits from multi-core architectures where code can be executed in parallel. Under most common operating systems this requires code to execute in separate threads. Each application runs in at least one thread of its own, so multiple running applications can already benefit from multi-core architectures. An individual application may also use multiple threads, but in most cases it must be specifically written to do so. Operating system software also tends to run many threads as part of its normal operation. Running virtual machines benefits from multi-core architectures as well, since each virtual machine runs independently of the others and can execute in parallel.

Most application software is not written to use multiple concurrent threads intensively because of the challenge of doing so. A frequent pattern in multithreaded application design is for a single thread to do the intensive work while other threads do much less. For example, a virus-scan application may create a new thread for the scan process while the GUI thread waits for commands from the user (e.g. to cancel the scan). In such cases, a multi-core architecture is of little benefit to the application itself, because the single thread does all the heavy lifting and the work cannot be balanced evenly across multiple cores. Programming truly multithreaded code often requires complex coordination of threads and can easily introduce subtle, difficult-to-find bugs due to the interleaving of processing on data shared between threads (see thread safety). Debugging such code when it breaks is also much harder than debugging single-threaded code. There has also been a perceived lack of motivation for writing consumer-level threaded applications because of the relative rarity of consumer-level multiprocessor hardware: although threaded applications incur little additional performance penalty on single-processor machines, the extra development overhead was difficult to justify given the preponderance of single-processor machines.
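The background-worker pattern described above can be sketched in Python (an illustration only; the scanner, file names, and "clean" results are hypothetical):

```python
import queue
import threading
import time

def scan_files(files, results):
    """Worker thread: stands in for the heavy scanning work."""
    for name in files:
        time.sleep(0.01)              # placeholder for real per-file scanning
        results.put((name, "clean"))

results = queue.Queue()
worker = threading.Thread(target=scan_files,
                          args=(["a.txt", "b.txt", "c.txt"], results))
worker.start()

# The main ("GUI") thread could keep responding to the user here;
# it only blocks when it actually needs the worker's results.
worker.join()
scanned = []
while not results.empty():
    scanned.append(results.get())
print(scanned)
```

Note that on a single-core machine this pattern improves responsiveness rather than throughput; only on a multi-core CPU can the scan thread truly run in parallel with the rest of the application.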

As of Fall 2006, with the typical mix of mass-market applications the main benefit to an ordinary user from a multi-core CPU will be improved multitasking performance, which may apply more often than expected. Ordinary users are already running many threads; operating systems utilize multiple threads, as well as antivirus programs and other 'background processes' including audio and video controls. The largest boost in performance will likely be noticed in improved response time while running CPU-intensive processes, like antivirus scans, defragmenting, ripping/burning media (requiring file conversion), or searching for folders. Example: if the automatic virus scan initiates while a movie is being watched, the movie is far less likely to lag, as the antivirus program will be assigned to a different processor than the processor running the movie playback.

Given the increasing emphasis on multicore chip design, stemming from the grave thermal and power consumption problems posed by any further significant increase in processor clock speeds, the extent to which software can be multithreaded to take advantage of these new chips is likely to be the single greatest constraint on computer performance in the future. If developers are unable to design software to fully exploit the resources provided by multiple cores, then they will ultimately reach an insurmountable performance ceiling.

Current software titles designed to utilize multi-core technologies include: NewTek Lightwave, World of Warcraft, City of Heroes, City of Villains, Maya, Blender3D, Quake 3 & Quake 4, Elder Scrolls: Oblivion, Falcon 4: Allied Force, 3DS Max, Adobe Photoshop, Paint.NET, Windows XP Professional, Windows 2003, Windows Vista, Mac OS X, Linux, Tangosol Coherence, GigaSpaces EAG, DataRush from Pervasive Software, numerous Ulead products including MediaStudio Pro 7 & 8 (pro video editor), VideoStudio 10 and 10 Plus (consumer video editor), DVD MovieFactory 5 & 5 Plus (DVD authoring) and PhotoImpact 12 (graphics tool), and many operating systems that are streamlined for server use.

Most video games designed to run on Sony's PlayStation 3 are expected to take advantage of its multi-core Cell microprocessor. The first-person shooter Resistance: Fall of Man reportedly dedicates one of the Cell's SPE cores to processing enemy AI.[citation needed]

Parallel programming techniques can benefit from multiple cores directly. Some existing parallel programming models, such as OpenMP and MPI, can be used on multi-core platforms. Research efforts such as Cray's Chapel, Sun's Fortress, and IBM's X10 are also under way.
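OpenMP and MPI are C/C++/Fortran interfaces, but the data-parallel idea behind them can be sketched with Python's standard multiprocessing module (an illustrative stand-in, not one of the models named above):

```python
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    # A parallel map spreads loop iterations across worker processes,
    # much as an OpenMP "parallel for" spreads them across threads;
    # each worker can be scheduled onto its own core.
    with Pool(processes=2) as pool:
        print(pool.map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The result is identical to the sequential loop; only the scheduling of the iterations onto cores changes.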

Concurrency acquires a central role in truly parallel applications. The basic steps in designing parallel applications are:

Partitioning
The partitioning stage of a design is intended to expose opportunities for parallel execution. Hence, the focus is on defining a large number of small tasks in order to yield what is termed a fine-grained decomposition of a problem.
Communication
The tasks generated by a partition are intended to execute concurrently but cannot, in general, execute independently. The computation to be performed in one task will typically require data associated with another task. Data must then be transferred between tasks so as to allow computation to proceed. This information flow is specified in the communication phase of a design.
Agglomeration
In the third stage, we move from the abstract toward the concrete. We revisit decisions made in the partitioning and communication phases with a view to obtaining an algorithm that will execute efficiently on some class of parallel computer. In particular, we consider whether it is useful to combine, or agglomerate, tasks identified by the partitioning phase, so as to provide a smaller number of tasks, each of greater size. We also determine whether it is worthwhile to replicate data and/or computation.
Mapping
In the fourth and final stage of the parallel algorithm design process, we specify where each task is to execute. This mapping problem does not arise on uniprocessors or on shared-memory computers that provide automatic task scheduling.
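The four stages can be made concrete with a small parallel-sum sketch (the helper names are hypothetical; Python's concurrent.futures is used purely for illustration):

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    return sum(chunk)

def parallel_sum(data, workers=2):
    # Partitioning: expose parallelism by splitting the data into tasks.
    size = max(1, len(data) // workers)
    tasks = [data[i:i + size] for i in range(0, len(data), size)]
    # Agglomeration: rather than one fine-grained task per element,
    # the elements are grouped into a few larger chunks.
    # Mapping: the executor decides which worker process runs each task.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(partial_sum, tasks)
    # Communication: partial results flow back to be combined.
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum(list(range(100))))  # 4950
```

For a sum, the communication phase is trivial (one number per task); problems with richer data dependencies require correspondingly more elaborate communication and agglomeration decisions.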

On the server side, by contrast, multi-core processors are ideal because they allow many users to connect to a site simultaneously, each with independent threads of execution. The result is Web servers and application servers with much better throughput.

Licensing

Another issue is the question of software licensing for multi-core CPUs. Typically, enterprise server software is licensed "per processor". In the past a CPU was a processor (and most computers had only one CPU), so there was no ambiguity. Now there is the possibility of counting cores as processors and charging a customer two licenses for a dual-core CPU. However, the trend seems to be toward counting a dual-core chip as a single processor: Microsoft, Intel, and AMD support this view. Oracle counts AMD and Intel dual-core CPUs as a single processor but uses other counts for other types. IBM, HP, and Microsoft count a multi-chip module as multiple processors. If multi-chip modules counted as one processor, CPU makers would have an incentive to build large, expensive multi-chip modules so their customers saved on software licensing. So the industry seems to be slowly heading toward counting each die (see integrated circuit) as a processor, no matter how many cores it contains. Intel has released Paxville, which is really a multi-chip module but which Intel is calling dual-core; it is not yet clear how licensing will work for it. This remains an unresolved and thorny issue for software companies and customers.

Commercial examples

  • Intel launched its quad-core processor on 13 December 2006. The Intel "Kentsfield" chip, sold commercially as the Core 2 Extreme QX6700, launched at 2.66 GHz with 8 MB of L2 cache. Intel then launched the mainstream Q6600 chip at 2.4 GHz on 7 January 2007. AMD announced[3] that its quad-core processors would be produced in 2007.

Notes

  1. ^ Digital signal processors, DSPs, have utilized dual-core architectures for much longer than high-end general purpose processors. A typical example of a DSP-specific implementation would be a combination of a RISC CPU and a DSP MPU. This allows for the design of products that require a general purpose processor for user interfaces and a DSP for real-time data processing; this type of design is suited to e.g. mobile phones.
  2. ^ Two types of operating systems are able to utilize a dual-CPU multiprocessor: partitioned multiprocessing and symmetric multiprocessing (SMP). In a partitioned architecture, each CPU boots into a separate segment of physical memory and operates independently; in an SMP OS, processors work in a shared space, executing threads within the OS independently.

References

  1. ^ The American video game developer Valve Corporation has stated that it will use multi-core optimizations for the next version of its Source engine, shipped with Half-Life 2: Episode Two, the next installment of the Half-Life franchise [1].
  2. ^ 80-core prototype from Intel
  3. ^ Quad-cores from AMD