Multi-Core System
[Figure: Diagram of a generic dual-core processor, with CPU-local level 1 caches and a shared, on-die level 2 cache.]
A multi-core processor is a single computing component with two or more independent actual
processors (called "cores"), which are the units that read and execute program instructions.[1] The
instructions are ordinary CPU instructions like add, move data, and branch, but the multiple cores
can run multiple instructions at the same time, increasing overall speed for programs amenable to
parallel computing. Manufacturers typically integrate the cores onto a single integrated circuit
die (known as a chip multiprocessor or CMP), or onto multiple dies in a single chip package.
Processors were originally developed with only one core. A many-core processor is a multi-core
processor in which the number of cores is large enough that traditional multi-processor techniques are no longer efficient,[citation needed] largely because of issues with congestion in supplying instructions and data to the many processors. The many-core threshold is roughly in the range of several tens of cores; above this threshold, network-on-chip technology is
advantageous. Tilera processors feature a switch in each core to route data through an on-chip
mesh network to lessen the data congestion, enabling their core count to scale up to 100 cores.
A dual-core processor has two cores (e.g. AMD Phenom II X2, Intel Core Duo), a quad-core processor contains four cores (e.g. AMD Phenom II X4, Intel's quad-core processors; see i3, i5, and i7 at Intel Core), a hexa-core processor contains six cores (e.g. AMD Phenom II X6, Intel Core i7 Extreme Edition 980X), and an octa-core processor contains eight cores (e.g. AMD FX-8150). A multi-core processor implements multiprocessing in a single physical package.
Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or
may not share caches, and they may implement message passing or shared memory inter-core
communication methods. Common network topologies to interconnect cores include bus, ring,
two-dimensional mesh, and crossbar. Homogeneous multi-core systems include only identical cores; heterogeneous multi-core systems have cores that are not identical. Just as with single-processor systems, cores in multi-core systems may implement architectures such as superscalar, VLIW, vector processing, SIMD, or multithreading.
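As an illustration of the shared-memory style of inter-core communication, the following minimal sketch (assuming a C++11 compiler and the standard thread and atomic facilities; the producer/consumer names are illustrative only) passes one value between two threads that the operating system may schedule on different cores:

    #include <atomic>
    #include <iostream>
    #include <thread>

    std::atomic<bool> ready{false}; // flag the producer uses to publish the data
    int payload = 0;                // value communicated through shared memory

    void producer() {
        payload = 42;                                 // write the data first
        ready.store(true, std::memory_order_release); // then publish it
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) {} // spin until published
        std::cout << payload << '\n';                     // guaranteed to read 42
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }

In a message-passing design, the same exchange would instead go through an explicit channel or mailbox, with no memory shared between the cores.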
Multi-core processors are widely used across many application domains including general-
purpose, embedded, network, digital signal processing (DSP), and graphics.
The improvement in performance gained by the use of a multi-core processor depends very much
on the software algorithms used and their implementation. In particular, possible gains are
limited by the fraction of the software that can be parallelized to run on multiple cores
simultaneously; this effect is described by Amdahl's law. In the best case, so-called
embarrassingly parallel problems may realize speedup factors near the number of cores, or even
more if the problem is split up enough to fit within each core's cache(s), avoiding use of much
slower main system memory. Most applications, however, are not accelerated as much unless programmers invest a prohibitive amount of effort in refactoring the whole problem.[2] The parallelization of software is a significant ongoing topic of research.
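As a worked illustration of Amdahl's law, assume a program in which a fraction p of the runtime can be parallelized perfectly across N cores. The achievable speedup is

    S(N) = 1 / ((1 - p) + p / N)

With p = 0.95 on a quad-core, S(4) = 1 / (0.05 + 0.95/4) ≈ 3.5, and even with an unlimited number of cores the speedup can never exceed 1/0.05 = 20. The value of p here is assumed for illustration; real programs must be profiled to find it.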
Terminology
The terms multi-core and dual-core most commonly refer to some sort of central processing unit
(CPU), but are sometimes also applied to digital signal processors (DSP) and system-on-a-chip
(SoC). The terms are generally used only to refer to multi-core microprocessors that are
manufactured on the same integrated circuit die; separate microprocessor dies in the same
package are generally referred to by another name, such as multi-chip module. This article uses
the terms "multi-core" and "dual-core" for CPUs manufactured on the same integrated circuit,
unless otherwise noted.
In contrast to multi-core systems, the term multi-CPU refers to multiple physically separate processing units (which often contain special circuitry to facilitate communication between each other).
The terms many-core and massively multi-core are sometimes used to describe multi-core
architectures with an especially high number of cores (tens or hundreds).
Some systems use many soft microprocessor cores placed on a single FPGA. Each "core" can be
considered a "semiconductor intellectual property core" as well as a CPU core[citation needed].
Development
As manufacturing technology improves, reducing the size of individual gates, the physical limits of semiconductor-based microelectronics have become a major design concern. These physical limitations can cause significant heat dissipation and data synchronization problems. Various other methods are used to improve CPU performance. Some instruction-level parallelism (ILP) methods such as superscalar pipelining are suitable for many applications, but are inefficient for others that contain difficult-to-predict code. Many applications are better suited to thread-level parallelism (TLP) methods, and multiple independent CPUs are commonly used to increase a
system's overall TLP. A combination of increased available space (due to refined manufacturing
processes) and the demand for increased TLP led to the development of multi-core CPUs.
Commercial incentives
Several business motives drive the development of dual-core architectures. For decades, it was
possible to improve performance of a CPU by shrinking the area of the integrated circuit, which
drove down the cost per device on the IC. Alternatively, for the same circuit area, more
transistors could be utilized in the design, which increased functionality, especially for CISC
architectures. Clock rates also increased by orders of magnitude in the decades of the late 20th
century, from several megahertz in the 1980s to several gigahertz in the early 2000s.
As the rate of clock-speed improvement slowed, increased use of parallel computing in the form of multi-core processors was pursued to improve overall processing performance. Placing multiple cores on the same CPU chip could then lead to better sales of CPU chips with two or more cores. Intel has produced a 48-core processor for research in cloud computing.[3]
Technical factors
Advantages
The proximity of multiple CPU cores on the same die allows the cache coherency circuitry to
operate at a much higher clock-rate than is possible if the signals have to travel off-chip.
Combining equivalent CPUs on a single die significantly improves the performance of cache snoop (bus snooping) operations. Put simply, this means that signals between
different CPUs travel shorter distances, and therefore those signals degrade less. These higher-
quality signals allow more data to be sent in a given time period, since individual signals can be
shorter and do not need to be repeated as often.
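The cost of coherency traffic can be made visible in software. The sketch below (a minimal illustration, assuming a 64-byte cache line, which is common but not universal, and a C++11 compiler) runs two threads that increment separate counters; without the alignas(64) padding, the two counters would share one cache line and the coherency protocol would bounce that line between the cores, typically slowing both threads markedly:

    #include <atomic>
    #include <thread>

    // With alignas(64), each counter occupies its own cache line, so the two
    // cores do not invalidate each other's copies ("false sharing" avoided).
    struct PaddedCounter {
        alignas(64) std::atomic<long> value{0};
    };

    PaddedCounter counters[2];

    void work(int i) {
        for (long n = 0; n < 10000000; ++n)
            counters[i].value.fetch_add(1, std::memory_order_relaxed);
    }

    int main() {
        std::thread a(work, 0), b(work, 1);
        a.join();
        b.join();
    }

Removing the alignas(64) and timing both variants is a simple way to observe the effect on a given machine.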
The largest boost in performance will likely be noticed in improved response-time while running
CPU-intensive processes, like antivirus scans, ripping/burning media (requiring file conversion),
or file searching. For example, if the automatic virus-scan runs while a movie is being watched,
the application running the movie is far less likely to be starved of processor power, as the
antivirus program will be assigned to a different processor core than the one running the movie
playback.
Assuming that the die can physically fit into the package, multi-core CPU designs require much less printed circuit board (PCB) space than do multi-chip SMP designs. Also, a dual-core
processor uses slightly less power than two coupled single-core processors, principally because
of the decreased power required to drive signals external to the chip. Furthermore, the cores
share some circuitry, like the L2 cache and the interface to the front side bus (FSB). In terms of
competing technologies for the available silicon die area, multi-core design can make use of
proven CPU core library designs and produce a product with lower risk of design error than
devising a new wider core-design. Also, adding more cache suffers from diminishing returns.
[citation needed]
Multi-core chips also allow higher performance at lower energy, which can be a big factor in mobile devices that operate on batteries. Since each core in a multi-core processor is generally more energy-efficient, the chip becomes more efficient than a single large monolithic core, yielding higher performance with less energy. The challenge of writing parallel code, however, can offset this benefit.[4]
Disadvantages
Integration of a multi-core chip can drive chip production yields down, and multi-core chips are more difficult to manage thermally than lower-density single-chip designs. Intel has partially countered the first problem by creating its quad-core designs by combining two dual-core dies in a single package, so that any two working dual-core dies can be used, as opposed to producing four cores on a single die and requiring all four to work to produce a quad-core. From an architectural
point of view, ultimately, single CPU designs may make better use of the silicon surface area
than multiprocessing cores, so a development commitment to this architecture may carry the risk
of obsolescence. Finally, raw processing power is not the only constraint on system performance.
When two processing cores share the same system bus and memory bandwidth, the real-world performance advantage is limited. If a single core is close to being memory-bandwidth limited, going to dual-core might only give a 30% to 70% improvement; if memory bandwidth is not a problem, a 90% improvement can be expected.[citation needed] It would be possible for an application that used
two CPUs to end up running faster on one dual-core if communication between the CPUs was
the limiting factor, which would count as more than 100% improvement.
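A numeric illustration of the bandwidth ceiling, using assumed figures: if a single core already consumes 60% of the available memory bandwidth while streaming its data, two cores together can move at most 100/60 ≈ 1.67 times as much data per unit time, so the dual-core speedup on that workload is capped near 67% regardless of how much additional compute capacity the second core provides.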
Hardware
Trends
The general trend in processor development has moved from dual-, tri-, quad-, hexa-, and octa-core
chips to ones with tens or even hundreds of cores. In addition, multi-core chips mixed with
simultaneous multithreading, memory-on-chip, and special-purpose "heterogeneous" cores
promise further performance and efficiency gains, especially in processing multimedia,
recognition, and networking applications. There is also a trend toward improving energy efficiency by focusing on performance-per-watt with advanced fine-grain or ultra-fine-grain power management and dynamic voltage and frequency scaling (e.g. in laptop computers and portable media players).
Architecture
The composition and balance of the cores in multi-core architecture show great variety. Some
architectures use one core design repeated consistently ("homogeneous"), while others use a mixture of different cores, each optimized for a different role ("heterogeneous").
The article CPU designers debate multi-core future[8] by Rick Merritt, EE Times 2008, includes
comments:
"Chuck Moore [...] suggested computers should be more like cellphones, using a variety
of specialty cores to run modular software scheduled by a high-level applications
programming interface.
[...] Atsushi Hasegawa, a senior chief engineer at Renesas, generally agreed. He
suggested the cellphone's use of many specialty cores working in concert is a good model
for future multi-core designs.
[...] Anant Agarwal, founder and chief executive of startup Tilera, took the opposing
view. He said multi-core chips need to be homogeneous collections of general-purpose
cores to keep the software model simple."
Software impact
As an example, an anti-virus application may create a new thread for the scan process, while its GUI thread waits for commands from the user (e.g. cancel the scan). In such cases, a multi-core architecture is of little benefit to the application itself, because the single thread does all the heavy lifting and the work cannot be balanced evenly across multiple cores. Programming
truly multithreaded code often requires complex co-ordination of threads and can easily
introduce subtle and difficult-to-find bugs due to the interleaving of processing on data shared
between threads (thread-safety). Consequently, such code is much more difficult to debug than
single-threaded code when it breaks. There has been a perceived lack of motivation for writing
consumer-level threaded applications because of the relative rarity of consumer-level demand for
maximum utilisation of computer hardware. Although threaded applications incur little
additional performance penalty on single-processor machines, the extra overhead of development
has been difficult to justify due to the preponderance of single-processor machines. Also, inherently serial tasks, such as the entropy decoding in video codecs, are impossible to parallelize because each result generated is used to help create the next result of the decoding algorithm.
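The kind of thread-safety bug described above can be reproduced in a few lines. The following minimal sketch (assuming a C++11 compiler; the function names are illustrative) shows an unsynchronized counter that loses updates when two threads interleave, and a mutex-protected version that fixes it:

    #include <iostream>
    #include <mutex>
    #include <thread>

    long counter = 0;
    std::mutex m;

    // Data race: ++counter is a read-modify-write, so concurrent increments
    // can interleave and overwrite each other.
    void unsafe() { for (int i = 0; i < 100000; ++i) ++counter; }

    // Fixed: the lock serializes the read-modify-write.
    void safe() {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> g(m);
            ++counter;
        }
    }

    int main() {
        std::thread a(unsafe), b(unsafe);
        a.join(); b.join();
        std::cout << "racy total (often less than 200000): " << counter << '\n';

        counter = 0;
        std::thread c(safe), d(safe);
        c.join(); d.join();
        std::cout << "locked total (always 200000): " << counter << '\n';
    }

Because the racy variant fails only intermittently, bugs like this are exactly the "subtle and difficult-to-find" kind described above.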
Given the increasing emphasis on multicore chip design, stemming from the grave thermal and
power consumption problems posed by any further significant increase in processor clock
speeds, the extent to which software can be multithreaded to take advantage of these new chips is
likely to be the single greatest constraint on computer performance in the future. If developers
are unable to design software to fully exploit the resources provided by multiple cores, then they
will ultimately reach an insurmountable performance ceiling.
The telecommunications market was one of the first to need a new design for parallel datapath packet processing, because multi-core processors were adopted very quickly for both the datapath and the control plane. These MPUs are set to replace[9] the traditional network processors that were based on proprietary micro- or pico-code.
Parallel programming techniques can benefit from multiple cores directly. Some existing parallel
programming models such as Cilk++, OpenMP, OpenHMPP, FastFlow, Skandium, and MPI can
be used on multi-core platforms. Intel introduced a new abstraction for C++ parallelism called TBB (Threading Building Blocks). Other research efforts include the Codeplay Sieve System, Cray's Chapel, Sun's Fortress,
and IBM's X10.
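As a minimal sketch of one of these models, the OpenMP loop below splits independent iterations across the available cores with a single directive (assuming a compiler with OpenMP support, e.g. GCC or Clang with -fopenmp; without that flag, the pragma is ignored and the loop simply runs serially):

    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1000000;
        std::vector<double> a(n), b(n);
        double sum = 0.0;

        // Each iteration touches only index i, so OpenMP can divide the
        // loop among threads; reduction(+:sum) combines the partial sums.
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; ++i) {
            a[i] = i * 0.5;
            b[i] = 2.0;
            sum += a[i] * b[i];
        }
        std::printf("dot product: %f\n", sum);
        return 0;
    }

TBB, Cilk++, and the other models named above express the same idea through different abstractions (task graphs, spawned functions, or, in the case of MPI, message passing).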
Multi-core processing has also affected modern computational software development. Developers programming in newer languages might find that their languages do not support multi-core functionality. This then requires the use of numerical libraries to access code written in languages like C and Fortran, which perform math computations faster than newer languages like C#. Intel's MKL and AMD's ACML are written in these native languages and take advantage of multi-core processing.
Managing concurrency acquires a central role in developing parallel applications. The basic steps in designing parallel applications are as follows (a brief sketch applying all four stages appears after the list):
Partitioning
The partitioning stage of a design is intended to expose opportunities for parallel
execution. Hence, the focus is on defining a large number of small tasks in order to yield
what is termed a fine-grained decomposition of a problem.
Communication
The tasks generated by a partition are intended to execute concurrently but cannot, in
general, execute independently. The computation to be performed in one task will
typically require data associated with another task. Data must then be transferred between
tasks so as to allow computation to proceed. This information flow is specified in the
communication phase of a design.
Agglomeration
In the third stage, development moves from the abstract toward the concrete. Developers
revisit decisions made in the partitioning and communication phases with a view to
obtaining an algorithm that will execute efficiently on some class of parallel computer. In
particular, developers consider whether it is useful to combine, or agglomerate, tasks
identified by the partitioning phase, so as to provide a smaller number of tasks, each of
greater size. They also determine whether it is worthwhile to replicate data and/or
computation.
Mapping
In the fourth and final stage of the design of parallel algorithms, the developers specify
where each task is to execute. This mapping problem does not arise on uniprocessors or
on shared-memory computers that provide automatic task scheduling.
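The sketch promised above applies all four stages to a parallel array sum (assuming a C++11 compiler; the chunking scheme is one illustrative choice among many):

    #include <cstddef>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Partitioning: the sum is decomposed into per-chunk tasks.
    // Communication: each task writes only its own slot of `partial`,
    //   so the only data exchange is the final combination step.
    // Agglomeration: one task per hardware thread, not one per element.
    // Mapping: std::thread leaves placement of tasks on cores to the OS.
    int main() {
        std::vector<int> data(1000000, 1);
        unsigned tasks = std::thread::hardware_concurrency();
        if (tasks == 0) tasks = 2; // the query may report 0 if unknown
        std::vector<long long> partial(tasks, 0);
        std::vector<std::thread> pool;

        std::size_t chunk = data.size() / tasks;
        for (unsigned t = 0; t < tasks; ++t) {
            std::size_t begin = t * chunk;
            std::size_t end = (t + 1 == tasks) ? data.size() : begin + chunk;
            pool.emplace_back([&, t, begin, end] {
                partial[t] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0LL);
            });
        }
        for (auto& th : pool) th.join();
        std::cout << std::accumulate(partial.begin(), partial.end(), 0LL)
                  << '\n'; // prints 1000000
    }

On a uniprocessor the mapping stage is moot, as the text notes: all tasks land on the same core.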
On the server side, meanwhile, multi-core processors are ideal because they allow many users to connect to a site simultaneously with independent threads of execution, giving Web servers and application servers much better throughput.
Licensing
Typically, proprietary enterprise-server software is licensed "per processor". In the past a CPU
was a processor and most computers had only one CPU, so there was no ambiguity.
Now there is the possibility of counting cores as processors and charging a customer for multiple
licenses for a multi-core CPU. However, the trend seems to be counting dual-core chips as a
single processor: Microsoft, Intel, and AMD support this view. Microsoft has said it would treat a socket as a single processor.[10]
Oracle counts an AMD X2 or Intel dual-core CPU as a single processor but has other numbers
for other types, especially for processors with more than two cores. IBM and HP count a multi-
chip module as multiple processors. If multi-chip modules count as one processor, CPU makers
have an incentive to make large expensive multi-chip modules so their customers save on
software licensing. It seems that the industry is slowly heading towards counting each die (see
Integrated circuit) as a processor, no matter how many cores each die has.
Embedded applications
Embedded computing operates in an area of processor technology distinct from that of
"mainstream" PCs. The same technological drivers towards multicore apply here too. Indeed, in
many cases the application is a "natural" fit for multicore technologies, if the task can easily be
partitioned between the different processors.
In addition, embedded software is typically developed for a specific hardware release, making
issues of software portability, legacy code or supporting independent developers less critical than
is the case for PC or enterprise computing. As a result, it is easier for developers to adopt new technologies, and there is consequently a greater variety of multi-core processing architectures and suppliers.
As of 2010, multi-core network processing devices have become mainstream, with companies
such as Freescale Semiconductor, Cavium Networks, Wintegra and Broadcom all manufacturing
products with eight processors. For the system developer, a key challenge is how to exploit all
the cores in these devices to achieve maximum networking performance at the system level,
despite the performance limitations inherent in an SMP operating system. To address this issue,
companies such as 6WIND provide portable packet processing software architected so that the
networking data plane runs in a fast path environment outside the OS, while retaining full
compatibility with standard OS APIs[11].
In digital signal processing the same trend applies: Texas Instruments has the three-core
TMS320C6488 and four-core TMS320C5441, Freescale the four-core MSC8144 and six-core
MSC8156 (and both have stated they are working on eight-core successors). Newer entries include the Storm-1 family from Stream Processors, Inc., with 40 and 80 general-purpose ALUs per chip, all programmable in C as a SIMD engine, and Picochip, with three hundred processors on a single die, focused on communication applications.