Parallel computing is a form of computing in which many instructions
are carried out simultaneously. It operates on the principle that large
problems can often be divided into smaller ones, which may be
carried out concurrently ("in parallel"). Parallel computing exists in several
different forms: bit-level parallelism, instruction-level parallelism, data
parallelism, and task parallelism. It has been used for many years, mainly in
high-performance computing, but interest in it has grown in recent
years due to physical constraints preventing frequency scaling. Parallel
computing has recently become the dominant paradigm in computer architecture,
mainly in the form of multicore processors.
Parallel computer programs are harder to write than sequential ones because
concurrency introduces several new classes of potential software bugs, of which
race conditions are the most common. Communication and synchronization between
the different subtasks are typically among the greatest barriers to getting good
parallel program performance. In recent years, power consumption in parallel
computers has also become a major concern. The speedup of a program as a result
of parallelization is given by Amdahl's law.
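As an illustration, Amdahl's law can be evaluated directly: if a fraction p of a
program can be parallelized across n processing elements while the remaining
(1 - p) stays serial, the speedup is 1 / ((1 - p) + p / n). The short Python
sketch below uses an assumed parallel fraction of 0.9 purely for illustration.

def amdahl_speedup(p: float, n: int) -> float:
    # Upper bound on speedup when a fraction p of the work is parallelized
    # across n processing elements; the remaining (1 - p) stays serial.
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    # Even with very many processors, a 90%-parallel program cannot exceed
    # a 10x speedup, because the serial 10% eventually dominates.
    for n in (2, 4, 16, 1024):
        print(n, round(amdahl_speedup(0.9, n), 2))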
Background
Traditionally, computer software has been written for serial computation. To
solve a problem, an algorithm is constructed which produces a serial stream of
instructions. These instructions are executed on a central processing unit on
one computer. Only one instruction may execute at a time; after that
instruction finishes, the next is executed.
Parallel computing on the other hand uses multiple processing elements
simultaneously to solve a problem. The problem is broken into parts which are
independent so that each processing element can execute its part of the
algorithm simultaneously with others. The processing elements can be diverse and
include resources such as a single computer with multiple processors, a number
of networked computers, specialized hardware, or any combination of the above.
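As a concrete sketch of this decomposition (an assumed example, not a
description of any particular system), the following Python fragment splits an
independent summation into chunks and hands each chunk to a separate worker
process acting as a processing element:

from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each processing element works on its own chunk independently.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4                                   # assumed number of processing elements
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        total = sum(pool.map(partial_sum, chunks))  # combine the partial results
    print(total)

The chunks carry no dependencies on one another, so the workers can run
simultaneously; only the final combination step is serial.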
Frequency scaling was the dominant reason for computer performance increases
from the mid-1980s until 2004. The total runtime of a program is proportional to
the total number of instructions multiplied by the average time per instruction.
All else being constant, increasing the clock frequency decreases the average
time it takes to execute an instruction. An increase in frequency thus decreases
the runtime of all compute-bound programs.
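A rough worked example with assumed figures makes the relation concrete:
runtime = instruction count × average time per instruction, and the average
time per instruction falls as the clock frequency rises.

instructions = 2_000_000_000              # assumed total instructions executed
cycles_per_instruction = 1.0              # assumed average cycles per instruction

for frequency_hz in (2.0e9, 4.0e9):       # 2 GHz versus 4 GHz
    avg_time_per_instruction = cycles_per_instruction / frequency_hz
    runtime_s = instructions * avg_time_per_instruction
    print(f"{frequency_hz / 1e9:.0f} GHz -> {runtime_s:.2f} s")  # 1.00 s vs. 0.50 s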
However, power consumption in a chip is given by the equation P = C × V² × F,
where P is power, C is the capacitance being switched per clock cycle
(proportional to the number of transistors whose inputs change), V is voltage,
and F is the processor frequency (cycles per second). Increases in frequency thus
increase the amount of power used in a processor. Increasing processor power
consumption ultimately led to Intel's May 2004 cancellation of its Tejas and
Jayhawk processors, which is generally cited as the end of frequency scaling as
the dominant computer architecture paradigm.
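A small worked example of the relation P = C × V² × F, with assumed capacitance
and voltage values chosen purely for illustration, shows why pushing frequency
higher becomes costly, especially when a higher frequency also demands a higher
voltage:

def dynamic_power(c_farads: float, v_volts: float, f_hz: float) -> float:
    # P = C * V^2 * F: power grows linearly with frequency and
    # quadratically with voltage.
    return c_farads * v_volts ** 2 * f_hz

print(dynamic_power(1e-9, 1.2, 2.0e9))    # ~2.9 W at 2 GHz and 1.2 V (assumed values)
print(dynamic_power(1e-9, 1.3, 3.0e9))    # ~5.1 W at 3 GHz and 1.3 V (assumed values)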
Moore's law is the empirical observation that transistor density in a
microprocessor doubles every 18 to 24 months. Despite power issues and repeated
predictions of its end, Moore's law is still in effect. With the end of
frequency scaling, these additional transistors (which are no longer used to
facilitate frequency scaling) can be used to add extra hardware to facilitate
parallel computing.