
Introduction to Parallel Computing

Before diving into Parallel Computing, let's first take a look at how computer software traditionally performed its computations, and why that model fell short in the modern era.

Computer software was conventionally written for serial computing. This meant that to solve a problem, an algorithm divides the problem into smaller instructions. These discrete instructions are then executed one by one on the Central Processing Unit (CPU) of a computer. Only after one instruction finishes does the next one start.

A real-life example of this is people standing in a queue for movie tickets when there is only one cashier. The cashier serves one person at a time. The situation gets worse when there are two queues and still only one cashier.

So, in short, Serial Computing works as follows:

1. A problem statement is broken into discrete instructions.
2. The instructions are executed one by one.
3. Only one instruction is executed at any moment in time.
Look at point 3. This caused a huge problem in the computing industry: with only one instruction executing at any moment, most of the hardware sat idle, wasting its potential. And as problem statements grew heavier and bulkier, so did the time needed to execute them. Single-core processors such as the Pentium 3 and Pentium 4 illustrate this limit.
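
To make the one-at-a-time model concrete, here is a minimal sketch of serial execution in Python (the process function and its 0.1-second workload are made up purely for illustration):

```python
import time

def process(item):
    # Stand-in for one discrete instruction's worth of work.
    time.sleep(0.1)   # simulate computation
    return item * item

start = time.perf_counter()
results = [process(i) for i in range(8)]   # one item at a time, in order
print(f"serial: {time.perf_counter() - start:.2f}s")   # roughly 0.8 s
```

Eight items at 0.1 s each take about 0.8 s, no matter how many CPU cores the machine has.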

Now let's come back to our real-life problem. We can safely say that the complexity decreases when there are 2 queues and 2 cashiers giving tickets to 2 persons simultaneously. This is an example of Parallel Computing.

Parallel Computing
It is the use of multiple processing elements simultaneously to solve a problem. The problem is broken down into instructions that are solved concurrently, since every resource assigned to the work is active at the same time.
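
For contrast with the serial sketch above, here is the same made-up workload handed to a pool of worker processes, using Python's standard multiprocessing module (the pool size of 4 is an arbitrary choice for this sketch):

```python
import time
from multiprocessing import Pool

def process(item):
    # Same made-up work item as in the serial sketch.
    time.sleep(0.1)
    return item * item

if __name__ == "__main__":
    start = time.perf_counter()
    with Pool(processes=4) as pool:              # four 'cashiers'
        results = pool.map(process, range(8))    # items handled concurrently
    print(f"parallel: {time.perf_counter() - start:.2f}s")   # roughly 0.2 s
```

With four workers, the eight 0.1-second items finish in about two rounds, so the wall-clock time drops to roughly a quarter of the serial version (plus some process start-up overhead).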
Advantages of Parallel Computing over Serial Computing are as follows:
1. It saves time and money, as many resources working together reduce the time and cut potential costs.
2. Larger problems can be impractical to solve with Serial Computing.
3. It can take advantage of non-local resources when the local resources are finite.
4. Serial Computing 'wastes' potential computing power; Parallel Computing makes better use of the hardware.
Types of Parallelism:
1. Bit-level parallelism: This form of parallel computing is based on increasing the processor's word size. It reduces the number of instructions the system must execute to perform an operation on large-sized data. Example: consider a scenario where an 8-bit processor must compute the sum of two 16-bit integers. It must first add the 8 lower-order bits and then the 8 higher-order bits, requiring two instructions, whereas a 16-bit processor can perform the operation with a single instruction. (A worked sketch follows this list.)
2. Instruction-level parallelism: A simple processor issues at most one instruction per clock cycle. Instructions that do not depend on one another, however, can be reordered and grouped and then executed concurrently without affecting the result of the program. This is called instruction-level parallelism. (See the second sketch after this list.)
3. Task Parallelism: Task parallelism decomposes a task into subtasks and allocates each subtask for execution. The processors execute the subtasks concurrently.
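
As promised above, here is a worked sketch of the bit-level example. It simulates how an 8-bit ALU would add two 16-bit integers in two steps: low byte first, then the high byte plus the carry (the function name and masks are ours, for illustration only):

```python
def add16_on_8bit(a, b):
    # Instruction 1: add the low-order bytes.
    lo = (a & 0xFF) + (b & 0xFF)
    carry = lo >> 8                                      # low-byte overflow?
    # Instruction 2: add the high-order bytes plus the carry.
    hi = ((a >> 8) & 0xFF) + ((b >> 8) & 0xFF) + carry
    return ((hi & 0xFF) << 8) | (lo & 0xFF)

assert add16_on_8bit(0x12FF, 0x0001) == 0x1300   # carry propagates correctly
assert add16_on_8bit(40000, 20000) == 60000      # result still fits in 16 bits
```

A 16-bit processor performs the same sum in a single add instruction, which is exactly the saving that bit-level parallelism buys.

Instruction-level parallelism lives in the hardware, so Python cannot demonstrate it directly, but the dependency structure it exploits can be shown (the variables below are made up):

```python
a, b, c, d = 1, 2, 3, 4

# Independent: neither line reads the other's result, so a superscalar
# CPU may issue both in the same clock cycle.
x = a + b
y = c + d

# Dependent: needs both x and y, so it must wait for the two lines above.
z = x * y
```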
Why parallel computing?
 The real world is dynamic in nature: many things happen at the same time in different places, concurrently. The data this generates is extremely large and hard to manage.
 Real-world data calls for dynamic simulation and modeling, and parallel computing is the key to achieving that.
 Parallel computing provides concurrency and saves time and money.
 Complex, large datasets and their management can only be organized using the parallel computing approach.
 It ensures effective utilization of resources: the hardware is guaranteed to be used effectively, whereas in serial computation only part of the hardware is used while the rest sits idle.
 Also, it is impractical to implement real-time systems using serial computing.
Applications of Parallel Computing:
 Databases and data mining.
 Real-time simulation of systems.
 Science and engineering.
 Advanced graphics, augmented reality, and virtual reality.
Limitations of Parallel Computing:
 It introduces concerns such as communication and synchronization between multiple sub-tasks and processes, which are difficult to get right (a small sketch follows this list).
 Algorithms must be designed so that they can be handled by the parallel mechanism.
 The algorithms or programs must have low coupling and high cohesion, but such programs are difficult to create.
 Writing a parallelism-based program well takes more technically skilled and expert programmers.
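
To illustrate the synchronization concern in the first point, here is a minimal sketch (the deposit function and the counts are made up) in which four processes increment a shared counter. The increment is a read-modify-write, so without the lock two processes could read the same old value and one update would be lost:

```python
from multiprocessing import Process, Value, Lock

def deposit(balance, lock, times):
    for _ in range(times):
        with lock:               # serialize the read-modify-write
            balance.value += 1

if __name__ == "__main__":
    balance = Value("i", 0)      # shared 32-bit integer
    lock = Lock()
    workers = [Process(target=deposit, args=(balance, lock, 10_000))
               for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(balance.value)         # 40000, because the lock serializes updates
```

Every pass through the lock forces the processes to take turns: correctness is preserved, but this coordination is precisely the overhead, and potential bottleneck, that the limitation above describes.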
