Intro PDC1

The document discusses parallel and distributed computing, explaining that parallel computing involves multiple processors performing tasks simultaneously while distributed computing involves multiple autonomous computers communicating over a network. It provides examples of parallelism at different levels and discusses applications and limitations of parallel computing.


INTRODUCTION TO PARALLEL AND DISTRIBUTED COMPUTING

The simultaneous growth in availability of big data and in the number of simultaneous users on the Internet places particular pressure on the need to carry out computing tasks "in parallel," or simultaneously. Parallel and distributed computing occurs across many different topic areas in computer science, including algorithms, computer architecture, networks, operating systems, and software engineering. During the early 21st century there was explosive growth in multiprocessor design and other strategies for running complex applications faster. Parallel and distributed computing builds on fundamental systems concepts, such as concurrency, mutual exclusion, consistency in state/memory manipulation, message-passing, and shared-memory models.

Parallel Computing:

In parallel computing, multiple processors perform the tasks assigned to them simultaneously. Memory in parallel systems can be either shared or distributed. Parallel computing provides concurrency and saves time and money.
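As a concrete illustration, here is a minimal sketch of parallel computing in Python, assuming a multi-core machine; the function square and the choice of four workers are illustrative, not from the original text.

    from multiprocessing import Pool

    def square(n):
        return n * n

    if __name__ == "__main__":
        # Four worker processes share one task: squaring a list of numbers.
        with Pool(processes=4) as pool:
            results = pool.map(square, range(10))   # the work is divided among the workers
        print(results)   # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

Each worker runs on its own processor core when one is available, which is what "multiple processors perform tasks simultaneously" means in practice.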
Distributed Computing:

In distributed computing we have multiple autonomous computers that appear to the user as a single system. In distributed systems there is no shared memory, and computers communicate with each other through message passing. In distributed computing, a single task is divided among different computers.
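The sketch below imitates this message-passing pattern on a single machine using Python's multiprocessing.Pipe; a real distributed system would exchange the same kinds of messages over a network (for example with sockets or MPI). The worker function and the payload are illustrative assumptions.

    from multiprocessing import Process, Pipe

    def worker(conn):
        numbers = conn.recv()        # receive the sub-task as a message
        conn.send(sum(numbers))      # reply with a partial result; no memory is shared
        conn.close()

    if __name__ == "__main__":
        parent_end, child_end = Pipe()
        p = Process(target=worker, args=(child_end,))
        p.start()
        parent_end.send([1, 2, 3, 4])   # divide the task and send one piece
        print(parent_end.recv())        # collect the result: 10
        p.join()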
Difference between Parallel Computing and Distributed Computing:
S.NO  PARALLEL COMPUTING                                     DISTRIBUTED COMPUTING
1.    Many operations are performed simultaneously           System components are located at different locations
2.    A single computer is required                          Uses multiple computers
3.    Multiple processors perform multiple operations        Multiple computers perform multiple operations
4.    May have shared or distributed memory                  Has only distributed memory
5.    Processors communicate with each other through a bus   Computers communicate with each other through message passing
6.    Improves system performance                            Improves system scalability, fault tolerance, and resource-sharing capabilities

Parallel Computing –
It is the use of multiple processing elements simultaneously to solve a problem. A problem is broken down into instructions that are executed concurrently, with each resource applied to the work operating at the same time.
Advantages of Parallel Computing over Serial Computing are as follows:
1. It saves time and money, as many resources working together reduce the time and cut potential costs.
2. Larger problems can be impractical to solve with serial computing.
3. It can take advantage of non-local resources when the local resources are finite.
4. Serial computing 'wastes' potential computing power; parallel computing makes better use of the hardware.
Types of Parallelism:
1. Bit-level parallelism: It is the form of parallel computing based on increasing the processor's word size. It reduces the number of instructions that the system must execute in order to perform a task on large-sized data (see the first sketch after this list).
Example: Consider a scenario where an 8-bit processor must compute the sum of two 16-bit integers. It must first sum the 8 lower-order bits, then add the 8 higher-order bits, thus requiring two instructions to perform the operation. A 16-bit processor can perform the operation with just one instruction.
2. Instruction-level parallelism: A processor can ordinarily issue only one instruction per clock cycle. Instructions with no dependencies between them can be re-ordered and grouped and then executed concurrently without affecting the result of the program. This is called instruction-level parallelism (see the second sketch after this list).
3. Task Parallelism: Task parallelism employs the decomposition of a task into subtasks and then allocating each of the subtasks for execution. The processors execute the subtasks concurrently (see the third sketch after this list).
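First, a minimal sketch of the bit-level example above, assuming a hypothetical 8-bit ALU emulated in Python: the 16-bit addition costs two 8-bit additions plus carry handling, which a 16-bit processor avoids.

    def add16_on_8bit(a, b):
        # First instruction: add the 8 lower-order bits.
        lo = (a & 0xFF) + (b & 0xFF)
        carry = lo >> 8                  # carry out of the low byte
        # Second instruction: add the 8 higher-order bits plus the carry.
        hi = ((a >> 8) + (b >> 8) + carry) & 0xFF
        return (hi << 8) | (lo & 0xFF)

    assert add16_on_8bit(0x1234, 0x0FCD) == (0x1234 + 0x0FCD) & 0xFFFF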
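Second, a sketch of instruction-level parallelism. Python itself does not expose instruction scheduling, so the point here is only the data dependencies (the function names are purely illustrative): a superscalar processor could execute the three independent additions in the same cycle, but not the dependent chain.

    def independent_ops(a, b, c, d, e, f):
        x = a + b   # these three additions have no data dependencies,
        y = c + d   # so hardware may re-order or overlap them
        z = e + f
        return x, y, z

    def dependent_ops(a, b, c, d):
        x = a + b   # each result feeds the next instruction,
        y = x + c   # so these cannot be overlapped
        z = y + d
        return z

    print(independent_ops(1, 2, 3, 4, 5, 6))   # (3, 7, 11)
    print(dependent_ops(1, 2, 3, 4))           # 10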
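Third, a minimal sketch of task parallelism, assuming two independent subtasks (the functions count_words and count_lines are illustrative): each subtask is submitted to its own worker process, and the two run concurrently.

    from concurrent.futures import ProcessPoolExecutor

    def count_words(text):
        return len(text.split())

    def count_lines(text):
        return text.count("\n") + 1

    if __name__ == "__main__":
        text = "a small sample\nwith two lines"
        with ProcessPoolExecutor() as pool:
            words = pool.submit(count_words, text)   # subtask 1
            lines = pool.submit(count_lines, text)   # subtask 2
            print(words.result(), lines.result())    # 6 2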
Why parallel computing?
• The real world is dynamic: many things happen at the same time in different places, and the data this produces is far too large to manage serially.
• Real-world data requires dynamic simulation and modeling, and parallel computing is the key to achieving this.
• Parallel computing provides concurrency and saves time and money.
• Large, complex datasets can be organized and managed practically only with a parallel approach.
• It ensures effective utilization of resources: the hardware is kept busy, whereas in serial computation only part of the hardware is used and the rest sits idle.
• It is also impractical to implement real-time systems using serial computing.
Applications of Parallel Computing:
• Databases and data mining.
• Real-time simulation of systems.
• Science and engineering.
• Advanced graphics, augmented reality, and virtual reality.
Limitations of Parallel Computing:
• It introduces challenges such as communication and synchronization between multiple sub-tasks and processes, which are difficult to achieve.
• Algorithms must be structured so that they can be handled by a parallel mechanism.
• The algorithms or programs must have low coupling and high cohesion, but it is difficult to create such programs.
• Only more technically skilled and expert programmers can code parallelism-based programs well.
Future of Parallel Computing: The computing landscape has undergone a great transition from serial computing to parallel computing. Tech giants such as Intel have already taken a step toward parallel computing by employing multicore processors. Parallel computation will change the way computers work in the future, for the better. With the whole world connecting to each other more than ever before, parallel computing plays a growing role in helping us stay connected. With faster networks, distributed systems, and multi-processor computers, it becomes even more necessary.
