Lecture 01

Parallel and distributed computing

OUTLINE
• Computing
• Parallel computing
• Distributed computing
• Need for parallel and distributed computing
• Scalability
Computing

• Computing is the process of completing a given goal-oriented task using computer technology
• Computing may include the design and development of software and hardware systems for a broad range of purposes, and often consists of structuring, processing, and managing any kind of information
Parallel vs. Serial computing

• Serial Computing: the problem is broken into a stream of instructions that are executed sequentially, one after another, on a single processor
• Only one instruction executes at a time.
• Parallel Computing: the problem is broken into parts that can be solved concurrently and executed simultaneously on different processors.
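The contrast above can be sketched in Python with the standard multiprocessing module. This is a minimal illustration, not from the lecture: the same work is done item by item on one processor (serial) or split across worker processes (parallel).

```python
from multiprocessing import Pool

def square(n):
    # One unit of work in the instruction stream
    return n * n

def serial(values):
    # Serial computing: one instruction executes at a time,
    # one after another, on a single processor
    return [square(v) for v in values]

def parallel(values, workers=4):
    # Parallel computing: the problem is broken into parts that
    # are executed simultaneously on different processor cores
    with Pool(workers) as pool:
        return pool.map(square, values)

if __name__ == "__main__":
    data = list(range(10))
    # Both approaches produce the same result; only the
    # execution strategy differs
    assert serial(data) == parallel(data)
```

For a workload this small the process startup cost outweighs any speedup; the benefit appears only when each unit of work is substantial.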
Need for parallel and distributed computing
• We assess evolutionary changes in machine architecture, OS, network connectivity, and application workloads
• Instead of using a centralized computer to solve computational problems, parallel and distributed computing systems are used to solve large-scale problems.
• They use multiple computers to solve computational problems
Need for parallel and distributed computing
• Billions of people use the internet every day.
• As a result, large data centers must provide high-performance computing services to huge numbers of internet users concurrently.
Where is parallel computing used?
• Artificial intelligence and machine learning
• Online transaction processing
• Big data analytics
• Real-time systems and control applications
• Disaster prevention
• Video rendering and editing
HPC Vs. HTC
• Supercomputers and large data centers must provide High Performance Computing (HPC) services to serve huge numbers of internet users concurrently
• But HPC speed alone is no longer an optimal performance measure, as the speed of HPC systems has increased from Gflops (1990) to Pflops (2010)
• The internet also demands High Throughput Computing (HTC), built with parallel and distributed computing technologies, as internet searches and web services are used by millions of users simultaneously.
• Thus the performance goal shifts to measuring high throughput, i.e., the number of tasks completed per unit time.
PARALLEL COMPUTING SYSTEM

• A parallel computing system is the simultaneous execution of a single task on multiple processors in order to obtain results faster
• The idea is based on the fact that the process of solving a problem can usually be divided into smaller tasks (divide and conquer), which may be carried out simultaneously with some coordination.
PARALLEL COMPUTING SYSTEM
• The term parallel computing architecture is sometimes used for a computer with more than one processor (from a few to thousands) available for processing
• Recent multicore processors (chips with more than one processor core) are commercial examples that bring parallel computing to the desktop.
• Communication is done through a bus in parallel computing
Types of Parallel Computing
• Task Parallelism: different tasks run concurrently, much like a divide-and-conquer approach
• Data Parallelism: Single Instruction, Multiple Data (SIMD)
• Hybrid Parallelism: combining task and data parallelism
• Hardware Parallelism: multi-core CPUs, GPUs, and distributed systems.
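The first two types above can be sketched with the standard concurrent.futures module. This is an illustrative sketch, not from the lecture; the function names are made up for the example.

```python
from concurrent.futures import ProcessPoolExecutor

def preprocess(x):
    # Hypothetical task A
    return x + 1

def postprocess(x):
    # Hypothetical task B, independent of task A
    return x * 10

def data_parallel(values):
    # Data parallelism: the SAME operation is applied to many
    # data items at once (the SIMD idea, at process granularity)
    with ProcessPoolExecutor() as ex:
        return list(ex.map(preprocess, values))

def task_parallel(x):
    # Task parallelism: DIFFERENT operations run concurrently,
    # each as its own subtask
    with ProcessPoolExecutor() as ex:
        a = ex.submit(preprocess, x)
        b = ex.submit(postprocess, x)
        return a.result(), b.result()
```

Hybrid parallelism would combine the two, e.g. running several independent tasks, each of which internally maps one operation over its own data.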
Parallel Computing (Advantages)

• Save time and money:


Parallel Computing (Advantages)

• Solve larger/complex problems:
• Many problems are so large and complex that it is impractical or impossible to solve them on a single computer with its limited memory.
• Web search engines and database processing involve millions of transactions per second.
Parallel Computing (Challenges)

1. Synchronization issues
2. Load balancing
3. Data dependency
4. Communication overhead
Future Trends in Parallel Computing

1. Quantum computing
2. Neuromorphic computing
3. Edge computing
4. Exascale computing
Distributed Computing
Distributed Systems

• We define a distributed system as one in which hardware or software components located at networked computers communicate and coordinate their actions only by passing messages.
• This simple definition covers the entire range of systems in which networked computers can usefully be deployed.
• A distributed system is a collection of independent computers that appears to its users as a single coherent system.
• Communication is done by a message-passing scheme.
• Every computer in a distributed system has its own instruction manager / control unit (CU)
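The idea of coordinating "only by passing messages" can be sketched locally with a multiprocessing Pipe. This is a simplified illustration, not from the lecture: a real distributed system would exchange messages over network sockets between separate machines, but the pattern is the same.

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # The worker process has its own memory and control flow;
    # the ONLY way to coordinate with it is via messages
    msg = conn.recv()        # wait for a request message
    conn.send(msg.upper())   # send back a reply message
    conn.close()

def request(text):
    # Start an independent process and talk to it purely
    # through send/recv over the channel
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send(text)        # message out
    reply = parent.recv()    # message in
    p.join()
    return reply
```

Note there is no shared variable between the two processes; all coordination happens through the two messages.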
Distributed Systems

There are two types of architecture in distributed systems:
1. General purpose
Including PCs, laptops, etc.
2. Special purpose
Mainframe computers or supercomputers
Differences (Parallel vs. Distributed)

Parallel computing:
1. Multiple operations are performed at a time
2. A single computer is required
3. Multiple processors perform multiple operations
4. May have shared or distributed memory
5. Communication is done through a bus
6. Improves system performance

Distributed computing:
1. System components are located at different locations
2. Uses multiple computers
3. Multiple computers perform multiple operations
4. Only has distributed memory
5. Communication is done through message passing
6. Improves scalability, fault tolerance, and resource-sharing capabilities.
Scalability

• Property of a system to handle a growing amount of work by adding resources to the system
• A system is described as scalable if it remains effective when there is a significant increase in the number of resources and the number of users.
Scalability

1. Physical scalability / load scalability: the system can be scaled with respect to its size, meaning that we can easily add or remove users and resources
2. Administrative scalability: an increasing number of organizations or users can access the system.
3. Functional scalability: the system can be enhanced by adding new functionality without disturbing existing functionality.
Scalability

4. Geographic scalability: maintain effectiveness during expansion from a local area to a larger region
5. Generation scalability: the ability to scale by adopting new generations of components
6. Heterogeneous scalability: the ability to adopt components from different vendors
Horizontal (scale out) & vertical (scale up) scaling
Scaling horizontally (out/in): adding or removing nodes from a system
Example: scaling out (to increase capacity) from one web server to three.
This requires improvements in maintenance and management.
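The one-web-server-to-three scale-out above can be sketched with a simple round-robin dispatcher. This is illustrative only; the server names are made up for the example.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Dispatches each incoming request to the next server in turn,
    so nodes added by scaling out immediately share the load."""

    def __init__(self, servers):
        self._servers = cycle(servers)

    def route(self):
        # Pick the next server in rotation
        return next(self._servers)

# Scaled out from one web server to three (hypothetical names)
lb = RoundRobinBalancer(["web1", "web2", "web3"])
first_six = [lb.route() for _ in range(6)]
# Requests cycle evenly: web1, web2, web3, web1, web2, web3
```

Scaling in is the reverse: removing a node from the rotation reduces capacity without changing any individual machine.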
Horizontal (scale out) & vertical (scale up) scaling
Scaling vertically (up/down): adding or removing resources on a single node, i.e., adding CPUs or memory to a single computer.
This needs more sophisticated management and programming to allocate resources and handle issues.
Scalability: Issues/Challenges
Scalability Planning: Predicting and planning for
scalability needs can be difficult. Over-provisioning
resources can lead to unnecessary costs, while
under-provisioning can result in performance
bottlenecks during peak demand. Companies like
Amazon and Google use predictive analytics to
anticipate scalability needs based on historical
data.
Scalability: Issues/Challenges
Heterogeneity: hardware devices, operating systems, and programming languages must communicate with each other, so some standards need to be agreed upon and adopted (middleware); otherwise, this may lead to compatibility and interoperability challenges
Transparency: the system should hide its distributed nature from users, i.e., access transparency, location transparency, migration, replication, concurrency, and failure transparency
Scalability: Issues/Challenges

Controlling the cost of physical resources: as the demand for resources grows, it should be possible to extend the system at reasonable cost to meet it (Return on Investment)
It must be possible to add server computers to avoid the performance bottleneck that would arise if a single file server had to handle all file access requests
For example, if 20 users are supported by one server, then two servers should be able to support 40 users; this is not necessarily easy to achieve in practice.
Scalability: Issues/Challenges

Fault tolerance: a distributed system must be resilient to hardware failures, network issues, and bugs, which is crucial
e.g., a financial trading platform must continue functioning without interruption, even if a server fails
Scalability: Issues/Challenges

Controlling the performance loss:
there is a risk of performance degradation due to increased network traffic; algorithms that support scaling up/out should be used to prevent performance loss.
