
Parallel and Distributed Computing
Lecture – Introduction
Background – Serial Computing
• Traditionally, computer software was written for serial computing; standard computing is also known as "serial computing".
• To solve a problem, an algorithm divides the problem into a series of smaller instructions.
• These discrete instructions are then executed on the Central Processing Unit of a computer one by one.
• Only after one instruction finishes does the next one start.
Background – Serial Computing
• A real-life example: people standing in a queue waiting for a movie ticket, with only one cashier.
• The cashier issues tickets to the people one by one. The complexity of the situation increases when there are two queues and still only one cashier.
• In short, serial computing works as follows (see the sketch after this list):
1. A problem statement is broken into discrete instructions.
2. The instructions are executed one by one.
3. Only one instruction is executed at any moment in time.
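A minimal serial sketch in Python (illustrative, not from the original slides) showing these three points in action:

    # Serial computing: one instruction stream, executed strictly in order.
    def sum_of_squares(n):
        total = 0
        for i in range(n):    # instructions execute one by one
            total += i * i    # only one operation is in flight at any moment
        return total

    print(sum_of_squares(1_000_000))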
Background – Serial Computing
• Look at point 3. Executing only one instruction at any moment was a serious problem for the computing industry: it wasted hardware resources, since only one part of the hardware was active for a particular instruction at a time.
• As problem statements grew heavier and bulkier, so did their execution time. Examples of such single-core processors are the Pentium III and Pentium 4.
• Now let's come back to our real-life problem. Complexity clearly decreases when there are two queues and two cashiers issuing tickets to two people simultaneously. This is an example of parallel computing.
Parallel Computer
• Virtually all stand-alone computers today are parallel from a hardware perspective:
- Multiple functional units (L1 cache, L2 cache, branch, prefetch, decode, floating-point, graphics processing (GPU), integer, etc.)
- Multiple execution units/cores
- Multiple hardware threads
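A quick way to observe this hardware parallelism on your own machine (a minimal Python sketch, assuming a standard CPython install):

    import os

    # Number of logical CPUs the operating system exposes
    # (physical cores multiplied by hardware threads per core).
    print("Logical CPUs:", os.cpu_count())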
Parallel Computer
• Networks connect multiple stand-alone computers (nodes) to make larger parallel computer clusters.
Parallel Computing
• A kind of computing architecture in which a large problem is broken into independent, smaller, usually similar parts that can be processed simultaneously.
• The work is done by multiple CPUs communicating via shared memory, and the results are combined upon completion.
• It helps in performing large computations, as it divides the large problem among more than one processor.
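A minimal sketch of this idea in Python, using the standard-library multiprocessing module (the function and variable names are our own, for illustration only):

    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        n, parts = 10_000_000, 4
        step = n // parts
        # Break the large problem into smaller, independent, similar parts...
        chunks = [(i * step, (i + 1) * step) for i in range(parts)]
        chunks[-1] = (chunks[-1][0], n)   # make sure the last chunk reaches n
        with Pool(parts) as pool:
            # ...process them simultaneously, then combine the results.
            print(sum(pool.map(partial_sum, chunks)))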
Parallel Computing
• Helps in faster application processing and task resolution by increasing the available computation power of systems.
• Most supercomputers operate on parallel computing principles.
• Parallel processing is commonly used in operational scenarios that require massive processing power or computation.
Parallel Computing
• Typically, this infrastructure is housed in server racks where multiple processors are installed; an application server distributes computational requests into small chunks, which are then processed simultaneously on each server.
• The earliest computer software was written for serial computation, executing a single instruction at a time; parallel computing is different in that several processors execute an application or computation at the same time.
Parallel Computing – Why?
• The Real World is Massively Complex
- In the natural world, many complex, interrelated events happen at the same time, yet within a temporal sequence.
Parallel Computing – Why?
• The Real World is Massively Complex
- Compared to serial computing, parallel computing is much better suited for modeling, simulating, and understanding complex, real-world phenomena.
- For example, imagine modeling such simultaneous phenomena serially.
Parallel Computing – Why?
• Save Time/Money
- In theory, throwing more resources at a task will shorten its time to completion, with potential cost savings.
- Parallel computers can be built from cheap, commodity components.
Parallel Computing – Why?
• Solve Larger/Complex Problems
- Many problems are so large and/or complex that it is impractical or impossible to solve them using a serial program, especially given limited computer memory.
Parallel Computing – Why?
• Solve Larger/Complex Problems
- Example: "Grand Challenge Problems" (en.wikipedia.org/wiki/Grand_Challenge) requiring petaflops and petabytes of computing resources.
- Example: web search engines/databases processing millions of transactions every second.

Petaflops = 10^15 floating-point operations per second
Petabytes = 10^15 bytes
Parallel Computing – Why?
• Provide Concurrency
- A single compute resource can only do one thing at a time; multiple compute resources can do many things simultaneously.
- Example: Collaborative Networks provide a global venue where people from around the world can meet and conduct work "virtually".
Parallel Computing – Why?
• Take Advantage of Non-local Resources
- Use compute resources on a wide area network, or even the Internet, when local compute resources are scarce or insufficient.
- Example: SETI@home (setiathome.berkeley.edu) has over 1.7 million users in nearly every country in the world (May 2018).
Parallel Computing – Why?
• Make Better Use of Underlying Parallel Hardware
- Modern computers, even laptops, are parallel in architecture, with multiple processors/cores.
- Parallel software is specifically intended for parallel hardware with multiple cores, threads, etc.
- In most cases, serial programs running on modern computers "waste" potential computing power (see the timing sketch below).
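To make that "waste" concrete, a hedged Python sketch that times the same work done serially and in parallel (the actual speedup varies by machine and workload):

    import time
    from multiprocessing import Pool

    def work(n):
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000_000] * 8

        t0 = time.perf_counter()
        serial = [work(n) for n in jobs]      # one core busy, the rest idle
        t1 = time.perf_counter()

        with Pool() as pool:                  # defaults to os.cpu_count() workers
            parallel = pool.map(work, jobs)   # all cores busy
        t2 = time.perf_counter()

        assert serial == parallel
        print(f"serial: {t1 - t0:.2f}s   parallel: {t2 - t1:.2f}s")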
Parallel Computing – Who is using it?
• Science and Engineering
- Historically, parallel computing has been considered "the high end of computing", and has been used to model difficult problems in many areas of science and engineering:
- Atmosphere, Earth, Environment
- Physics: applied, nuclear, particle, condensed matter, high pressure, fusion, photonics
- Bioscience, Biotechnology, Genetics
- Chemistry, Molecular Sciences
Parallel Computing – Who is using it?
• Science and Engineering
- Geology, Seismology
- Mechanical Engineering: from prosthetics to spacecraft
- Electrical Engineering, Circuit Design, Microelectronics
- Computer Science, Mathematics
- Defense, Weapons
Parallel Computing – Who is using it?
• Industrial and Commercial
- Today, commercial applications provide an equal or greater driving force in the development of faster computers. These applications require the processing of large amounts of data in sophisticated ways. For example:
- "Big Data", data mining
- Artificial Intelligence (AI)
- Oil exploration
Parallel Computing – Who is using it?
• Industrial and Commercial
- Web search engines, web-based business services
- Medical imaging and diagnosis
- Pharmaceutical design
- Financial and economic modeling
- Management of national and multi-national corporations
- Advanced graphics and virtual reality, particularly in the entertainment industry
- Networked video and multimedia technologies
- Collaborative work environments
Parallel Computing – Who is using it?
• Global Applications
- Parallel computing is now being used extensively around the world, in a wide variety of applications.
Parallel Computing – Advantages
• It saves time and money, as many resources working together reduce execution time and cut potential costs.
• It can solve larger problems that are impractical for serial computing.
• It can take advantage of non-local resources when local resources are finite.
• It reduces complexity by dividing large problems into smaller, manageable parts.
• Serial computing "wastes" potential computing power; parallel computing makes better use of the hardware.
Parallel Computing – Parallel Computer Architectures
• Shared memory: all processors can access the same memory.
• Uniform Memory Access (UMA):
- Identical processors
- Equal access and equal access times to memory
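A small Python sketch of the shared-memory programming model: every thread reads and writes the same objects directly, with no explicit communication (the names are illustrative):

    import threading

    results = [0] * 4            # one memory region, visible to every thread

    def worker(idx):
        # Each thread reads and writes the shared list directly;
        # no messages are needed.
        results[idx] = idx * idx

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results)               # [0, 1, 4, 9]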
Parallel Computing – Non-Uniform Memory Access (NUMA)
• Not all processors have equal access to all memories.
• Memory access across the link is slower.
• Advantages: a user-friendly programming perspective to memory; fast and uniform data sharing due to the proximity of memory to CPUs.
• Disadvantages: lack of scalability between memory and CPUs; the programmer is responsible for ensuring "correct" access to global memory (see the lock sketch below); expense.
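What "correct access to global memory" means in practice: conflicting updates must be synchronized by the programmer. A minimal Python sketch using a lock around a shared counter (illustrative; "+=" on a shared variable is not atomic):

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(times):
        global counter
        for _ in range(times):
            with lock:           # without the lock, concurrent "+=" updates can be lost
                counter += 1

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)               # always 400000 when the lock is held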
Parallel Computing – Distributed Memory
• Distributed memory systems require a communication network to connect inter-processor memory.
• Advantages: memory is scalable with the number of processors; no memory interference or overhead from trying to keep cache coherency; cost-effective.
• Disadvantages: the programmer is responsible for data communication between processors (a message-passing sketch follows); it is difficult to map existing data structures to this memory organization.
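With no shared address space, all data movement is explicit. A minimal Python sketch of the message-passing style using multiprocessing queues (the process and queue names are our own):

    from multiprocessing import Process, Queue

    def worker(inbox, outbox):
        lo, hi = inbox.get()                 # receive work as an explicit message
        outbox.put(sum(range(lo, hi)))       # send the result back explicitly

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        p = Process(target=worker, args=(inbox, outbox))
        p.start()
        inbox.put((0, 1_000_000))            # the programmer moves every byte
        print(outbox.get())
        p.join()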
Parallel Computing – Hybrid Distributed-Shared Memory
• Generally used for the largest and fastest computers today.
• Has a mixture of the previously mentioned advantages and disadvantages.
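A toy Python sketch of the hybrid pattern (real systems typically combine MPI across nodes with OpenMP or threads within each node; here processes stand in for nodes):

    from concurrent.futures import ThreadPoolExecutor
    from multiprocessing import Pool

    def node_work(chunk):
        # Within one process (one "node"), threads share memory.
        with ThreadPoolExecutor(max_workers=2) as ex:
            return sum(ex.map(lambda x: x * x, chunk))

    if __name__ == "__main__":
        data = list(range(1_000))
        chunks = [data[i::4] for i in range(4)]   # distribute data across processes
        with Pool(4) as pool:                     # processes do not share memory
            print(sum(pool.map(node_work, chunks)))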
Parallel Computing – Limitations
• It involves issues such as communication and synchronization between multiple sub-tasks and processes, which are difficult to achieve.
• Algorithms must be structured so that they can be handled by a parallel mechanism.
• Programs must have low coupling and high cohesion, but it is difficult to create such programs.
• Writing a good parallelism-based program requires more technically skilled and expert programmers.
