Tutorial No 1 (Solved)


King Saud University

College of Computer and Information Sciences


Department of Computer Science
CSC453 – Parallel Processing – Tutorial No 1 – Third Quarter 2022/23

Question 1
1. Give the definition of parallel computing and parallel programming.
Parallel Computing: a technique to accelerate computations in which many calculations are
carried out simultaneously.
Parallel Programming: consists of decomposing a programming problem into tasks, deploying
the tasks on multiple processors so that they run simultaneously, and coordinating the work
and communication of those processors.
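To make the three steps concrete, here is a minimal sketch in C with POSIX threads (the array, the two-way split, and all names are illustrative, not from the tutorial): the summation problem is decomposed into two tasks, the tasks are deployed on two threads that run simultaneously, and the main thread coordinates the work by joining the threads and combining their partial results.

#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

/* Decomposition: each task sums one half of the array. */
struct task { int lo, hi; long sum; };

static void *partial_sum(void *arg) {
    struct task *t = arg;
    t->sum = 0;
    for (int i = t->lo; i < t->hi; i++)
        t->sum += data[i];
    return NULL;
}

int main(void) {
    struct task t1 = {0, N / 2, 0}, t2 = {N / 2, N, 0};
    pthread_t a, b;

    /* Deployment: run the two tasks on two threads, simultaneously. */
    pthread_create(&a, NULL, partial_sum, &t1);
    pthread_create(&b, NULL, partial_sum, &t2);

    /* Coordination: wait for both tasks and combine their results. */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("total = %ld\n", t1.sum + t2.sum);  /* prints total = 36 */
    return 0;
}

Compile with gcc -pthread; the joins are the coordination step, without which the main thread could read the partial sums before the tasks finish.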

2. Enumerate and give a brief description of the main opportunities of parallelism.


• Instruction level parallelism
  Parallelism hidden in computer programs, exposed and exploited by compilers and by the
  processor hardware.

• Single computer level
  Multi-core computers: chip multi-processors (dual-core, quad-core).
  Multi-processor computers: symmetric multi-processors, super-computers.

• Multiple computers level
  Collections of computers: clusters, servers, grid computing.
  Clusters: fixed, built at compile time, do not change at run time.
  Grid: not fixed, unknown at compile time, may change at run time.

3. Use an example to explain how Instruction Level Parallelism works.

We translate the expression into a tree whose leaves are operands (numbers) and whose other
nodes are operators. The parallelism consists of executing nodes of the same level
simultaneously, bottom-up. For example, in (a+b) * (c+d) the two additions sit at the same
level and are independent of each other, so they can be computed simultaneously; the
multiplication then combines their results.
Separation of instructions and data: instructions and memory references execute in parallel
without interfering.
Instruction execution is pipelined: processors initiate more than one instruction at a time.
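As a small sketch in C (variable names are illustrative), the dependence structure of (a+b) * (c+d) shows why the hardware can overlap the two additions: neither needs the other's result, so a pipelined, superscalar processor can issue both in the same cycle, while the multiplication must wait for both.

int ilp_example(int a, int b, int c, int d) {
    int t1 = a + b;   /* level 1 of the tree                        */
    int t2 = c + d;   /* level 1, independent of t1: the processor  */
                      /* can execute both additions simultaneously  */
    return t1 * t2;   /* level 2: must wait for both t1 and t2      */
}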
4. Enumerate and give a brief description of the different types of parallel processing.
Task parallelism: partition the various tasks carried out in solving the problem among the cores.
Data parallelism: partition the data used in solving the problem among the cores. Each core
carries out similar operations on its part of the data.
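The two types can be contrasted in a short C sketch using OpenMP (an assumption of this example, not something the tutorial prescribes; compile with gcc -fopenmp): the parallel for partitions the data among the cores, which all apply the same operation, while the sections partition two different tasks among the cores.

#include <stdio.h>
#include <omp.h>

#define N 1000000
static double x[N];

int main(void) {
    /* Data parallelism: the iterations are partitioned among the
       cores; every core performs the same operation on its own
       part of the array. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        x[i] = i * 0.5;

    double sum = 0.0, max = x[0];

    /* Task parallelism: two different tasks (summing and finding
       the maximum) are carried out by different cores. */
    #pragma omp parallel sections
    {
        #pragma omp section
        { for (int i = 0; i < N; i++) sum += x[i]; }

        #pragma omp section
        { for (int i = 1; i < N; i++) if (x[i] > max) max = x[i]; }
    }

    printf("sum = %f, max = %f\n", sum, max);
    return 0;
}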

5. What are the main differences between distributed and parallel computing?

Aspect                         Distributed                  Parallel
Objectives                     • Increase reliability       • Increase speedup
                               • Increase availability      • Decrease latency
                                                            • Increase bandwidth
                                                            • Increase throughput
Assumptions                    Not reliable                 Reliable
Interaction among processors   Infrequent                   Frequent
Workload                       • Heavy                      • Low overhead
                               • Coarse grained             • Fine grained

Aspects of Parallel Computing:

1- Parallel computer architectures

2- Algorithms and applications:
   - Reasoning about performance
   - Designing parallel algorithms

3- Parallel programming:
   - Paradigms: message passing, shared memory, multi-threading
   - Programming models: SPMD, divide and conquer, task farming, data flow (see the SPMD
     sketch after this list)
   - Programming languages
   - Frameworks
   - Dedicated environments
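As one concrete pairing of a paradigm and a model, here is a minimal SPMD sketch in C using message passing with MPI (assuming an MPI installation; compile with mpicc and run with at least two processes, e.g. mpirun -np 2): every process executes the same program and selects its behaviour from its rank.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I?  */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes?  */

    /* SPMD: a Single Program, but each process follows its own
       path through it, selected by its rank (Multiple Data). */
    if (rank == 0) {
        int msg = 42;
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("process 0 of %d sent %d\n", size, msg);
    } else if (rank == 1) {
        int msg;
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("process 1 received %d\n", msg);
    }

    MPI_Finalize();
    return 0;
}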
