Lecture 10 - Parallel and Distributed Computing CSC 4106


Parallel and Distributed Computing
Lecture 10

CSC 4106
Process-centric

 In the process-centric approach, processes are consciously used to obtain results.
 The process-centric approach uses the data flow within a process as a common thread to string the various data entities together.
 In this way, no common data format is required.
 There are four dominant types of approaches:
 direct data translation
 two-way data conversion
 dual-kernel solutions
 service-oriented solutions
Direct Data Translation

 Direct data translators provide a direct solution that translates the data stored in a product database straight from one CAD system's format to another, usually in a single step. A neutral database usually exists within a direct data translator.
Two Way Data Conversion

 Data conversion is the process of translating data from one format to another while retaining its viability and quality. The process involves extracting data from a source, such as a database, file, or web service, transforming it, and loading it into the required destination.
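The extract-transform-load flow described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the lecture; the file formats (CSV and JSON) and the field names are assumptions chosen for the example:

```python
# Two-way data conversion sketched as extract-transform-load (ETL).
# The formats (CSV <-> JSON) and field names are illustrative assumptions.
import csv
import io
import json

def csv_to_json(csv_text: str) -> str:
    """Extract rows from CSV text, transform them, and load them as JSON."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))  # extract
    for row in rows:                                    # transform: normalise a field
        row["name"] = row["name"].strip().title()
    return json.dumps(rows)                             # load (serialise)

def json_to_csv(json_text: str) -> str:
    """The reverse direction: the conversion is two-way, so data round-trips."""
    rows = json.loads(json_text)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

source = "name,qty\nwidget,3\ngear,5\n"
as_json = csv_to_json(source)       # JSON list of row objects
round_trip = json_to_csv(as_json)   # back to CSV without losing the data
```

Because both directions preserve the underlying records, converting to JSON and back yields the same data, which is the "retaining its viability and quality" requirement in practice.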
Dual Kernel Solution

 Since the kernel is in control of the hardware, two kernels cannot run side by side without an abstraction layer between them, e.g. virtual machines.
 The kernel is the essential center of a computer operating system (OS).
It is the core that provides basic services for all other parts of the OS. It
is the main layer between the OS and hardware, and it helps
with process and memory management, file systems, device
control and networking.
Service-oriented solutions
Shared Memory – Distributed Memory

 Shared Memory
 The programmer's task is to specify the activities of a set of processes that
communicate by reading and writing shared memory.
 Advantage: the programmer need not be concerned with data distribution issues.
 Disadvantage: performant implementations may be difficult on computers that
lack hardware support for shared memory, and race conditions tend to arise more
easily.
 Distributed memory
 Processes have only local memory and must use some other mechanism to
exchange data and collect results, e.g. message passing or remote procedure calls.
 Advantage: programmers have explicit control over data distribution and
communication.
Shared / distributed memory
Scalability and Performance Studies

 Scalability – the ability to handle an increased workload.
 Scalability also refers to dynamically adjusting computing performance by
changing the available computing resources.
 Performance – a measure of how effectively the system delivers results,
e.g. in terms of speed or throughput.
Scalability and Performance in
Parallel Computing
 A parallel architecture is said to be scalable if it can be expanded or
reduced into a larger or smaller system.
 Ideally, this yields a corresponding linear increase or decrease in its performance.
 In other words, scalability reflects the system's ability to
efficiently utilize increased processing resources.
 Parallel architectures have limited scalability.
 Parallel architectures are preferred in settings requiring faster speed and
better performance.
Scalability and Performance in
Distributed Computing
 A distributed architecture is said to be highly scalable, as there is no hard
limit to expanding or contracting the system.
 It is generally preferred in settings requiring high scalability.
