


Int. J. Advanced Networking and Applications, Volume 07, Issue 01, Pages 2630-2635 (2015), ISSN: 0975-0290

Distributed Computing: An Overview


Md. Firoj Ali
Department of Computer Science, Aligarh Muslim University, Aligarh-02
Email: [email protected]
Rafiqul Zaman Khan
Department of Computer Science, Aligarh Muslim University, Aligarh-02
Email: [email protected]
ABSTRACT
Decreasing hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. Distributed computing systems offer the potential for improved performance and resource sharing. This paper presents an overview of distributed computing: the differences between parallel and distributed computing, the terminology used in distributed computing, task allocation and performance parameters in distributed computing systems, parallel distributed algorithm models, and the advantages and scope of distributed computing.

Keywords – Distributed computing, execution time, heterogeneity, shared memory, throughput.

Date of Submission: April 18, 2015    Date of Acceptance: June 08, 2015
1. Introduction

Distributed computing refers to two or more computers networked together and sharing the same computing work. The objective of distributed computing is to share a job among multiple computers. A distributed network is mainly heterogeneous in nature, in the sense that the processing nodes, network topology, communication medium, operating system and so on may differ across networks that are widely distributed over the globe [1, 2]. Presently, several hundred computers may be connected to build a distributed computing system [3, 4, 5, 6, 7, 8]. To obtain the maximum efficiency of such a system, the overall workload has to be distributed among the nodes of the network, which is why load balancing became an important issue for distributed-memory multiprocessor computing systems [3, 9]. In a network there will be some fast computing nodes and some slow ones. If processing speed and communication speed (bandwidth) are not taken into account, the performance of the overall system will be restricted by the slowest node in the network [2, 3, 4, 5, 6, 7]. Load balancing strategies therefore balance the load across the nodes, preventing some nodes from sitting idle while others are overwhelmed, and they remove the idleness of any node at run time.

A distributed system can be characterized as a group of mostly autonomous nodes communicating over a communication network and having the following features [10]:

1.1 No Common Physical Clock
This feature introduces the element of "distribution" into a system and is responsible for the inherent asynchrony among the processors. In a distributed network the nodes do not share a common physical clock [10].

1.2 No Shared Memory
This is an important aspect that requires message passing for communication among the nodes in the network. There is no common physical clock in this memory architecture, but it is still possible to provide the abstraction of a common address space via distributed shared memory [10, 11].

1.3 Geographical Separation
In a distributed computing system the processors are geographically distributed, even over the globe. However, it is not essential for the processors to be connected by a wide-area network (WAN): a network/cluster of workstations (NOW/COW) on a LAN can also be considered a small distributed system [10, 12]. Thanks to the availability of low-cost, high-speed off-the-shelf processors, the NOW configuration has become popular; the Google search engine, for example, is built on the NOW architecture.

1.4 Autonomy and Heterogeneity
The processors are autonomous in nature: they have independent memories and different configurations, and they are usually not part of a dedicated system, but cooperate with one another by offering services or by solving a problem together [10, 12].

2. Differences between Parallel and Distributed Computing

There are many similarities between parallel and distributed computing, but there are also differences that matter greatly with respect to computation, cost and time. Parallel computing subdivides an application into tasks small enough to be executed concurrently, whereas distributed computing divides an application into tasks that can be executed at different sites using the networks connecting them. In parallel computing, multiple processing elements exist within one machine and every processing element is dedicated to the overall system at the same time. In distributed computing, by contrast, a group of separate nodes, possibly different in nature, each contributes processing cycles to the overall system over a network.

Parallel computing needs expensive parallel hardware to coordinate many processors within the same machine, while distributed computing uses already available individual machines, which are cheap enough in today's market.

3. Terminologies Used in Distributed Computing

Some basic terms and ideas used in distributed computing are defined first, to help understand the concept of distributed computing.

3.1 Job
A job is the overall computing entity that needs to be executed to solve the problem at hand [11]. There are different types of jobs, depending on the nature of the computation or of the algorithm itself. Some jobs are completely parallel in nature and some are only partially parallel. Completely parallel jobs are known as embarrassingly parallel problems; in such problems communication among the different entities is minimal, whereas in partially parallel problems communication becomes high because the different processes running on different nodes must interact to finish the job.

3.2 Granularity
The size of the tasks is expressed as the granularity of parallelism. The grain size of a parallel instruction is a measure of how much work each processor does compared to an elementary instruction execution time [11]; it is equal to the number of serial instructions executed within a task by one processor. There are mainly three grain sizes: fine, medium and coarse.

3.3 Node
A node is an entity that is capable of executing computing tasks. In a traditional parallel system this refers mostly to a physical processor unit within the computer system, while in distributed computing a whole computer is generally considered a computing node in the network [11]. In reality the picture has changed, since a computer may have more than one core (dual-core or multi-core processors). The terms node and processor are used interchangeably in this paper.

3.4 Task
A task is a logically discrete part of the overall processing job. The tasks are distributed over different processors or nodes connected through a network, with the aim of completing the job with minimum task idle time. In the literature, tasks are sometimes referred to as jobs and vice versa [11].

3.5 Topology
The way the nodes of a network are arranged, or the geometrical structure of the network, is known as its topology. Network topology is an important element of distributed computing: the topology determines how the nodes contribute their computational power towards the tasks [11, 15].

3.6 Overheads
Overheads measure the frequency of communication among processors during execution. During execution, processors communicate with each other to complete the job as early as possible, so communication overheads inevitably arise. There are three main kinds of overhead: bandwidth, latency and response time [11]. The first two are mostly determined by the network underlying the distributed computer system, and the last is the administrative time taken by the system to respond.

3.7 Bandwidth
Bandwidth measures the amount of data that can be transferred over a communication channel in a given period of time [11]. It always plays a critical role in system efficiency, and it is especially crucial for fine-grained problems, where more communication takes place. Bandwidth is often far more critical than the speed of the processing nodes: a slow data rate will restrict the effective speed of the processors and ultimately cause poor efficiency.

3.8 Latency
Latency refers to the interval between an action being initiated and the action actually having some effect [11], and it takes different meanings in different situations. For the underlying network, latency is the time between data being sent and the data actually being received, called network latency. For a task, latency is the time between the task being submitted to a node and the node actually beginning its execution, called response time. Network latency is closely related to the bandwidth of the underlying network, and both are critical to the performance of a distributed computing system. Response time and network latency together are often called parallel overhead.
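As a rough illustration (a first-order model assumed here, not given in the paper), the time to deliver a message can be estimated as the latency plus the message size divided by the bandwidth. For example, sending a 10 MB (80 Mbit) message over a link with 100 ms latency and 100 Mbit/s bandwidth takes roughly 0.1 s + 80/100 s = 0.9 s; for many small messages the fixed latency term dominates, while for bulk transfers the bandwidth term dominates.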

4. Performance Parameters in Distributed Computing

Many performance parameters are used for measuring parallel computing performance. Some of them are listed below.

4.1 Execution Time
Execution time is defined as the time taken to complete an application, from its submission to a machine until it finishes. When the application is submitted to a serial computer the execution time is called the serial execution time, denoted by Ts; when the application is submitted to a parallel computer it is called the parallel execution time, denoted by Tp.

4.2 Throughput
Throughput is defined as the number of jobs completed per unit time [11]. Throughput depends on the size of the jobs: it may be one process per hour for large processes and twenty processes per second for small ones. It is fully dependent on the underlying architecture and on the size of the processes running on that architecture.

4.3 Speed Up
The speed up of a parallel algorithm is the ratio of the execution time when the algorithm is executed sequentially to the execution time when the same algorithm is executed by more than one processor in parallel. Speed up [11, 14] can be expressed mathematically as Sp = Ts/Tp, where Ts is the sequential execution time and Tp is the parallel execution time. In the ideal situation the speed up equals the number of processors working in parallel, but it is always less than this ideal because other important factors in a cluster, such as communication delay and memory access delay, reduce it.

4.4 Efficiency
Efficiency is a measure of the contribution made by the processors to an algorithm running in parallel. Efficiency [11, 14] can be measured as Ep = Sp/p (0 < Ep <= 1), where Sp is the speed up and p is the number of processors working in parallel. A value of Ep close to 1 indicates an efficient algorithm.
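As a worked illustration (the numbers here are assumed for the example only): if Ts = 100 s and the same algorithm finishes in Tp = 16 s on p = 8 processors, then Sp = Ts/Tp = 100/16 = 6.25 and Ep = Sp/p = 6.25/8 ≈ 0.78, i.e. each processor is usefully busy for roughly 78% of the parallel run.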
4.5 System Utilization
System utilization is a very important parameter: it measures the involvement of the resources present in a system, and it may fluctuate between zero and 100 percent [11, 14].

4.6 Turnaround Time
Turnaround time is defined as the time elapsed from the submission of a job to its completion. It is the sum of the time needed to get into memory, the time spent waiting in the ready queue, the time executing on the processor and the time spent on input/output operations [11, 14].

4.7 Waiting Time
Waiting time is the total time spent by a process waiting in the ready queue to obtain a resource; in other words, it is the duration a process waits to get the resource's attention. Waiting time depends on parameters similar to those of turnaround time [11, 14, 15].

4.8 Response Time
The time between the submission of a request and the first response to that request is known as the response time. It can be limited by the output devices of the computing system [11, 14, 15].

4.9 Overheads
The overheads incurred by a parallel program are expressed by a single function known as the overhead function [11, 14, 15]. We denote the overhead function of a parallel system by the symbol To. The total time spent in solving a problem, summed over all processing elements, is pTp, of which Ts is useful work free from overhead. Therefore the overhead function To is given by (1):

To = pTp - Ts    (1)
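Continuing the assumed numbers from the speed-up example above: with Ts = 100 s, p = 8 and Tp = 16 s, the overhead function is To = pTp - Ts = 8 x 16 - 100 = 28 s, i.e. 28 processor-seconds go into communication, idling and other work that the serial program does not perform.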
4.10 Reliability
Reliability ensures operation without failure under specified conditions for a definite period of time [11, 14, 15].

5. Parallel Distributed Algorithm Models

This section presents parallel distributed algorithm models. An algorithm model is classically a way of forming a parallel algorithm by picking a proper decomposition and mapping technique and applying the appropriate strategy to minimize overheads [11, 14, 15].

5.1 The Data-Parallel Model
The data-parallel model, shown in Fig. 1 [11], is the simplest algorithm model. In this model the tasks are generally statically mapped onto computing elements, and each task performs similar operations on different data [16]. Data parallelism arises because the processors carry out similar operations, possibly in phases, on different data. Uniform partitioning and static mapping suffice for load balancing, since the processors all perform essentially the same work [11, 14, 15]. Data-parallel algorithms [17, 18] can follow either the shared-address-space or the message-passing paradigm; however, message passing offers better performance for a partitioned address-space memory structure. Overheads can be minimized in the data-parallel model by choosing a locality-preserving decomposition [17, 18]. The most attractive feature of data-parallel problems is that the degree of data parallelism grows with the size of the problem, so larger problems can be solved effectively by adding more processors.

[Figure 1: the data-parallel model]
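As a minimal illustrative sketch (not from the paper), the data-parallel model can be mimicked on a single multi-core machine with Python's multiprocessing module: the data is partitioned uniformly and every worker applies the same operation to its own chunk.

    from multiprocessing import Pool

    def process_chunk(chunk):
        # Every worker performs the same operation on different data.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        n_workers = 4
        # Uniform (static) partitioning of the data across the workers.
        size = len(data) // n_workers
        chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
        with Pool(n_workers) as pool:
            partial_sums = pool.map(process_chunk, chunks)  # static mapping
        print(sum(partial_sums))

The same structure carries over to a distributed setting, where each chunk would be shipped to a separate node instead of a local process.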

5.2 The Task Graph Model
The task-dependency graph is an important way of representing the computations of a parallel algorithm [11, 14, 15]. A task-dependency graph may be either trivial or nontrivial, and it is also used when mapping tasks onto processors. This model is useful for solving problems in which the volume of data associated with the tasks is large in comparison with the amount of computation associated with them. Generally, static mapping is used to optimize the cost of data movement among tasks, but even a decentralized dynamic mapping can use information about the structure of the task-dependency graph to minimize interaction overhead [11, 14, 15].
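A hypothetical sketch (task names invented for illustration) of driving execution from a task-dependency graph: each task runs only after all of its predecessors have completed, which is exactly the structural information a static or dynamic mapper would exploit.

    from graphlib import TopologicalSorter  # Python 3.9+

    # Each task is mapped to the set of tasks it depends on.
    dependencies = {
        "load": set(),
        "clean": {"load"},
        "feature_a": {"clean"},
        "feature_b": {"clean"},
        "merge": {"feature_a", "feature_b"},
    }

    def run(task):
        print("running", task)

    ts = TopologicalSorter(dependencies)
    for task in ts.static_order():  # one valid execution order of the task graph
        run(task)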

5.3 The Work Pool Model
In this model, tasks may be assigned to any processor by a dynamic mapping technique, for load balancing, in either a centralized or a decentralized fashion [11, 14, 15]. The model does not follow any pre-mapping scheme: the work may already be statically available before the computation starts or may be created dynamically, and whatever work is available or generated is added to a global (possibly distributed) work pool. When dynamic and decentralized mapping is used, a termination detection algorithm is needed to notify the processes that the entire work is complete, so that the processors can stop looking for more jobs [11, 14, 15].
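An illustrative work-pool sketch (assumed, not taken from the paper) using a shared queue: worker threads repeatedly pull whatever task is next available (dynamic mapping), and a sentinel value per worker plays the role of the termination notification.

    import queue
    import threading

    work_pool = queue.Queue()
    results = []
    STOP = object()  # sentinel used to signal termination to the workers

    def worker():
        while True:
            task = work_pool.get()       # dynamic mapping: take whatever is next
            if task is STOP:
                break
            results.append(task * task)  # placeholder computation

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for task in range(100):              # tasks may also be generated at run time
        work_pool.put(task)
    for _ in threads:
        work_pool.put(STOP)              # one stop marker per worker
    for t in threads:
        t.join()
    print(len(results))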

5.4 The Master-Slave Model
In this model, one node is specially designated as the master node and the other nodes are called worker or slave nodes [15, 19, 20]. The master node generates the work and distributes it to the worker nodes. The model has no fixed way of mapping: whatever work has been assigned to a worker has to be completed by it. The worker nodes perform the necessary computations and the master node collects the results. The master node may allocate tasks to the worker nodes based on prior information about them or on a random basis, the latter being the more common approach. If the master node takes a long time to generate work, the worker nodes can operate in phases, so that the next phase starts only after the previous phase completes [15, 19, 20]. The model resembles a hierarchical model in which the root nodes act as masters and the leaf nodes act as slaves. Both the shared-address-space and the message-passing paradigm are suitable for this model [15, 19, 20].

A large volume of communication overhead concentrated at the master node may crash the whole system [15, 19, 20]. It is therefore necessary to choose the granularity of the tasks so that computation dominates over communication.
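A minimal master-slave sketch (illustrative only; the task is assumed to be a pure function): the master process generates the work, farms it out to a pool of worker processes, and collects the results.

    from concurrent.futures import ProcessPoolExecutor

    def slave_task(n):
        # Work performed independently by a worker (slave) node.
        return n * n

    if __name__ == "__main__":
        work = list(range(32))                 # the master generates the work
        with ProcessPoolExecutor(max_workers=4) as workers:
            results = list(workers.map(slave_task, work))  # master collects results
        print(sum(results))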

5.5 The Pipeline or Producer-Consumer Model
In this model the data is passed through a pipeline that has several stages, and each stage (process) does some work on the data and passes it to the next stage. This concurrent execution on a data stream by different programs is called stream parallelism [15, 19, 20, 21]. The pipelines may take the form of linear or multidimensional arrays, trees or general graphs. A pipeline is a chain of producers and consumers, because each process generates results for the next process, and static mapping is generally used in this model. With too coarse a granularity the pipeline takes longer to fill, and the first process may take so long to pass data on that the next process has to wait; with too fine a granularity the overheads grow. This model therefore overlaps interaction with computation to reduce the overheads [11].
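A small producer-consumer sketch (illustrative only) of a two-stage pipeline: each stage reads from its input queue, does some work on the item, and passes it to the next stage, so different items are processed by different stages concurrently.

    import queue
    import threading

    stage1_in, stage2_in, done = queue.Queue(), queue.Queue(), queue.Queue()
    END = object()  # marks the end of the data stream

    def stage1():  # first stage: transform and pass on
        while (item := stage1_in.get()) is not END:
            stage2_in.put(item + 1)
        stage2_in.put(END)

    def stage2():  # second stage: consume the results of stage 1
        while (item := stage2_in.get()) is not END:
            done.put(item * 2)

    threads = [threading.Thread(target=stage1), threading.Thread(target=stage2)]
    for t in threads:
        t.start()
    for item in range(10):   # stream of data entering the pipeline
        stage1_in.put(item)
    stage1_in.put(END)
    for t in threads:
        t.join()
    print([done.get() for _ in range(done.qsize())])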
5.6 Hybrid Models
Sometimes two or more models are combined to form a hybrid model, shown in Fig. 2, in order to solve the problem at hand [11]. An algorithm design may often need features of more than one algorithm model; for example, a pipeline model may be combined with a task-dependency graph, in which the data passed through the pipeline is driven by the dependency graph [15, 19, 20].

[Figure 2: a hybrid model]

6. Advantages of Distributed Computing

The main advantages of distributed computing are the following:

6.1 Inherently Distributed Computations
Applications that are distributed over the globe, such as money transfers in banking or flight reservations, which involve reaching consensus among parties, are inherently distributed in nature [10, 11, 15].

6.2 Resource Sharing
Since replicating resources at all sites is neither cost-effective nor practical for improving performance, the resources are distributed across the system. It is equally impractical to place all the resources at a single site, as this can significantly degrade performance [10, 11, 15]. For quick access as well as higher reliability, distributed databases such as DB2 partition the data sets across a number of servers, with replication at a few sites [15].

6.3 Access to Geographically Remote Data and Resources
In many instances data cannot be replicated at every site because of its sheer size, and it may also be risky to keep vital data at every site [10, 11, 15]. For example, a banking system's data cannot be replicated everywhere because of its sensitivity, so it is instead stored on a central server that the branch offices access through remote log-in. Advances in mobile communication also allow the central server to be accessed remotely, which requires distributed protocols and middleware [15].

6.4 Enhanced Reliability
A distributed system provides enhanced reliability, since it has the inherent potential to replicate resources [10, 11, 15]; moreover, the distributed resources generally do not all crash or malfunction together. Reliability involves several aspects:

6.4.1 Availability
The resources are always available and can be accessed at any time.

6.4.2 Integrity
The resources or the data should always be in a correct state, even though the data or resources are accessed concurrently by multiple processors.

6.4.3 Fault-Tolerance
A distributed system is fault tolerant because it keeps working properly even when some of its resources stop working [11, 22].

6.5 Increased Performance/Cost Ratio
The performance/cost ratio is improved by resource sharing and by accessing geographically remote data and resources [10, 11, 15]. In fact, any job can be partitioned and distributed over a number of computers in a distributed system rather than allocating the whole job to a parallel machine.

6.6 Scalability
More nodes may be connected to the wide-area network without directly degrading communication performance [10, 11, 15].

6.7 Modularity and Incremental Expandability
Heterogeneous processors running the same middleware algorithms can simply be added to the system without altering its behaviour, and existing nodes can easily be replaced by other nodes [10, 11, 15].

7. Scope of Distributed Computing

Distributed computing has changed the landscape of computation and is involved in almost every field of it. The cost-benefit ratio of distributed computing is always higher than that of other dedicated forms of computing [11]. Distributed computing is heavily applied in fields such as engineering and design, scientific applications, commercial applications and applications within computer systems themselves.

In engineering and design, distributed computing is widely used for applications such as airfoils, internal combustion engines, high-speed circuits, and micro-electromechanical and nano-electromechanical systems. These applications mainly require optimization, and algorithms such as genetic programming and branch-and-bound for discrete optimization, and the simplex and interior point methods for linear optimization, are parallelized and run on distributed computing systems [10, 11, 15].

High performance computing is also heavily used in scientific applications, such as sequencing the human genome, examining biological sequences to develop new medicines and treatments for diseases, analysing extremely large datasets in bioinformatics and astrophysics, and understanding quantum phenomena and macromolecular structures in computational physics and chemistry [10, 11, 15].

Distributed computing is extensively used in commercial applications as well. Since these applications rely heavily on web and database servers, it is indispensable to optimize querying and to take quick decisions for better business processes. The huge volume of data and its geographically distributed nature require effective parallel and distributed algorithms for problems such as classification, time-series analysis, association rule mining and clustering [10, 11, 15].

Finally, since computer systems have become widespread in every field of computer science itself, parallel and distributed computing is embedded in a diverse range of computer applications, such as computer security analysis, network intrusion detection, cryptography analysis and computations in mobile ad-hoc networks [15].

8. Conclusion

This paper has focused on distributed computing. We studied the differences between parallel and distributed computing, the terminology used in distributed computing, task allocation and performance parameters in distributed computing systems, parallel distributed algorithm models, and the advantages and scope of distributed computing.

References

[1] A Chhabra, G Singh, S S Waraich, B Sidhu and G Kumar, Qualitative Parametric Comparison of Load Balancing Algorithms in Parallel and Distributed Computing Environment, World Academy of Science, Engineering and Technology, 2006, 39-42.
[2] D Z Gu, L Yang and L R Welch, A Predictive, Decentralized Load Balancing Approach, in: Proceedings of the 19th IEEE International Parallel and Distributed Processing Symposium, Denver, Colorado, April 2005, 04-08.
[3] M F Ali and R Z Khan, The Study on Load Balancing Strategies in Distributed Computing System, International Journal of Computer Science & Engineering Survey (IJCSES), Vol. 3, No. 2, April 2012.
[4] R Z Khan and M F Ali, An Efficient Diffusion Load Balancing Algorithm in Distributed System, I.J. Information Technology and Computer Science, Vol. 08, July 2014, 65-71.
[5] R Z Khan and M F Ali, An Efficient Local Hierarchical Load Balancing Algorithm (ELHLBA) in Distributed Computing, IJCSET, Vol. 3, Issue 11, November 2013, 427-430.

[6] R Z Khan and M F Ali, An Improved Local Hierarchical Load Balancing Algorithm (ILHLBA) in Distributed Computing, International Journal of Advance Research in Science and Engineering (IJARSE), Vol. No. 2, Issue No. 11, November 2013.
[7] M F Ali and R Z Khan, A New Distributed Load Balancing Algorithm, International Journal on Recent and Innovation Trends in Computing and Communication (IJRITCC), Vol. 2, Issue 9, September 2014, 2556-2559.
[8] D L Eager, E D Lazowska and J Zahorjan, A Comparison of Receiver Initiated and Sender Initiated Adaptive Load Sharing, Performance Evaluation, Vol. 6, 1986, 53-68.
[9] S P Dandamudi and K C M Lo, Hierarchical Load Sharing Policies for Distributed Systems, Technical Report TR-96-22, Proc. Int. Conf. Parallel and Distributed Computing Systems, 1996.
[10] A D Kshemkalyani and M Singhal, Distributed Computing: Principles, Algorithms and Systems, Cambridge University Press, 2008.
[11] B Barney, Introduction to Parallel Computing, Lawrence Livermore National Laboratory, https://computing.llnl.gov/tutorials/parallel_comp/, 2010.
[12] www.cs.uic.edu/~ajayk/chapter1.pdf
[13] M Nelson, Distributed Systems Topologies: Part 1, http://openp2p.com, 2001.
[14] I Ahmad, A Ghafoor and G C Fox, Hierarchical Scheduling of Dynamic Parallel Computations on Hypercube Multicomputers, Journal of Parallel and Distributed Computing, 20, 1994, 317-329.
[15] A Grama, A Gupta, G Karypis and V Kumar, Introduction to Parallel Computing, Addison Wesley, January 2003.
[16] C Kessler and J Keller, Models for Parallel Computing: Review and Perspectives, PARS-Mitteilungen 24, Dec. 2007, 13-29.
[17] P J Hatcher and M J Quinn, Data-Parallel Programming on MIMD Computers, MIT Press, Cambridge, MA, 1991.
[18] W D Hillis and G Steele, Data Parallel Algorithms, Communications of the ACM, Vol. 29, 1986.
[19] A Clematis and A Corana, Performance Analysis of Task-based Algorithms on Heterogeneous Systems with Message Passing, in Proceedings of Recent Advances in Parallel Virtual Machine and Message Passing Interface, 5th European PVM/MPI Users' Group Meeting, Sept. 1998.
[20] D Gelernter, M R Jourdenais and D Kaminsky, Piranha Scheduling: Strategies and Their Implementation, International Journal of Parallel Programming, Feb. 1995, 23(1), 5-33.
[21] S H Bokhari, Partitioning Problems in Parallel, Pipelined, and Distributed Computing, IEEE Transactions on Computers, 37(1), January 1988, 48-57.
[22] www.cse.iitk.ac.in/report-repository, 2004.

Dr. Rafiqul Zaman Khan: Dr. Rafiqul Zaman Khan is presently working as an Associate Professor in the Department of Computer Science at Aligarh Muslim University (A.M.U), Aligarh, India. He received his B.Sc. degree from M.J.P. Rohilkhand University, Bareilly, his M.Sc. and M.C.A. from A.M.U., and his PhD (Computer Science) from Jamia Hamdard University, New Delhi, India. He has 19 years of teaching experience at various reputed international and national universities, viz. King Fahad University of Petroleum & Minerals (KFUPM), K.S.A., Ittihad University, U.A.E., Pune University, Jamia Hamdard University and AMU, Aligarh. He worked as Head of the Department of Computer Science at Poona College, University of Pune, and as Chairman of the Department of Computer Science, AMU, Aligarh. His research interests include parallel and distributed computing, gesture recognition, expert systems and artificial intelligence.

Mr. Md Firoj Ali: Mr. Md Firoj Ali is presently working as an Assistant Engineer in WBSEDCL, India. He received his B.Sc. and MCA degrees from A.M.U. He has been awarded a Senior Research Fellowship by UGC, India, and has also cleared the National Eligibility Test conducted by UGC (2012) and the State Eligibility Test conducted by WBCSC (2013). His research interests include load balancing in distributed computing systems. He has published eleven research papers in international journals and conferences in the field of parallel and distributed computing.

