
Module 6 : DISTRIBUTED SCHEDULING

INTRODUCTION
A distributed system offers tremendous processing capacity, and good resource allocation schemes are needed to take full advantage of it. This is the job of the distributed scheduler. Because wide-area networks have high communication delays, distributed scheduling is more suitable for distributed systems based on local area networks.
MOTIVATION
A locally distributed system consists of a collection of autonomous computers connected by a local area communication network. Users submit tasks at their host computers for processing. The need for load distributing arises in such environments because of the random arrival of tasks and their random CPU service time requirements: several computers may be heavily loaded while others are idle or lightly loaded. Even in a homogeneous distributed system, system performance can potentially be improved by appropriately transferring load from heavily loaded computers to idle or lightly loaded ones.
ISSUES IN LOAD DISTRIBUTING
There are several central issues which are as follows:
1. Load: Resource queue lengths and particularly the CPU queue length are good indicators of load because
they correlate well with the task response time. Measuring the CPU queue length is fairly simple and carries
little overhead.
While the CPU queue length has been extensively used in previous studies as a load indicator, it has been reported that little correlation exists between CPU queue length and processor utilization, particularly in an interactive environment. For this reason, the designers of the V-System used CPU utilization as the indicator of the load at a site.
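The two load indicators discussed above can be sketched as follows. This is an illustrative comparison only: the Node structure and its fields are hypothetical stand-ins for real per-node statistics, not part of any particular system.

```python
from dataclasses import dataclass

@dataclass
class Node:
    cpu_queue_length: int   # tasks waiting for or using the CPU
    cpu_utilization: float  # fraction of time the CPU was busy, 0.0..1.0

def load_by_queue_length(node: Node) -> float:
    # Classic indicator: the CPU queue length itself. Cheap to measure,
    # and it correlates well with task response time.
    return float(node.cpu_queue_length)

def load_by_utilization(node: Node) -> float:
    # V-System style indicator: measured CPU utilization.
    return node.cpu_utilization

busy = Node(cpu_queue_length=5, cpu_utilization=0.95)
idle = Node(cpu_queue_length=0, cpu_utilization=0.05)
```

Either indicator ranks the busy node above the idle one here; they diverge mainly in interactive workloads, as noted above.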
2. Classification of Load Distributing Algorithms: The basic function of a load distributing algorithm is to transfer load (tasks) from heavily loaded computers to idle or lightly loaded computers. Load distributing algorithms can be broadly characterized as static, dynamic, or adaptive.
In static load distributing algorithms, decisions are hard-wired into the algorithm using a priori knowledge of the system.
Dynamic load distributing algorithms use system state information to make load distributing decisions, whereas static algorithms make no use of such information.
Adaptive load distributing algorithms are a special class of dynamic load distributing algorithms: they adapt their activities by dynamically changing the parameters of the algorithm to suit the changing system state. For example, a dynamic algorithm may continue to collect system state information irrespective of the system load, while an adaptive algorithm may discontinue that collection if the overall system load is high, to avoid imposing additional overhead on the system.
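The adaptive behaviour just described can be sketched as below. The class name, the threshold value, and the shape of the node state are invented for illustration; a real system would collect state over the network.

```python
class AdaptiveInfoCollector:
    """Suspends state collection when the overall system load is high."""

    def __init__(self, high_load_threshold: float = 0.9):
        self.high_load_threshold = high_load_threshold
        self.samples = []

    def maybe_collect(self, overall_load: float, node_state: dict) -> bool:
        # A purely dynamic algorithm would always collect; the adaptive
        # variant backs off under high load to avoid extra overhead.
        if overall_load >= self.high_load_threshold:
            return False
        self.samples.append(node_state)
        return True

collector = AdaptiveInfoCollector()
collector.maybe_collect(0.3, {"queue": 2})   # collected
collector.maybe_collect(0.95, {"queue": 7})  # skipped: load too high
```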

3. Load Balancing vs. Load Sharing: Load distributing algorithms can further be classified as load balancing or load sharing algorithms. Both types of algorithms reduce the likelihood of an unshared state (a state in which one computer lies idle while tasks wait for service at another) by transferring tasks to lightly loaded nodes. Load balancing algorithms go further and attempt to equalize the load at all computers. Because a load balancing algorithm transfers tasks at a higher rate than a load sharing algorithm, the higher overhead it incurs may outweigh this potential performance improvement. Anticipatory transfers increase the task transfer rate of a load sharing algorithm, making it less distinguishable from a load balancing algorithm. In this sense, load balancing can be considered a special case of load sharing, performing a particular level of anticipatory task transfers.
4. Pre-emptive vs. Non-pre-emptive Transfers: Pre-emptive task transfers involve the transfer of a task that is partially executed. Such a transfer is an expensive operation, as the collection of a task's state can be difficult. A task's state typically consists of its virtual memory image, unread I/O buffers and messages, file pointers, timers, etc.
Non-pre-emptive transfers involve the transfer of tasks that have not begun execution and hence do not require the transfer of the task's state. Non-pre-emptive task transfers are also referred to as 'task placements'.
COMPONENTS OF A LOAD DISTRIBUTING ALGORITHM
Typically, a load distributing algorithm has four components:
1. A transfer policy that determines whether a node is in a suitable state to participate in a task transfer.
2. A selection policy that determines which task should be transferred.
3. A location policy that determines to which node a task selected for transfer should be sent.
4. An information policy that is responsible for triggering the collection of system state information.
A transfer policy typically requires information on the local node's state to make decisions. A location policy, on the other hand, is likely to require information on the states of remote nodes to make decisions.
1. Transfer Policy: A large number of the transfer policies that have been proposed are threshold policies. Thresholds are expressed in units of load. When a new task originates at a node and the load at that node exceeds a threshold T, the transfer policy decides that the node is a sender. If the load at a node falls below T, the transfer policy decides that the node can be a receiver for a remote task.
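A minimal sketch of such a threshold transfer policy follows; the threshold value and load units are illustrative assumptions.

```python
T = 3  # threshold, in units of load (e.g. CPU queue length)

def is_sender(load: int, threshold: int = T) -> bool:
    # A node whose load exceeds T when a new task originates is a sender.
    return load > threshold

def is_receiver(load: int, threshold: int = T) -> bool:
    # A node whose load has fallen below T can accept a remote task.
    return load < threshold
```

Note that a node whose load equals T is neither a sender nor a receiver under this policy.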
2. Selection Policy: A selection policy selects a task for transfer once the transfer policy decides that the node is a sender. The simplest approach is to select the newly originated task that caused the node to become a sender by pushing its load beyond the threshold. In another method, a task is selected for transfer only if its response time will be improved by the transfer. There are other factors to consider in the selection of a task. First, the overhead incurred by the transfer should be minimal. Second, the number of location-dependent calls made by the selected task should be minimal.
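The selection criteria above can be combined into one predicate, sketched below. The Task fields and the cutoff values are hypothetical; in practice the response-time and overhead figures would be estimates.

```python
from dataclasses import dataclass

@dataclass
class Task:
    newly_originated: bool
    transfer_overhead: float       # estimated cost of moving the task
    local_response_time: float     # estimated response time if run here
    remote_response_time: float    # estimated response time if transferred
    location_dependent_calls: int  # calls bound to the originating host

def should_transfer(t: Task, max_overhead: float = 1.0,
                    max_loc_calls: int = 0) -> bool:
    # Transfer only a newly originated task whose response time improves,
    # whose transfer overhead is small, and which makes few
    # location-dependent calls.
    return (t.newly_originated
            and t.transfer_overhead <= max_overhead
            and t.remote_response_time < t.local_response_time
            and t.location_dependent_calls <= max_loc_calls)
```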

3. Location Policy: The responsibility of a location policy is to find suitable nodes to share load. A widely used
method for finding a suitable node is through Polling. In polling, a node polls another node to find out
whether it is a suitable node for load sharing. Nodes can be polled either serially or in parallel.
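Serial polling can be sketched as below; the probe function is a stand-in for a real network query to a candidate node, and the poll limit is an illustrative assumption.

```python
def find_receiver(candidates, probe, poll_limit=3):
    # Poll at most poll_limit nodes serially; return the first node
    # that reports it is willing to receive a task, else None.
    for node in candidates[:poll_limit]:
        if probe(node):
            return node
    return None

# Hypothetical load table standing in for remote state queries.
loads = {"A": 9, "B": 8, "C": 1}
receiver = find_receiver(["A", "B", "C"], probe=lambda n: loads[n] < 3)
```

Parallel polling would issue all probes concurrently and take the first positive reply, trading extra messages for lower latency.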
4. Information Policy: The information policy is responsible for deciding when information about the states of other nodes in the system should be collected. Most information policies are one of the following three types:
a) Demand-driven
b) Periodic
c) State-change-driven
In a demand-driven policy, a node collects the state of other nodes only when it becomes either a sender or a receiver. Demand-driven policies can be sender-initiated, receiver-initiated, or symmetrically initiated.
In sender-initiated policies, senders look for receivers to transfer their load.
In receiver-initiated policies, receivers solicit load from senders.

A symmetrically initiated policy is a combination of both, where load sharing actions are triggered by the
demand for extra processing power or extra work.
In a periodic policy, nodes exchange load information periodically.
In a state-change-driven policy, nodes disseminate state information whenever their state changes by a certain degree. A state-change-driven policy differs from a demand-driven policy in that it disseminates information about the state of a node rather than collecting information about other nodes. Under centralized state-change-driven policies, nodes send state information to a centralized collection point. Under decentralized state-change-driven policies, nodes send information to peers.
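A state-change-driven policy can be sketched as follows; the class, the degree value, and the use of a list in place of real peer messages are all invented for illustration.

```python
class StateChangeDriven:
    """Disseminates state only when it changed by at least 'degree'."""

    def __init__(self, degree: int = 2):
        self.degree = degree
        self.last_sent = None
        self.sent_log = []  # stand-in for messages sent to peers

    def update(self, load: int) -> bool:
        # Send only on the first update or when the load has moved by
        # at least 'degree' since the last dissemination.
        if self.last_sent is None or abs(load - self.last_sent) >= self.degree:
            self.sent_log.append(load)
            self.last_sent = load
            return True
        return False

policy = StateChangeDriven(degree=2)
policy.update(0)  # first state is always disseminated
policy.update(1)  # change of 1 < degree: suppressed
policy.update(3)  # change of 3 >= degree (relative to last sent 0)
```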
STABILITY : There are mainly two views of stability:
1. The Queuing-Theoretic Perspective
2. The Algorithmic Perspective

The Queuing-Theoretic Perspective: When the long term arrival rate of work to a system is greater than the
rate at which the system can perform work, the CPU queues grow without bound. Such a system is termed
unstable.
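This condition can be stated in one line: writing the arrival rate as lambda and the service rate as mu, queues stay bounded only when the utilization rho = lambda / mu is below 1. The helper below is a trivial illustration of that check.

```python
def is_stable(arrival_rate: float, service_rate: float) -> bool:
    # Utilization rho = lambda / mu must be below 1 for the CPU
    # queues to remain bounded in the long run.
    return arrival_rate / service_rate < 1.0
```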
Alternatively, an algorithm can be stable but may still cause a system to perform worse than when it is not using the algorithm; hence we also consider the effectiveness of an algorithm. A load distributing algorithm is said to
be effective under a given set of conditions if it improves the performance relative to that of a system not
using load distributing. Note that while an effective algorithm cannot be unstable, a stable algorithm can be
ineffective.
The Algorithmic Perspective: If an algorithm can perform fruitless actions indefinitely with finite probability, the algorithm is said to be unstable. An example is processor thrashing: the transfer of a task to a receiver may increase the receiver's queue length to the point of overload, necessitating the transfer of that task to yet another node. In this case, a task is moved from one node to another in search of a lightly loaded node without ever receiving service.
LOAD DISTRIBUTING ALGORITHMS
Based on which type of nodes initiate load distributing actions, load distributing algorithms have been widely
referred to as sender- initiated, receiver-initiated and symmetrically initiated algorithms.
In sender-initiated algorithms, senders (overloaded nodes) look for receivers (underloaded or idle nodes) to transfer their load. In receiver-initiated policies, receivers solicit load from senders. A symmetrically initiated policy is a combination of both, where load sharing actions are triggered by the demand for extra processing power or extra work.
1. Sender-Initiated Algorithms: In sender-initiated algorithms, load distributing activity is initiated by an overloaded node (sender) that attempts to send a task to an underloaded node (receiver).
2. Receiver-Initiated Algorithms: In receiver-initiated algorithms, the load distributing activity is
initiated from an underloaded node (receiver) that is trying to obtain a task from an overloaded node
(sender).
3. Symmetrically Initiated Algorithms: Under symmetrically initiated algorithms, both senders and receivers search for receivers and senders, respectively, for task transfers. These algorithms have the advantages of both sender-initiated and receiver-initiated algorithms. At low system loads, the sender-initiated component is more successful in finding underloaded nodes. At high system loads, the receiver-initiated component is more successful in finding overloaded nodes.

However, these algorithms are not immune to the disadvantages of both sender-initiated and receiver-initiated algorithms. As in sender-initiated algorithms, polling at high system loads may result in system instability, and as in receiver-initiated algorithms, a pre-emptive transfer facility is necessary.
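The sender-initiated scheme can be sketched end to end by tying the four policies together. Every name, threshold, and helper below is an illustrative assumption: poll stands in for a remote load query and transfer for the actual task shipment.

```python
T, POLL_LIMIT = 3, 2  # illustrative threshold and serial poll limit

def sender_initiated(local_load, new_task, peers, poll, transfer):
    # Transfer policy: only a node whose load exceeds T is a sender.
    if local_load <= T:
        return False
    # Selection policy: transfer the newly originated task itself.
    # Location policy: poll peers serially, up to POLL_LIMIT of them.
    for peer in peers[:POLL_LIMIT]:
        if poll(peer):  # peer reports it is a receiver (load below T)
            transfer(new_task, peer)
            return True
    return False  # no receiver found; run the task locally

# Hypothetical peer loads and a log standing in for real transfers.
loads = {"B": 5, "C": 1}
moved = []
ok = sender_initiated(4, "task-1", ["B", "C"],
                      poll=lambda p: loads[p] < T,
                      transfer=lambda t, p: moved.append((t, p)))
```

The information policy here is demand-driven: state is gathered (by polling) only once the node has become a sender.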
TASK MIGRATION
Task migration facilities allow pre-emptive transfers. The difference between task placement and task migration is as follows:
Task placement refers to the transfer of a task that is yet to begin execution to a new location, where it starts its execution. Task migration refers to the transfer of a task that has already begun execution to a new location, where it continues its execution. The general steps involved in task migration are:
1. State Transfer: The transfer of the task's state to the new machine. The task's state includes the contents of the registers, the task stack, and whether the task is ready, blocked, etc. The virtual memory address space, file descriptors, references to child processes, etc., may be maintained by the kernel as part of the task's state. The task is suspended (frozen) at some point during the transfer so that the state does not change further, and then the transfer of the task's state is completed.
2. Unfreeze: The task is installed at the new machine and is put in the ready queue so that it can continue
executing.
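The freeze / transfer / unfreeze sequence above can be sketched as follows. TaskState and Machine are simplified stand-ins: a real mechanism would ship the virtual memory image and kernel-held state across the network rather than copy in-process objects.

```python
from dataclasses import dataclass, field

@dataclass
class TaskState:
    registers: dict
    stack: list
    status: str = "running"

@dataclass
class Machine:
    ready_queue: list = field(default_factory=list)

def migrate(task: TaskState, dest: Machine) -> None:
    # Freeze: suspend the task so its state stops changing.
    task.status = "frozen"
    # State transfer: copy the frozen state to the destination machine.
    copied = TaskState(dict(task.registers), list(task.stack), "frozen")
    dest.ready_queue.append(copied)
    # Unfreeze: install in the ready queue so execution can continue.
    copied.status = "ready"

m = Machine()
t = TaskState(registers={"pc": 100}, stack=[1, 2])
migrate(t, m)
```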
In the design of a task migration mechanism, several issues play an important role in determining the efficiency of the mechanism. These issues include:
1. State transfer
2. Location transparency
3. Structure of a migration mechanism
4. Performance
5. Organization of a migration mechanism.
