
Chap. 4

RESOURCE AND PROCESS MANAGEMENT IN DISTRIBUTED SYSTEMS

Kanchan K Doke
Computer Engg. Department, BVCOE
Contents

 Introduction
 Features of global Scheduling algorithm
 Task assignment approach
 Load balancing approach
 Load sharing approach
 Process management
 Process migration
 Code Migration

Introduction

"Distributed scheduling is a resource management component of a DOS. It focuses on transparently redistributing the load of the system among the computers."

 The target is to maximize the overall performance of the system

 It is more suitable for DSs based on LANs
Introduction

• Distributed systems contain a set of resources interconnected by a network

• Processes are migrated to fulfill their resource requirements

• Resource managers control the assignment of resources to processes

• Resources can be logical (a shared file) or physical (a CPU)
Motivation

 Load distribution is required in such an environment because of the random arrival of tasks and their random CPU service times

 There is a possibility that several computers are heavily loaded while others are idle or lightly loaded

 Some processors may execute tasks at a slower rate than others
Marks 10

Desirable features of a scheduling algorithm

 No a priori knowledge about processes required
  The algorithm should not require users to specify information about process characteristics and resource requirements

 Dynamic in nature
  Decisions should be based on the changing load of nodes and not on a fixed static policy

 Quick decision-making capability
  The algorithm must make quick decisions about the assignment of tasks to nodes of the system

 Balanced system performance and scheduling overhead
  A greater amount of information enables more intelligent decisions but increases overhead
Desirable features of a scheduling algorithm

 Stability
  The system is unstable when all processes are migrating without accomplishing any useful work
  This occurs when nodes oscillate between the lightly-loaded and heavily-loaded states

 Scalability
  A scheduling algorithm should be capable of handling small as well as large networks
  e.g., probing only m of the N nodes when selecting a host

 Fault tolerance
  The algorithm should be capable of working after the crash of one or more nodes of the system

 Fairness of service
  Users initiating equivalent processes expect to receive the same quality of service
Marks 2

Types of process scheduling techniques

Task assignment approach

• User processes are collections of related tasks


• Tasks are scheduled to suitable nodes to improve
the performance

Load-balancing approach

• Tasks are distributed among nodes so as to equalize


the workload of nodes of the system

Load-sharing approach

• Simply attempts to avoid idle nodes while processes wait to be processed
Task assignment approach Marks 5

 Main assumptions in the task assignment approach:

  Processes have been split into tasks

  Computation requirements of tasks and speeds of processors are known

  Costs of processing tasks on nodes are known

  Interprocess communication costs between every pair of tasks are known

  Resource requirements of tasks and available resources on nodes are known
Task assignment approach

 Goal: to find an optimal assignment policy for the tasks of an individual process.
 Minimization of IPC costs

 Quick turnaround time of process

 High degree of parallelism

 Efficient utilization of resources

Task assignment example
 There are two nodes, {n1, n2} and six tasks {t1, t2, t3, t4, t5, t6}.
 Task assignment parameters –
 Task execution cost (xab: the cost of executing task a on node b)
 Inter-task communication cost (cij: the inter-task communication cost between tasks i and j)
Inter-task communication cost                 Execution costs
      t1   t2   t3   t4   t5   t6                  n1   n2
t1     0    6    4    0    0   12             t1    5   10
t2     6    0    8   12    3    0             t2    2    ∞
t3     4    8    0    0   11    0             t3    4    4
t4     0   12    0    0    5    0             t4    6    3
t5     0    3   11    5    0    0             t5    5    2
t6    12    0    0    0    0    0             t6    ∞    4

Task t6 cannot be executed on node n1 and task t2 cannot be executed on node n2 (execution cost ∞), since the resources they need are not available on these nodes.
Task assignment example
1) Serial assignment, where tasks t1, t2, t3 are assigned to node n1 and tasks
t4, t5, t6 are assigned to node n2:

Execution cost, x = x11 + x21 + x31 + x42 + x52 + x62 = 5 + 2 + 4 + 3 + 2 + 4 = 20

Communication cost, c = c14 + c15 + c16 + c24 + c25 + c26 + c34 + c35 + c36 = 0 + 0 +
12 + 12 + 3 + 0 + 0 + 11 + 0 = 38.

Hence total cost = 20+38=58.

2) Optimal assignment, where tasks t1, t2, t3, t4, t5 are assigned to node n1
and task t6 is assigned to node n2.
Execution cost, x = x11 + x21 + x31 + x41 + x51 + x62

= 5 + 2 + 4 + 6 + 5 + 4 = 26

Communication cost, c = c16 + c26 + c36 + c46 + c56


= 12 + 0 + 0 + 0 + 0 = 12
Total cost =26+12= 38
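
The same arithmetic can be checked mechanically. Below is a minimal Python sketch (not from the slides; the matrix encoding and names are mine) that evaluates the total cost of any task-to-node assignment from the two matrices above and reproduces the totals of 58 and 38.

```python
INF = float("inf")

# exec_cost[task][node]: cost of running a task on a node (INF = not allowed)
exec_cost = {
    "t1": {"n1": 5, "n2": 10}, "t2": {"n1": 2, "n2": INF},
    "t3": {"n1": 4, "n2": 4},  "t4": {"n1": 6, "n2": 3},
    "t5": {"n1": 5, "n2": 2},  "t6": {"n1": INF, "n2": 4},
}

# comm_cost[(ti, tj)]: inter-task communication cost (symmetric; zero pairs omitted)
comm_cost = {
    ("t1", "t2"): 6, ("t1", "t3"): 4, ("t1", "t6"): 12,
    ("t2", "t3"): 8, ("t2", "t4"): 12, ("t2", "t5"): 3,
    ("t3", "t5"): 11, ("t4", "t5"): 5,
}

def total_cost(assignment):
    """assignment: dict mapping task -> node."""
    x = sum(exec_cost[t][n] for t, n in assignment.items())
    # Communication cost is paid only when two tasks sit on different nodes.
    c = sum(cost for (ti, tj), cost in comm_cost.items()
            if assignment[ti] != assignment[tj])
    return x + c

serial  = {"t1": "n1", "t2": "n1", "t3": "n1", "t4": "n2", "t5": "n2", "t6": "n2"}
optimal = {"t1": "n1", "t2": "n1", "t3": "n1", "t4": "n1", "t5": "n1", "t6": "n2"}
print(total_cost(serial))   # 58
print(total_cost(optimal))  # 38
```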
Marks 5

Load Balancing vs. Load sharing


 In both approaches, tasks are transferred to lightly loaded nodes
 Load balancing is a special case of load sharing

Load-balancing algorithms
 Try to equalize the loads at all computers
 Transfer tasks at a higher rate
 The higher overhead incurred by a load-balancing algorithm may offset this potential performance improvement

Load sharing
 Task transfers are not instantaneous
 Uses anticipatory task transfers
Load-balancing approach Marks 10

• Balance the total system load by transferring load from heavily loaded to lightly loaded nodes

 A classification of load-balancing algorithms:
  Static: deterministic or probabilistic
  Dynamic: centralized or distributed
  Distributed: cooperative or noncooperative
Load-balancing approach Marks 5

• Static Load Balancing

 ▫ In static algorithms the processes are assigned to the processors at compile time, according to the performance of the nodes.

 ▫ Once the processes are assigned, no change or reassignment is possible at run time.

 ▫ Static algorithms do not collect any information about the nodes.

 ▫ Information used: the average behaviour of the system
  ▫ Incoming time
  ▫ Extent of resources needed
  ▫ Mean execution time
  ▫ Interprocess communication
Load-balancing approach Marks 5

• Dynamic Load Balancing


 ▫ In dynamic load-balancing algorithms the assignment of jobs is done at run time.

 ▫ In DLB, jobs are reassigned at run time depending upon the situation: load is transferred from heavily loaded nodes to lightly loaded nodes.

 ▫ In dynamic load balancing no decision is taken until the process begins execution.
Load-balancing approach
Type of dynamic load-balancing algorithms

• Centralized versus Distributed:

 ▫ Centralized approach
  ▫ Collects information at a server node, which makes the assignment decisions
  ▫ Can make efficient decisions, but has lower fault tolerance
  ▫ Issue: reliability
   1. Use k+1 replicated servers
   2. Reinstantiation: use k monitoring systems to detect server failure

 ▫ Distributed approach
  ▫ Contains entities that make decisions on a predefined set of nodes
  ▫ Distributed algorithms avoid the bottleneck of collecting state information and react faster
Process classification

 Local Process:
 Is processed at its originating node.

 Remote Process:
  Is processed at a node different from the one where it originated
Issues in designing Load-balancing
algorithms Marks 10

• Load estimation policy


▫ Determines how to estimate the workload of a node
• Process transfer policy
▫ Determines whether to execute a process locally or remotely

• State information exchange policy


▫ Determines how to exchange load information among nodes

• Location policy
▫ Determines to which node the transferable process should be sent

• Priority assignment policy


▫ Determines the priority of execution of local and remote processes
• Migration limiting policy
▫ Determines the total number of times a process can migrate

Load estimation policy (estimating the workload) I.
for Load-balancing algorithms

• Some measurable parameters (with time- and node-dependent factors) can be the following:

 ▫ Total number of processes on the node

 ▫ Resource demands of these processes

 ▫ Architecture and speed of the node's processor

 ▫ Sum of the remaining service times of all the processes on the node
Load estimation policy (estimating the workload) II.
for Load-balancing algorithms

• In some cases the true load could vary widely depending on the remaining service times of all the processes on the node, which can be measured in several ways:

 ▫ The memoryless method assumes that all processes have the same expected remaining service time, independent of the time used so far.

 ▫ The past-repeats method assumes that the remaining service time is equal to the time used so far.

 ▫ The distribution method takes the expected remaining time conditioned on the time already used, if the distribution of service times is known.
Load estimation policy (estimating the workload) III.
for Load-balancing algorithms

 Load indicators:
  Resource queue lengths

  The CPU queue length
   Fairly simple and carries little overhead
   Does not always reflect the true situation, as jobs may differ in type

  The processor/CPU utilization
   The number of CPU cycles actually executed per unit of real time
   Requires a background process that monitors CPU utilization continuously, and so imposes more overhead
Process transfer policy (execute a process locally or remotely) I.
for Load-balancing algorithms
• Most of the algorithms use a threshold policy to decide whether a node is lightly loaded or heavily loaded

• The threshold value is a limiting value of the workload of a node, which can be determined by a
 ▫ Static policy:
  ▫ A predefined threshold value for each node, depending on its processing capability
  ▫ No exchange of state information among the nodes is required
 ▫ Dynamic policy:
  ▫ Threshold value of node ni = average workload of all the nodes × a predefined constant Ci
  ▫ Ci reflects the processing capability of node ni relative to that of the other nodes

• Below its threshold value a node accepts processes to execute; above its threshold value the node tries to transfer processes to a lightly loaded node
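
As an illustration, here is a minimal Python sketch (names and example values are assumed, not from the slides) of the dynamic threshold policy: each node's threshold is the average workload scaled by its relative processing capability Ci, and a node above its threshold tries to transfer work.

```python
def dynamic_thresholds(loads, capability):
    """loads: dict node -> current workload; capability: dict node -> C_i."""
    avg = sum(loads.values()) / len(loads)
    return {node: avg * capability[node] for node in loads}

def should_transfer(node, loads, thresholds):
    # Above its threshold a node tries to ship work out;
    # below it, the node accepts remote processes.
    return loads[node] > thresholds[node]

loads = {"n1": 8, "n2": 2, "n3": 5}
caps  = {"n1": 1.0, "n2": 1.0, "n3": 2.0}   # n3 is twice as capable
th = dynamic_thresholds(loads, caps)
print({n: should_transfer(n, loads, th) for n in loads})
# {'n1': True, 'n2': False, 'n3': False}
```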
Process transfer policy
(execute a process locally or remotely) II.
for Load-balancing algorithms

 A single-threshold policy may lead to an unstable algorithm, because an underloaded node could turn overloaded right after a process migration

[Figure: a single-threshold policy has one threshold separating the overloaded and underloaded regions; a double-threshold policy uses a high mark and a low mark, delimiting overloaded, normal, and underloaded regions.]

 To reduce instability the double-threshold policy has been proposed, which is also known as the high-low policy
Process transfer policy (execute a process locally or remotely) III.
for Load-balancing algorithms

 Double-threshold policy

  When the node is in the overloaded region, new local processes are sent to run remotely, and requests to accept remote processes are rejected

  When the node is in the normal region, new local processes run locally, and requests to accept remote processes are rejected

  When the node is in the underloaded region, new local processes run locally, and requests to accept remote processes are accepted
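
A minimal sketch of the high-low policy in code (the mark values and names are illustrative): the node's region determines where new local processes run and whether remote processes are accepted.

```python
HIGH_MARK, LOW_MARK = 8, 3   # assumed example thresholds

def region(load):
    if load > HIGH_MARK:
        return "overloaded"
    if load < LOW_MARK:
        return "underloaded"
    return "normal"

def policy(load):
    r = region(load)
    run_new_local_remotely = (r == "overloaded")
    accept_remote = (r == "underloaded")
    return run_new_local_remotely, accept_remote

for load in (10, 5, 1):
    print(load, region(load), policy(load))
```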

Location policy (Selection of destination node).
for Load-balancing algorithms

 Location policies: the threshold method, the shortest method, the bidding method, and the pairing method
Location policy (Selection of destination node) I.
for Load-balancing algorithms

• Threshold method
 ▫ The policy selects a random node and checks whether that node is able to receive the process, then transfers the process.
 ▫ If the node rejects it, another node is selected randomly.
 ▫ This continues until the probe limit (static) is reached.

• Shortest method
 ▫ L distinct nodes are chosen at random, and each is polled to determine its load.
 ▫ The process is transferred to the node having the minimum load value, unless that node's workload prohibits it from accepting the process.
 ▫ Otherwise it is executed at its original node.
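
As a sketch of the shortest method (names, the poll count L, and the threshold are assumed): poll L randomly chosen nodes and transfer the process to the least loaded one, unless even that node's workload prohibits acceptance.

```python
import random

def shortest_method(nodes, load_of, L=3, threshold=5):
    """nodes: candidate node ids; load_of: callable polling a node's load."""
    polled = random.sample(nodes, min(L, len(nodes)))
    best = min(polled, key=load_of)
    if load_of(best) < threshold:      # workload value permits acceptance
        return best                    # transfer the process here
    return None                        # execute at the originating node

loads = {"n1": 7, "n2": 2, "n3": 9, "n4": 4}
print(shortest_method(list(loads), loads.get))
```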
Location policy (Selection of destination node) II.
for Load-balancing algorithms

• Bidding method
 ▫ Nodes contain managers (to send processes) and contractors (to receive processes)
 ▫ Managers broadcast a "request for bid" message; contractors respond with bids (prices based on capacity, resources available, and memory size of the contractor node) and the manager selects the best offer
 ▫ The winning contractor is notified and asked whether it accepts the process for execution or not
 ▫ Advantage: nodes can decide whether to participate in the global scheduling process
 ▫ Disadvantages:
  ▫ Increase in communication overhead
  ▫ Difficult to decide on a good pricing policy
Location policy (Selection of destination node) III.
for Load-balancing algorithms

 Pairing
  Each node asks some randomly chosen node to form a pair with it
  Two nodes that differ greatly in load are temporarily paired with each other, and migration starts
  Process selection:
   A node only tries to find a partner if it has at least two processes
   Processes are selected by comparing their expected completion time on the current node with that on the paired node plus the migration delay time
  The pair is broken as soon as the migration is over
State information exchange policy I.
for Load-balancing algorithms

 Dynamic policies require frequent exchange of state information, but these extra messages have two opposing impacts:
  Increasing the number of messages gives more accurate scheduling decisions
  Increasing the number of messages raises the queuing time of messages

 State information policies: periodic broadcast, broadcast when state changes, on-demand exchange, and exchange by polling
State information exchange policy II.
for Load-balancing algorithms

 Periodic broadcast
  Each node broadcasts its state information after the elapse of every T units of time
  Problem: heavy traffic, fruitless messages, and poor scalability, since the information exchange is too large for networks having many nodes

 Broadcast when state changes
  Avoids fruitless messages by broadcasting the state only when a process arrives or departs
  A further improvement is to broadcast only when the state switches to another region (double-threshold policy)
State information exchange policy III.
for Load-balancing algorithms

 On-demand exchange
  A node broadcasts a State-Information-Request message when its state switches from the normal region to either the underloaded or the overloaded region
  On receiving this message, other nodes reply with their own state information to the requesting node
  A further improvement is that only those nodes reply which are useful to the requesting node

 Exchange by polling
  To avoid poor scalability (coming from broadcast messages), a partner node is searched for by polling the other nodes one by one, until the poll limit is reached
Priority assignment policy
for Load-balancing algorithms

• Rules for scheduling local and remote processes at a node:

Selfish
 • Local processes are given higher priority than remote processes.
 • Worst response-time performance of the three policies.

Selfless
 • Remote processes are given higher priority than local processes.
 • Best response-time performance of the three policies.

Intermediate
 • If the number of local processes >= the number of remote processes, local processes are given higher priority.
 • Otherwise, remote processes are given higher priority than local processes.
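
The three rules can be summarized in a few lines of Python (a sketch; the names are mine):

```python
def priority(rule, n_local, n_remote):
    """Return which class of processes gets higher priority at this node."""
    if rule == "selfish":
        return "local"
    if rule == "selfless":
        return "remote"
    # Intermediate: the majority class wins; ties favour local processes.
    return "local" if n_local >= n_remote else "remote"

print(priority("intermediate", 3, 5))  # remote
print(priority("intermediate", 4, 4))  # local
```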
Migration limiting policy
for Load-balancing algorithms

 This policy determines the total number of times a process


can migrate
 Uncontrolled
  A remote process arriving at a node is treated just like a process originating at the node, so a process may be migrated any number of times
 Controlled
  Avoids the instability of the uncontrolled policy
  Uses a migration count parameter k to fix a limit on the number of times a process can migrate
  k > 1; the value of k is decided either statically or dynamically
Load-sharing approach Marks 10

 Drawbacks of the load-balancing approach
  Attempting to equalize the workload on all the nodes is not an appropriate objective, since the overhead generated by gathering exact state information is large
  Load balancing in this sense is not achievable, because the number of processes in a node is always fluctuating and a temporal unbalance among the nodes exists at every moment

 Basic ideas of the load-sharing approach
  It is necessary and sufficient to prevent nodes from being idle while some other nodes have more than two processes
  Load sharing is much simpler than load balancing, since it only attempts to ensure that no node is idle while a heavily loaded node exists
  The priority assignment policy and the migration limiting policy are the same as those for load-balancing algorithms
Issues in designing Load-sharing
algorithms Marks 10

• Load estimation policy


▫ Determines how to estimate the workload of a node
• Process transfer policy
▫ Determines whether to execute a process locally or remotely

• State information exchange policy


▫ Determines how to exchange load information among nodes

• Location policy
▫ Determines to which node the transferable process should be sent

• Priority assignment policy


▫ Determines the priority of execution of local and remote processes
• Migration limiting policy
▫ Determines the total number of times a process can migrate

Load estimation policies
for Load-sharing algorithms

 Since load-sharing algorithms simply attempt to avoid idle nodes, it is sufficient to know whether a node is busy or idle

 Methods
  The simplest load estimation policy: counting the total number of processes
  Algorithms that measure CPU utilization to estimate the load of a node
Process transfer policies
for Load-sharing algorithms

 Algorithms normally use
  An all-or-nothing strategy
   This strategy uses a threshold value fixed at 1 for all the nodes
   A node becomes a receiver node when it has no process, and becomes a sender node when it has more than one process
  Anticipatory transfers:
   To avoid wasting processing power on nodes having zero processes, load-sharing algorithms may use a threshold value of 2 instead of 1

 When CPU utilization is used as the load estimation policy, the double-threshold policy should be used as the process transfer policy
Location policies I.
for Load-sharing algorithms

 The location policy decides whether the sender node or the receiver node of the process takes the initiative to search for a suitable node in the system; it can be the following:
  Sender-initiated location policy
   The sender node decides where to send the process
   Heavily loaded nodes search for lightly loaded nodes
  Receiver-initiated location policy
   The receiver node decides from where to get the process
   Lightly loaded nodes search for heavily loaded nodes
Marks 10
Location policies II.
for Load-sharing algorithms
 Sender-initiated location policy
  When a node becomes overloaded, it either broadcasts or randomly probes the other nodes one by one to find a node that is able to receive remote processes
  When broadcasting, a suitable node is known as soon as a reply arrives

[Flowchart: a task arrives; if QueueLength + 1 > threshold T, the node randomly selects a node i not yet in its poll set, polls it, and transfers the task to i if i's queue length is below T; otherwise it keeps polling until the poll limit is reached, after which the task is queued locally.]
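
A minimal Python sketch of this sender-initiated probing loop (names, T, and the poll limit are assumed):

```python
import random

def on_task_arrival(local_queue_len, nodes, queue_len_of, T=4, poll_limit=3):
    if local_queue_len + 1 <= T:
        return "run locally"
    poll_set = set()
    # Randomly probe nodes not yet polled, up to the poll limit.
    while len(poll_set) < poll_limit and len(poll_set) < len(nodes):
        i = random.choice([n for n in nodes if n not in poll_set])
        poll_set.add(i)
        if queue_len_of(i) < T:        # node i can receive the task
            return f"transfer to {i}"
    return "queue locally"             # poll limit reached

queues = {"n1": 6, "n2": 1, "n3": 5}
print(on_task_arrival(5, list(queues), queues.get))
```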
Marks 10
Location policies III.
for Load-sharing algorithms
 Receiver-initiated location policy
  When a node becomes underloaded, it either broadcasts or randomly probes the other nodes one by one to indicate its willingness to receive remote processes

[Flowchart: symmetric to the sender-initiated case. When a task departs and the node's queue length falls below the threshold T, the node randomly polls other nodes one by one and transfers a task from the first polled node that has work to spare; if the poll limit is reached without success, it waits for a predetermined period before retrying.]
Marks 10
Location policies IV.
for Load-sharing algorithms

 Receiver-initiated policies require a preemptive process migration facility, since scheduling decisions are usually made at process departure time
 Sender-initiated policies are preferable at light to moderate system loads
 Receiver-initiated policies are preferable at high system loads
State information exchange policies
for Load-sharing algorithms

 State information is exchanged to learn the state of other nodes when a node becomes either underloaded or overloaded
 Broadcast when state changes
 In sender-initiated/receiver-initiated location policy a node
broadcasts State Information Request when it becomes
overloaded/underloaded
 It is called broadcast-when-idle policy when receiver-initiated
policy is used with fixed threshold value of 1
 Poll when state changes
 In large networks polling mechanism is used
 Polling mechanism randomly asks different nodes for state
information until find an appropriate one or probe limit is
reached
 It is called poll-when-idle policy when receiver-initiated policy
is used with fixed threshold value of 1

SUMMARY
 The resource manager of a distributed system schedules processes to optimize a combination of resource usage, response time, network congestion, and scheduling overhead
 Three different approaches have been discussed
 Task assignment approach deals with the assignment of
task in order to minimize inter process communication
costs and improve turnaround time for the complete
process, by taking some constraints into account
 In load-balancing approach the process assignment
decisions attempt to equalize the average workload on all
the nodes of the system
 In load-sharing approach the process assignment decisions
attempt to keep all the nodes busy if there are sufficient
processes in the system for all the nodes
PROCESS MANAGEMENT
Introduction
• The goal of process management is to make the best possible use of the processing resources of the entire system by sharing them among all processes.

• Three important concepts are used to achieve this goal:
 • Process allocation
 • Process migration
 • Threads
Introduction Cont…

 Process allocation deals with deciding which process should be assigned to which processor.

 Process migration deals with the movement of a process from its current location to the processor to which it has been assigned.

 Threads deal with fine-grained parallelism for better utilization of the processing capability of the system.
Process Migration
 Relocation of a process from its current location to
another node.

 A process may be migrated
  before it starts executing on its source node, which is known as non-preemptive process migration, or
  during the course of its execution, which is known as preemptive process migration.
Cont… : Flow of execution of a migrating process

[Figure: process P1 executes on the source node; its execution is suspended and the freezing time begins; control is transferred to the destination node; execution resumes there, and P1 continues on the destination node.]
Process Migration steps Cont…

 Preemptive process migration is costlier.

 Process Migration Policy


 Selection of a process that should be migrated.
 Selection of the destination node to which the selected
process should be migrated.

 Process Migration Mechanism


 Actual transfer of the selected process to the destination
node.

Desirable features of a good process
migration mechanism

o Transparency
o Minimal interference
o Minimal Residual Dependencies
o Efficiency
o Robustness

o Communication between coprocesses of a job

1.Transparency
Level of transparency:
 Object Access Level
 System Call & Interprocess Communication
level

Cont… : Object Access level
Transparency
 Minimum requirement for a system to support non-
preemptive process migration facility.

 Requires transparent object naming and user


mobility.

 Access to objects such as files and devices can be


location independent.

Cont… : System Call & IPC level
 For a migrated process, system calls should be location independent.

 Transparent redirection of messages is required during the transient state of a process.

 This level of transparency must be provided to support a preemptive process migration facility.

2. Minimal Interference

 Migration of a process should cause minimal interference with the progress of the process involved.

 This is achieved by minimizing the freezing time of the process being migrated.
  Freezing time is the period for which the execution of the process is stopped for transferring its information to the destination node.
3. Minimal Residual Dependencies
o No residual dependency should be left on
previous node.

o Otherwise
o Process continues to impose a load on its
previous node
o A failure or reboot of previous node will cause
the process to fail

4. Efficiency

 Minimum time required for migrating a process.

 Minimum cost of locating an object.

 Minimum cost of supporting remote execution


once the process is migrated.

5. Robustness

 The failure of a node other than the one on which a


process is currently running should not in any way
affect the accessibility or execution of that process.

6. Communication between coprocesses of a job

 To reduce communication cost, it is necessary that coprocesses are able to communicate directly with each other, irrespective of their locations.
Process Migration Mechanisms
Four major activities
 Freezing the process on its source node and restarting it on its destination node.
 Transferring the process's address space from its source node to its destination node.
 Forwarding messages meant for the migrant process.
 Handling communication between cooperating processes that have been separated as a result of process migration.


1. Mechanisms for Freezing and
Restarting a Process
 Freezing the process:
 The execution of the process is suspended and all
external interactions with the process are deferred.

 Issues:
  Immediate and delayed blocking of the process
  Fast and slow I/O operations
  Information about open files
  Reinstating the process on its destination node
Cont… :Immediate and delayed blocking
of the process

oImmediate Blocking:
o If the process is not executing a system call
o If the process is executing a system call but is
sleeping at an interruptible priority waiting for a
kernel event to occur

oDelayed blocking:
 o If the process is executing a system call but is sleeping at a non-interruptible priority waiting for a kernel event to occur
 o A flag is set so that when the system call completes, the process blocks itself from further execution
Cont… : Fast and Slow I/O Operations

 The process is frozen after the completion of all fast I/O operations, such as disk accesses.

 Slow I/O operations (on a pipe or terminal) are carried out after process migration, when the process executes on the destination node.
Cont… : Information about open files

o The information includes
 o the name and identifier of the file
 o its access mode
 o the current position of its file pointer

o A distributed system should use the same protocol to access local as well as remote files
 o UNIX-based networks use pathnames for access
o It is necessary to somehow preserve a pointer to the file so that the migrated process can continue to access it.
Cont… : Information about open files

oApproaches:
 o A link is created to the file, and the pathname of the link is used as the access point to the file after the process migrates.
 o An open file's complete pathname is reconstructed when required, by modifying the kernel.

oOther issues:
 o Files used by the process may have to be transferred to the destination node
  o Permanent transfer
  o Temporary transfer
Cont… : Reinstating the process on its
Destination Node

1. On the destination node, an empty process state is created.
 The newly allocated process may or may not have the same process identifier as the migrating process.
2. If the identifier is different, the copy's ID is changed to the original ID.
3. Once all the state of the migrating process has been transferred from the source to the destination node and copied into the empty state, the new copy of the process is unfrozen and the old copy is deleted.
4. The process is restarted on its destination node in whatever state it was in before being migrated.
5. The program counter is adjusted to reissue the system call if the process was executing a system call before migration.
2. Address Space Transfer Mechanisms
o The migration of a process involves the transfer of
 Process’s state :
 Execution Status
 Register Contents
 Memory Tables
 I/O state (I/O queues, I/O buffers, interrupts)
 Process identifier, process's user and group identifiers
 Information about Open Files
 Process’s address space:
 Code
 Data
 Program stack

2. Address Space Transfer Mechanisms
Cont…

 Mechanisms for address space transfer:


 Total freezing
 Pretransferring
 Transfer on reference

2. Address Space Transfer Mechanisms
Cont…: Total Freezing
 A process’s execution is stopped while its address space is
being transferred.
[Figure: total freezing. The migration decision is made, execution is suspended on the source node for the whole freezing time while the address space is transferred, and execution resumes on the destination node.]

 Disadvantage: the process is suspended for a long time during migration; timeouts may occur, and if the process is interactive, the delay will be noticed by the user.
2. Address Space Transfer Mechanisms
Cont…: Pretransferring

o The address space is transferred while the process is still running on the source node.
o It is done as an initial transfer of the complete address space, followed by repeated transfers of the pages modified during the previous transfer.
o The pretransfer operation is executed at a higher priority than all other programs on the source node.
o Advantage:
 o Reduces the freezing time of the process.
2. Address Space Transfer Mechanisms
Cont…: Pretransferring
[Figure: pretransferring. The address space is transferred while the process keeps executing on the source node; only a final transfer of modified pages happens during the much shorter freezing time.]

o Disadvantage:
 o It may increase the total migration time due to the possibility of redundant page transfers.
2. Address Space Transfer Mechanisms
Cont…: Transfer on Reference

 Assumption:
  Processes tend to use only a relatively small part of their address space while executing.

 The process's address space is left behind on its source node while the relocated process executes on its destination node.
 An attempt to reference a memory page results in the generation of a request to copy the desired page from its remote location.
 A page is transferred from its source node to its destination node only when referenced.
2. Address Space Transfer Mechanisms
Cont…: Transfer on Reference

[Figure: transfer on reference. The freezing time covers only the switch itself; execution resumes on the destination node, and pages of the address space are fetched on demand from the source node.]
2. Address Space Transfer Mechanisms
Cont…: Transfer on Reference

 Advantage:
 • Very short switching time of the process from its source node to its destination node.

 Disadvantage:
 • Imposes a continued load on the process's source node, and results in failure of the process if the source node fails or is rebooted.
3. Message-forwarding Mechanisms

 In moving a process, it must be ensured that all pending, en-route, and future messages arrive at the process's new location.

 Types of messages:
1. Messages received at the source node after the process’s
execution has been stopped on its source node and the
process’s execution has not yet been started on its
destination node.
2. Messages received at the source node after the process’s
execution has started on its destination node.
3. Messages that are to be sent to the migrant process from any
other node after it has started executing on the destination
node.

3. Message-forwarding Mechanisms
…Cont

 Mechanisms:
  Mechanism of resending the message
  Origin site mechanism
  Link traversal mechanism
  Link update mechanism
3. Message-forwarding Mechanisms
…Cont: Mechanisms of resending the
message
• Messages of types 1 and 2 are either dropped or negatively acknowledged.
• Reply from the receiver:
 • Type 1: "try again later, the process is frozen"
 • Type 2: "this process is unknown at this node"
• The sender is notified and needs to locate the migrant process and resend the message.

[Figure: the sender resends the message each time the process migrates to a new destination node.]

 Disadvantage:
  The message-forwarding mechanism makes the process migration operation nontransparent to the processes interacting with the migrant process.
3. Message-forwarding Mechanisms
…Cont: Origin Site Mechanism

• The origin node keeps information on the current location of each process created there.
• All messages are sent to the origin node, which forwards them to the migrant process.

[Figure: messages always travel via the origin node, which forwards them to the process's current destination.]

 Disadvantages:
  Failure of the origin site disrupts the message-forwarding mechanism.
  A continuous load is imposed on the migrant process's origin site, even after the process has migrated from that node.
3. Message-forwarding Mechanisms
…Cont: Link Traversal Mechanism
• Messages of type 1 are queued and sent to the destination node as part of the migration procedure.
• A link is left on the source node to redirect messages of types 2 and 3.
• A link has two components:
 • a unique process identifier: source node ID + process ID
 • the last known location of the process

[Figure: each migration leaves a link behind, so a message may have to be forwarded along a chain of links to reach the process.]

 Disadvantages:
  Several links may have to be traversed to locate a process from a node.
  If any node in the chain of links fails, the process cannot be located.
3. Message-forwarding Mechanisms
…Cont: Link Update Mechanisms
 During the transfer phase of the migrant process, the source node sends link-update messages to the kernels controlling all of the migrant process's communication partners.
 • Messages of types 1 and 2 are forwarded by the source node.
 • Messages of type 3 are sent directly to the destination node.
 A link-update message
  tells the new address of each link held by the migrant process, and
  is acknowledged for synchronization purposes.

[Figure: after each migration, senders learn the process's current location and send subsequent messages to it directly.]
4. Mechanisms for Handling Coprocesses

 The goal is to provide efficient communication between a process and its subprocesses, which might have been migrated to different nodes.

 Mechanisms:
  Disallowing separation of coprocesses.
  Home node or origin site concept.
4. Mechanisms for Handling Coprocesses
Cont… : Disallowing Separation of Coprocesses

 The easiest method of handling communication between coprocesses is to disallow their separation.
 Methods:
  Disallow the migration of processes that wait for one or more of their children to complete.
  Ensure that when a parent process migrates, its children processes are migrated along with it.

 Disadvantage:
  It does not allow the use of parallelism within jobs.
4. Mechanisms for Handling Coprocesses
Cont… : Home Node or Origin Site Concept

 Used for communication between a process and its subprocess when the two are running on different nodes.
 Allows complete freedom to migrate a process or its subprocesses independently and execute them on different nodes of the system.
 Disadvantages:
  All communication between a parent process and its children processes takes place via the home node.
  The message traffic and the communication cost increase considerably.
Process Migration in Heterogeneous
Systems

 All the concerned data must be translated from the source CPU format to the destination CPU format before the process can be executed on the destination node.

 A heterogeneous system having n CPU types must have n(n-1) pieces of translation software.

 The translation must handle the problem of different data representations, such as characters, integers, and floating-point numbers.
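
A quick check of the two counts (a sketch; the figures below illustrate the n = 4 case): direct pairwise translation needs n(n-1) converters, while an external data representation (EDR) needs only 2n, one encoder and one decoder per processor type.

```python
def converters(n):
    """Return (pairwise, EDR) translation-software counts for n CPU types."""
    return n * (n - 1), 2 * n

for n in (2, 3, 4, 8):
    pairwise, edr = converters(n)
    print(f"n={n}: pairwise={pairwise}, EDR={edr}")
# n=4 gives 12 vs 8, matching the two figures below.
```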
Process Migration in Heterogeneous
Systems….Cont

[Figure: four processor types fully interconnected, with a separate translator for each ordered pair, numbered 1-12.]

Example: the need for 12 pieces of translation software in a heterogeneous system having 4 types of processors
Process Migration in Heterogeneous
Systems….Cont

[Figure: four processor types, each connected only to a common external data representation, with one encoder and one decoder per type, numbered 1-8.]

Example: the need for only 8 pieces of translation software in a heterogeneous system having 4 types of processors when the EDR mechanism is used
Advantages of Process
Migration

 Reducing the average response time of processes
  To reduce the average response time of processes, processes of a heavily loaded node are migrated to idle or underutilized nodes.

 Speeding up individual jobs
  The tasks of a job may be migrated to different nodes and executed concurrently.
  A job may be migrated to a node having a faster CPU, or to the node at which it has the minimum turnaround time.
  The greater the speedup sought, the greater the migration cost involved.
Advantages of Process
Migration….. Cont

 Gaining higher throughput
  The process migration facility may also be used to properly mix I/O-bound and CPU-bound processes on a global basis, increasing the throughput of the system.

 Utilizing resources effectively
  Depending upon its nature, a process can be migrated to the most suitable node to utilize the system resources in the most efficient manner.
Advantages of Process
Migration….. Cont

 Reducing network traffic


 Migrating a process closer to the resources it is using
most heavily.

 Improving system reliability


 Migrate a copy of a critical process to some other node
and to execute both the original and copied processes
concurrently on different nodes.

 Improving system security


 A sensitive process may be migrated and run on a secure
node.
Threads 10/8/5 Marks

 A thread is the basic unit of CPU utilization, used for improving system performance through parallelism.
 A process consists of an address space and one or more threads of control.
 Threads share the same address space, but each has its own program counter, register state, and stack.
 There is less protection between threads, due to the sharing of the address space.
Threads………Cont

 On a uniprocessor, threads run in quasi-parallel (time sharing); on a shared-memory multiprocessor, as many threads can run simultaneously as there are processors.
 States of threads:
  Running, blocked, ready, or terminated.
 Threads are viewed as miniprocesses or lightweight processes.
Threads in Non-distributed Systems

 Multithreading is useful in the following kinds


of situations:
 To allow a program to do I/O and computations at
the “same” time: one thread blocks to wait for input,
others can continue to execute
 To allow separate threads in a program to be
distributed across several processors in a shared
memory multiprocessor
 To allow a large application to be structured as
cooperating threads, rather than cooperating
processes (avoiding excess context switches)

 Multithreading can also simplify program development (divide-and-conquer)
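
A minimal Python sketch of the first situation, overlapping blocking I/O with computation (the sleep stands in for a blocking input call; all names are illustrative):

```python
import threading
import time

def reader(results):
    # Stands in for a blocking I/O call (e.g., reading from a socket or file).
    time.sleep(1.0)
    results["data"] = "input arrived"

def compute():
    total = 0
    for i in range(1_000_000):   # CPU work proceeds while the reader blocks
        total += i
    return total

results = {}
t = threading.Thread(target=reader, args=(results,))
t.start()
print("computed:", compute())    # runs while the reader thread is blocked
t.join()
print(results["data"])
```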
10 Marks
Thread Implementation

 Thread support is often provided in the form of a thread package.

 Two important approaches:
  User level: construct a thread library that is executed entirely in user mode.
  Kernel level: have the kernel be aware of threads and schedule them.
Implementing a Threads Package
User-level approach
o Advantages:
 o It is cheap to create and destroy threads.
  o The cost of creating or destroying a thread is determined by the cost of allocating memory to set up the thread stack.
 o Switching thread context is done in a few instructions.
  o Only the CPU registers need to be stored and subsequently reloaded with the previously stored values of the thread being switched to.
  o There is no need to change memory maps, flush the TLB, do CPU accounting, etc.
o Drawback:
 o Invocation of a blocking system call will immediately block the entire process to which the thread belongs.
Implementing a Threads Package…
Cont : User level

[Figure: user-level threads. Processes and their threads live in user space; a runtime system in user space maintains the thread status information, while the kernel maintains only process status information.]
Implementing a Threads Package…
Cont
Kernel-level approach
 No runtime system is used, and threads are managed by the kernel.
 The thread status information table is maintained within the kernel.
 Single-level scheduling is used in this approach.
 All calls that might block a thread are implemented as system calls that trap to the kernel.
Implementing a Threads Package…
Cont… : Kernel level

[Figure: kernel-level threads. Processes and their threads live in user space, but the kernel maintains the thread status information and schedules threads directly.]
Implementing a Threads Package… 8/5 Marks
Cont… Difference between User level and Kernel level Thread

 Thread package implementation:
  User-level approach: can be implemented on top of an existing OS that does not support threads.
  Kernel-level approach: the concept of a thread must be incorporated in the design of the OS.

 Scheduling:
  User-level approach: due to the use of two-level scheduling, users have the flexibility to use their own customized algorithm to schedule the threads of a process.
  Kernel-level approach: uses single-level scheduling; users can only specify priorities.

 Context switching:
  User-level approach: faster, performed by the runtime system.
  Kernel-level approach: slower, performed by the kernel.
Implementing a Threads Package…
Cont… Difference between User level and Kernel level Thread

 Status information table:
  User-level approach: maintained by the runtime system, so scalability is good.
  Kernel-level approach: maintained by the kernel, so scalability is poor.

 Clock interrupts:
  User-level approach: since there is no clock interrupt within a single process, once the CPU is given to a thread, there is no way to interrupt it.
  Kernel-level approach: clock interrupts occur periodically, and the kernel can keep track of the amount of CPU time consumed by each thread.
Hybrid Threads –Lightweight
Processes (LWP)
 An LWP is similar to a kernel-level thread:
  It runs in the context of a regular process.
  The process can have several LWPs, created by the kernel in response to system calls.

 User-level threads are created by calls to the user-level thread package.
  The thread package provides operations like creating and destroying threads, in addition to scheduling and synchronization facilities for threads.
  All these operations are carried out without intervention of the kernel.
Thread Implementation

[Figure: combining kernel-level lightweight processes and user-level threads.]
Hybrid threads – LWP

 The OS schedules an LWP which uses the


thread scheduler to decide which thread to
run.
 Thread synchronization and context
switching are done at the user level; LWP is
not involved and continues to run.
 If a thread makes a blocking system call
control passes to the OS (mode switch)
 The OS can schedule another LWP or let the
existing LWP continue to execute, in which case
it will look for another thread to run.
Hybrid threads – LWP

 Advantages of the hybrid approach


 Most thread operations (create, destroy,
synchronize) are done at the user level
 Blocking system calls need not block the
whole process
 Applications only deal with user-level
threads
 LWPs can be scheduled in parallel on the
separate processing elements of a
multiprocessor

Threads and Distributed Systems….
Multithreaded clients
 Multithreaded clients: Main issue is hiding network
latency.
 Multithreaded web client:
 Browser scans HTML, and finds more files that need to be
fetched.
 Each file is fetched by a separate thread, each issuing an
HTTP request.
 As files come in, the browser displays them.

 Multiple request-response calls to other machines


(RPC):
 Client issues several calls, each one by a different thread.
 Waits till all return.
 If calls are to different servers, will have a linear speedup.
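
A minimal sketch of the several-calls-by-several-threads pattern (Python; fetch() stands in for one request-response call, and the server list is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(server):
    time.sleep(1.0)              # stands in for network round-trip latency
    return f"reply from {server}"

servers = ["s1", "s2", "s3"]
start = time.time()
with ThreadPoolExecutor(max_workers=len(servers)) as pool:
    replies = list(pool.map(fetch, servers))   # waits till all return
print(replies, f"{time.time() - start:.1f}s")  # ~1s instead of ~3s: linear speedup
```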
Threads and Distributed Systems….
Multithreaded Servers

 Improve performance, provide better


structuring
 Consider what a file server does:
 Wait for a request
 Execute request (may require blocking I/O)
 Send reply to client
 Several models for programming the server
 Single threaded
 Multi-threaded
 Finite-state machine
Threads in Distributed Systems
- Servers
 A single-threaded (iterative) server
processes one request at a time – other
requests must wait.
 Possible solution: create (fork) a new server
process for a new request.
 This approach creates performance
problems (servers must share file system
information)

Multithreaded Servers
The true benefit from multithreading in DS is having multithreaded servers

 A multithreaded server (e.g., a file server) is organized in the dispatcher/worker model.
Finite-state machine
 Model support parallelism but with nonblocking
system calls.
 Implemented as a single threaded process and is
operated like a finite state machine.
 An event queue is maintained for request &
reply.
 During time of a disk access, it records current
state in a table & fetches next request from
queue.
 When a disk operation completes, the
appropriate piece of client state must be
retrieved to find out how to continue carrying
out the request.
Motivations for using Threads
 Overheads involved in creating a new process is more
than creating a new thread.

 Context switching between threads is cheaper than


processes due to their same address space.

 Resource sharing can be achieved more efficiently and


naturally between threads due to same address space.

 Threads allow parallelism to be combined with sequential execution and blocking system calls.
Different models to construct a server
process: As a single-thread process

 Use blocking system calls but without any


parallelism.

 If a dedicated machine is used for the file server, the CPU remains idle while the file server is waiting for a reply from the disk.

 No parallelism is achieved in this method, and fewer client requests are processed per unit of time.
Different models to construct a server
process: As a finite state machine

 Model support parallelism but with nonblocking


system calls.
 Implemented as a single threaded process and is
operated like a finite state machine.
 An event queue is maintained for request & reply.
 During time of a disk access, it records current state
in a table & fetches next request from queue.
 When a disk operation completes, the appropriate
piece of client state must be retrieved to find out
how to continue carrying out the request.

Different models to construct a server
process: As a group of threads
 Supports parallelism with blocking system calls.

 Server process is comprised of a single dispatcher


thread and multiple worker threads.

 Dispatcher thread keeps waiting in a loop for


request from the clients.

 A server process designed in this way has good


performance and is also easy to program.

Models for organizing threads

Models for organizing threads: the dispatcher-workers model, the team model, and the pipeline model
Models for organizing threads
Cont… : Dispatcher-worker model
 Single dispatcher thread and multiple worker threads.
 Dispatcher thread accepts requests from clients and
after examining the request, dispatches the request to
one of the free worker threads for further processing of
the request.
 Each worker thread works on a different client request.

[Figure: a server process for processing incoming requests. Requests arrive at a port; the dispatcher thread hands each request to a free worker thread.]
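
A minimal Python sketch of the dispatcher-workers model (queues stand in for the port and for hand-off to workers; all names are illustrative):

```python
import queue
import threading

def worker(wq):
    # Each worker repeatedly takes a dispatched request and services it.
    while True:
        req = wq.get()
        if req is None:                    # shutdown signal
            break
        print(f"{threading.current_thread().name} handled {req}")

def dispatcher(requests, n_workers=3):
    wq = queue.Queue()
    workers = [threading.Thread(target=worker, args=(wq,), name=f"worker-{i}")
               for i in range(n_workers)]
    for w in workers:
        w.start()
    while True:
        req = requests.get()               # accept a request from the "port"
        if req is None:                    # shutdown: stop every worker
            for _ in workers:
                wq.put(None)
            break
        wq.put(req)                        # dispatch to some free worker
    for w in workers:
        w.join()

port = queue.Queue()
for r in ["req-1", "req-2", "req-3", None]:
    port.put(r)
dispatcher(port)
```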
Models for organizing threads
Cont… : Team Model
 There is no dispatcher-worker relationship for processing
clients requests.
 Each thread gets and processes clients’ requests on its own.
 Each thread of the process is specialized in servicing a
specific type of request.
 Multiple types of requests can be simultaneously handled by
the process
[Figure: a server process for processing incoming requests that may be of three different types; each type of request is handled by a different, specialized thread.]
Models for organizing threads
Cont… :Pipeline Model

 Useful for applications based on the producer-consumer model.


 The threads of a process are organized as a pipeline so that
the output data generated by the first thread is used for
processing by the second thread and so on.

[Figure: a server process in which each incoming request is processed in three steps; each step is handled by a different thread, and the output of one step is the input to the next.]
Issues in designing a Thread Package

Design Issues
• Threads creation
• Thread termination
• Threads synchronization
• Threads scheduling
• Signal handling

Threads Creation
Static
 • The number of threads remains fixed for the process's entire lifetime.
 • The number of threads of a process is decided at the time of writing the program or during compilation.
 • A fixed stack is allocated to each thread.

Dynamic
 • The number of threads changes dynamically.
 • The process starts with a single thread; new threads are created as and when needed during execution.
 • The stack size is specified as a parameter to the system call for thread creation.
 • The system call returns the ID of the newly created thread, which is used in subsequent calls.
Threads Termination
 A thread may either destroy itself when it
finishes its job by making an exit call
Or
 Be killed from outside by using the kill command
and specifying the thread identifier as its
parameter.
Or
 Terminated as process terminates (Statically
created Threads)

Threads Synchronization
 Threads share a common address space, so it
is necessary to prevent multiple threads from
trying to access the same data
simultaneously.
 Segment of code in which a thread may be
accessing some shared variable is called a
critical region.
 Use mutual exclusion mechanism for threads
synchronization:
 Mutex variable
 Condition variable.

Threads Synchronization….
Cont..:Mutex variable

 Is a binary semaphore: block or unblock


 A typical sequence in the use of a mutex
1. Create and initialize mutex
2. Several threads attempt to lock mutex
3. Only one succeeds and now owns mutex
4. The owner performs some set of actions
5. The owner unlocks mutex
6. Another thread acquires mutex and repeats the
process
7. Finally mutex is destroyed
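
The same sequence, sketched with Python's threading.Lock as the binary mutex (the counter workload is illustrative):

```python
import threading

mutex = threading.Lock()          # 1. create and initialize the mutex
counter = 0

def owner_actions(n):
    global counter
    for _ in range(n):
        with mutex:               # 2-3. attempt to lock; one thread succeeds
            counter += 1          # 4. the owner performs its actions
                                  # 5. unlock happens on leaving the block

threads = [threading.Thread(target=owner_actions, args=(10_000,))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()        # 6. other threads acquire it in turn
print(counter)                    # always 40000; without the mutex, updates may be lost
```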

Threads Synchronization….
Cont..:Mutex variable

 If the mutex variable is already locked, either:
  the thread is blocked and entered in a queue of threads waiting for the mutex, or
  failure is returned to the thread, which can then
   continue with another job, or
   keep retrying to lock the mutex variable until it succeeds.
Threads Synchronization….
Cont..: Conditonal Variable

 A condition variable is associated with a mutex variable and reflects the Boolean state of that variable.
 Operations on a condition variable: wait and signal.

[Figure: Thread1's lock(mutex_A) succeeds; it enters its critical region and uses shared resource A, then unlocks mutex_A and signals A_free. Thread2's lock(mutex_A) fails, so it waits on A_free in the blocked state; after the signal, its lock(mutex_A) succeeds. Mutex_A is a mutex variable for exclusive use of shared resource A; A_free is a condition variable for resource A becoming free.]
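
A minimal sketch of the wait/signal pattern from the figure, using Python's threading.Condition bound to a lock (names mirror the figure; the code is illustrative, not from the slides):

```python
import threading

mutex_a = threading.Lock()
a_free = threading.Condition(mutex_a)    # condition variable bound to mutex_A
resource_free = False

def thread1():
    global resource_free
    with mutex_a:                        # lock(mutex_A) succeeds
        print("thread1: using resource A")   # critical region
        resource_free = True
        a_free.notify()                  # signal(A_free)

def thread2():
    with mutex_a:
        while not resource_free:
            a_free.wait()                # wait(A_free): releases mutex, blocks
        print("thread2: now using resource A")

t2 = threading.Thread(target=thread2)
t1 = threading.Thread(target=thread1)
t2.start(); t1.start()
t1.join(); t2.join()
```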
Threads Scheduling

 Special features for threads scheduling :


1. Priority assignment facility
2. Flexibility to vary quantum size
dynamically
3. Handoff scheduling
4. Affinity scheduling

Threads Scheduling ….
cont: Priority assignment facility

 Threads are scheduled on a first-in, first-out basis, or the round-robin policy is used to timeshare the CPU cycles.
 To provide more flexibility, priorities are assigned to the various threads of an application.
 Priority scheduling may be non-preemptive or preemptive:
  Non-preemptive: the CPU is not taken away from a thread even if a higher-priority thread becomes ready for execution.
  Preemptive: a higher-priority thread always preempts a lower-priority thread.
Threads Scheduling …. cont: Flexibility to
vary quantum size dynamically

 Uses the round-robin scheduling scheme, but varies the size of the time quantum used to timeshare the CPU cycles among the threads instead of keeping it fixed.
 Not suitable for multiprocessor systems.
 Gives good response times to short requests, even on heavily loaded systems.
 Provides high efficiency on lightly loaded systems.
Threads Scheduling …. cont: Handoff
Scheduling

 Allows a thread to name its successor if it wants to.
 The sending thread can give up the CPU and allow the receiving thread to run next.
 Provides the flexibility to bypass the queue of runnable threads and directly switch the CPU to the thread specified by the currently running thread.
Threads Scheduling …. cont: Affinity
scheduling

 A thread is scheduled on the CPU it last ran on, in the hope that part of its address space is still in that CPU's cache.
 Gives better performance on a multiprocessor system.
Signal handling

 Signals provide software-generated interrupts and exceptions.
 Issues:
  A signal must be handled properly no matter which thread of the process receives it.
  Signals must be prevented from getting lost.
 Approaches:
  Create a separate exception-handler thread in each process.
  Assign each thread its own private global variables for signaling conditions.
