Chap 4. Resource and Process Management
Kanchan K Doke
Computer Engg. Department, BVCOE
Contents
Introduction
Features of a global scheduling algorithm
Task assignment approach
Load-balancing approach
Load-sharing approach
Process management
Process migration
Code migration
Introduction
Motivation
Desirable features of a scheduling algorithm Marks 10
Dynamic in nature
Decisions should be based on the current (changing) load of the nodes and not on a fixed static policy
Desirable features of a scheduling algorithm
Stability
The algorithm is unstable when processes keep migrating without accomplishing any useful work
This occurs when nodes fluctuate rapidly between the lightly-loaded and heavily-loaded states
Scalability
A scheduling algorithm should be capable of handling small as well as large
networks
e.g., by probing only m of the N nodes when selecting a host.
Fault tolerance
Should be capable of working after the crash of one or more nodes of the
system
Fairness of Service
Users initiating equivalent processes expect to receive the same quality of service
Marks 2
Load-balancing approach
Load-sharing approach
Task assignment approach Marks 5
Task assignment example
There are two nodes, {n1, n2}, and six tasks, {t1, t2, t3, t4, t5, t6}.
Task assignment parameters:
Task execution cost (x_ab, the cost of executing task a on node b)
Inter-task communication cost (c_ij, the communication cost between tasks i and j)
Inter-task communication costs (c_ij):
        t1   t2   t3   t4   t5   t6
  t1     0    6    4    0    0   12
  t2     6    0    8   12    3    0
  t3     4    8    0    0   11    0
  t4     0   12    0    0    5    0
  t5     0    3   11    5    0    0
  t6    12    0    0    0    0    0
Execution costs (x_ab) on nodes n1 and n2:
        n1   n2
  t1     5   10
  t2     2    ∞
  t3     4    4
  t4     6    3
  t5     5    2
  t6     ∞    4
(∞ means the task cannot be executed on that node.)
1) Serial assignment, where tasks t1, t2, t3 are assigned to node n1 and tasks t4, t5, t6 to node n2:
Execution cost, x = x11 + x21 + x31 + x42 + x52 + x62 = 5 + 2 + 4 + 3 + 2 + 4 = 20
Communication cost, c = c14 + c15 + c16 + c24 + c25 + c26 + c34 + c35 + c36 = 0 + 0 + 12 + 12 + 3 + 0 + 0 + 11 + 0 = 38
Total cost = 58
2) Optimal assignment, where tasks t1, t2, t3, t4, t5 are assigned to node n1 and task t6 is assigned to node n2:
Execution cost, x = x11 + x21 + x31 + x41 + x51 + x62 = 5 + 2 + 4 + 6 + 5 + 4 = 26
Communication cost, c = c16 + c26 + c36 + c46 + c56 = 12 + 0 + 0 + 0 + 0 = 12
Total cost = 38
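The following minimal Python sketch (illustrative, not from the slides) recomputes these costs from the two tables above; a large INF value stands in for the entries where a task cannot be executed on a node.

```python
# Minimal sketch: compute the total cost (execution + communication) of a task assignment.
# Table values come from the example above; INF is a stand-in for "cannot execute".
INF = float("inf")

exec_cost = {           # x[task][node]
    "t1": {"n1": 5, "n2": 10}, "t2": {"n1": 2, "n2": INF},
    "t3": {"n1": 4, "n2": 4},  "t4": {"n1": 6, "n2": 3},
    "t5": {"n1": 5, "n2": 2},  "t6": {"n1": INF, "n2": 4},
}
comm_cost = {           # c[(ti, tj)], symmetric; omitted pairs cost 0
    ("t1", "t2"): 6, ("t1", "t3"): 4, ("t1", "t6"): 12,
    ("t2", "t3"): 8, ("t2", "t4"): 12, ("t2", "t5"): 3,
    ("t3", "t5"): 11, ("t4", "t5"): 5,
}

def total_cost(assignment):
    """assignment maps each task to a node; communication cost is counted
    only for task pairs placed on different nodes."""
    x = sum(exec_cost[t][n] for t, n in assignment.items())
    tasks = list(assignment)
    c = sum(comm_cost.get((a, b), comm_cost.get((b, a), 0))
            for i, a in enumerate(tasks) for b in tasks[i + 1:]
            if assignment[a] != assignment[b])
    return x + c

serial  = {"t1": "n1", "t2": "n1", "t3": "n1", "t4": "n2", "t5": "n2", "t6": "n2"}
optimal = {"t1": "n1", "t2": "n1", "t3": "n1", "t4": "n1", "t5": "n1", "t6": "n2"}
print(total_cost(serial))   # 58  (execution 20 + communication 38)
print(total_cost(optimal))  # 38  (execution 26 + communication 12)
```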
A classification of Load-Balancing Algorithms (figure): algorithms are classified as Static or Dynamic, and further as Cooperative or Noncooperative.
Load-balancing approach Marks 5
Load-balancing approach
Type of dynamic load-balancing algorithms:
▫ Distributed approach
▫ Contains entities that make decisions for a predefined set of nodes
▫ Distributed algorithms avoid the bottleneck of collecting state information at a single node and react faster
Process classification
Local process:
Is processed at its originating node.
Remote process:
Is processed at a node different from the one on which it originated.
Issues in designing Load-balancing
algorithms Marks 10
• Location policy
▫ Determines to which node the transferable process should be sent
Load estimation policy (estimate the workload) I.
for Load-balancing algorithms
Load estimation policy (estimate the workload) II.
for Load-balancing algorithms
• In some cases the true load can vary widely, depending on the remaining service time of all the processes on the node, which can be measured in several ways:
Load estimation policy (estimate the workload) III.
for Load-balancing algorithms
Load indicators:
Resource queue lengths
Process transfer policy (execute a process locally or remotely) I.
for Load-balancing algorithms
• Most of the algorithms use a threshold policy to decide whether the node is lightly loaded or heavily loaded
Figure: threshold transfer policies. With a single threshold, a node is either Overloaded or Underloaded. With a double threshold, a high mark and a low mark divide node states into Overloaded, Normal, and Underloaded regions.
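A small illustrative sketch of the double-threshold transfer policy; the threshold values and the use of queue length as the load measure are assumptions, not values from the slides.

```python
# Illustrative sketch of the double-threshold (high mark / low mark) transfer policy.
HIGH_MARK = 4   # assumed high-mark threshold on the node's queue length
LOW_MARK = 2    # assumed low-mark threshold

def load_region(queue_length: int) -> str:
    """Classify a node as overloaded, normal, or underloaded."""
    if queue_length > HIGH_MARK:
        return "overloaded"    # candidate sender of processes
    if queue_length < LOW_MARK:
        return "underloaded"   # candidate receiver of processes
    return "normal"            # neither sends nor requests work

print(load_region(5))  # overloaded
```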
Location policy (Selection of destination node)
for Load-balancing algorithms
Figure: taxonomy of location policies, comprising the Threshold, Shortest, Bidding, and Pairing methods described on the following slides.
Location policy (Selection of destination node) I.
for Load-balancing algorithms
• Threshold method
▫ The policy selects a random node and checks whether that node is able to receive the process; if so, the process is transferred.
▫ If the node rejects the transfer, another node is selected at random.
▫ This continues until the (static) probe limit is reached.
• Shortest method (a sketch follows below)
▫ L distinct nodes are chosen at random and each is polled to determine its load.
▫ The process is transferred to the node having the minimum load value, unless that node's workload prohibits it from accepting the process.
▫ Otherwise the process is executed at its originating node.
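A minimal sketch of the Shortest method, assuming a poll_load helper (simulated here) and an illustrative acceptance threshold; these names and values are not from the slides.

```python
import random

THRESHOLD = 4      # assumed maximum load a destination node may already have
L = 3              # number of distinct nodes polled at random

def poll_load(node):
    """Simulated poll of a remote node's current load (placeholder for a real probe)."""
    return random.randint(0, 8)

def shortest_location_policy(local_node, other_nodes):
    """Poll L randomly chosen nodes and pick the least loaded one, if acceptable."""
    candidates = random.sample(other_nodes, min(L, len(other_nodes)))
    loads = {node: poll_load(node) for node in candidates}
    best = min(loads, key=loads.get)
    if loads[best] < THRESHOLD:   # destination's workload does not prohibit acceptance
        return best
    return local_node             # otherwise execute at the originating node

print(shortest_location_policy("n0", ["n1", "n2", "n3", "n4"]))
```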
Location policy (Selection of destination node) II.
for Load-balancing algorithms
• Bidding method (a bid-selection sketch follows below)
▫ Nodes contain managers (to send processes) and contractors (to receive processes)
▫ Managers broadcast a "request for bid" message; contractors respond with bids (prices based on the capacity, available resources, and memory size of the contractor node), and the manager selects the best offer
▫ The winning contractor is notified and asked whether or not it accepts the process for execution
▫ Advantage: a node can decide whether to participate in the global scheduling process
▫ Disadvantages:
▫ Increase in communication overhead
▫ Difficult to decide on a good pricing policy
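A minimal sketch of the manager's side of the bidding method; the Bid structure and the single price field summarizing capacity, resources, and memory are simplifying assumptions.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    contractor: str   # the contractor node making the offer
    price: float      # assumed single number summarizing capacity, resources, memory

def select_best_bid(bids):
    """Manager side: pick the best (here, cheapest) offer among the contractors' bids."""
    if not bids:
        return None   # no contractor is willing to accept the process
    return min(bids, key=lambda b: b.price)

best = select_best_bid([Bid("n2", 3.5), Bid("n4", 2.0), Bid("n7", 2.8)])
print(best.contractor if best else "execute locally")   # n4
```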
Location policy (Selection of destination node) III.
for Load-balancing algorithms
Pairing
Each node asks some randomly chosen node to form a pair with it
Two nodes that differ greatly in load are temporarily paired with each other and migration starts
Process selection:
A node only tries to find a partner if it has at least two processes
A process is selected by comparing its expected completion time on the current node with its expected completion time on the paired node plus the migration delay (see the sketch below)
The pair is broken as soon as the migration is over
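A one-function sketch of the migration decision implied above; the three time estimates are assumed inputs.

```python
# Illustrative decision rule used in the pairing method: migrate a process only if
# finishing it remotely (including the migration delay) beats finishing it locally.
def should_migrate(expected_local_time, expected_remote_time, migration_delay):
    return expected_remote_time + migration_delay < expected_local_time

print(should_migrate(expected_local_time=10.0,
                     expected_remote_time=4.0,
                     migration_delay=2.0))   # True: 4 + 2 < 10
```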
State information exchange policy I.
for Load-balancing algorithms
State information exchange policy II.
for Load-balancing algorithms
Periodic broadcast
Each node broadcasts its state information after the
elapse of every T units of time
Problems: heavy traffic, fruitless messages, and poor scalability, since the amount of information exchanged becomes too large for networks with many nodes
State information exchange policy III.
for Load-balancing algorithms
On-demand exchange
A node broadcasts a State-Information-Request message when its state switches from the normal region to either the underloaded or the overloaded region.
On receiving this message, the other nodes reply with their own state information to the requesting node.
A further improvement is that only those nodes reply whose state is useful to the requesting node.
Exchange by polling
To avoid the poor scalability that comes from broadcast messages, a partner node is searched for by polling the other nodes one by one, until a suitable partner is found or the poll limit is reached.
Priority assignment policy
for Load-balancing algorithms
• Rules:
Selfish: local processes are given higher priority than remote processes
Selfless: remote processes are given higher priority than local processes
Intermediate: priority depends on the relative numbers of local and remote processes at the node
Load-sharing approach Marks 10
• Location policy
▫ Determines to which node the transferable process should be sent
Load estimation policies
for Load-sharing algorithms
Process transfer policies
for Load-sharing algorithms
Marks 10
Location policies II.
for Load-sharing algorithms
Sender-initiated location policy
When a node becomes overloaded, it either broadcasts a message or randomly probes the other nodes one by one to find a node that is able to receive remote processes
When broadcasting, a suitable node is known as soon as a reply arrives
Figure (flowchart of the sender-initiated algorithm): when a task arrives and QueueLength + 1 > T, the sender randomly selects a node i not yet in its poll set, adds it to the poll set, and polls it; if QueueLength at i < T, the task is transferred to i; otherwise polling continues until the number of polls reaches PollLimit, after which the task is queued locally.
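A minimal sketch of the sender-initiated polling loop shown in the flowchart; T, PollLimit, and the helper callables are illustrative assumptions.

```python
import random

T = 3            # assumed threshold queue length
POLL_LIMIT = 5   # assumed maximum number of probes per task arrival

def sender_initiated(local_queue_len, other_nodes, get_queue_length, transfer_to):
    """On task arrival: if accepting the task locally would exceed T, poll random
    nodes and transfer the task to the first one whose queue length is below T."""
    if local_queue_len + 1 <= T:
        return "queued locally"
    poll_set = set()
    while len(poll_set) < min(POLL_LIMIT, len(other_nodes)):
        node = random.choice([n for n in other_nodes if n not in poll_set])
        poll_set.add(node)
        if get_queue_length(node) < T:   # node i can receive the task
            transfer_to(node)
            return f"transferred to {node}"
    return "queued locally"              # poll limit reached without finding a receiver

print(sender_initiated(4, ["n1", "n2", "n3"],
                       get_queue_length=lambda n: 1,
                       transfer_to=lambda n: None))
```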
Marks 10
Location policies III.
for Load-sharing algorithms
Receiver-initiated location policy
When a node becomes underloaded, it either broadcasts a message or randomly probes the other nodes one by one to indicate its willingness to receive remote processes
Figure (flowchart of the receiver-initiated algorithm): when a task departs at node j and j's QueueLength < T, node j randomly polls other nodes; if a polled node i has work to spare, a task is transferred from i to j; otherwise polling continues until the number of polls reaches PollLimit, after which node j waits for a predetermined period before trying again.
Marks 10
Location policies IV.
for Load-sharing algorithms
State information exchange policies
for Load-sharing algorithms
SUMMARY
The resource manager of a distributed system schedules processes to optimize a combination of resource usage, response time, network congestion, and scheduling overhead
Three different approaches have been discussed:
The task assignment approach deals with the assignment of tasks in order to minimize inter-process communication costs and improve the turnaround time of the complete process, taking some constraints into account
In the load-balancing approach the process assignment decisions attempt to equalize the average workload on all the nodes of the system
In the load-sharing approach the process assignment decisions attempt to keep all the nodes busy if there are sufficient processes in the system for all the nodes
PROCESS
MANAGEMENT
Introduction
• Goal of process management
• To make the best possible use of the processing resources of the entire system by sharing them among all processes.
Introduction Cont…
Cont… : Flow of execution of a migrating process
Figure: process P1 executes on the source node; its execution is suspended, and during the freezing time control is transferred to the destination node, where execution of P1 resumes.
Process Migration steps Cont…
Desirable features of a good process
migration mechanism
o Transparency
o Minimal interference
o Minimal Residual Dependencies
o Efficiency
o Robustness
1. Transparency
Level of transparency:
Object Access Level
System Call & Interprocess Communication Level
Cont… : Object Access level
Transparency
The minimum requirement for a system to support a non-preemptive process migration facility.
Cont… : System Call & IPC level
For a migrated process, system calls should be location independent.
2. Minimal Interference
3. Minimal Residual Dependencies
o No residual dependency should be left on the previous node.
o Otherwise:
o The process continues to impose a load on its previous node
o A failure or reboot of the previous node will cause the process to fail
4. Efficiency
5. Robustness
6. Communication between coprocesses of a job
Process Migration Mechanisms
Four major activities:
1. Freezing the process on its source node and restarting it on its destination node
2. Transferring the process's address space from its source node to its destination node
3. Forwarding messages meant for the migrant process
4. Handling communication between coprocesses that have been separated as a result of the migration
Cont… : Immediate and delayed blocking of the process
o Immediate blocking:
o If the process is not executing a system call
o If the process is executing a system call but is sleeping at an interruptible priority waiting for a kernel event to occur
o Delayed blocking:
o If the process is executing a system call but is sleeping at a non-interruptible priority waiting for a kernel event to occur
o A flag is set so that when the system call completes, the process blocks itself from further execution
Cont… : Fast and Slow I/O Operations
Cont… : Information about open files
o Includes:
o Name and identifier of the file
o Their access modes
o The current positions of their file pointers.
Cont… : Information about open files
o Approaches:
o A link is created to the file, and the pathname of the link is used as an access point to the file after the process migrates.
o An open file's complete pathname is reconstructed when required by modifying the kernel.
o Other issues:
o Files used by the process may have to be transferred to the destination node:
o Permanent transfer
o Temporary transfer
Cont… : Reinstating the process on its
Destination Node
2. Address Space Transfer Mechanisms
Cont…
2. Address Space Transfer Mechanisms
Cont…: Total Freezing
A process’s execution is stopped while its address space is
being transferred.
Figure: once the migration decision is made, execution is suspended on the source node while the address space is transferred, and then resumed on the destination node.
2. Address Space Transfer Mechanisms
Cont…: Pretransferring
Figure: after the migration decision is made, the address space is transferred while the process keeps executing on the source node; execution is suspended only for a short freezing time before it resumes on the destination node.
o Disadvantage:
o It may increase the total time for migrating due to the
possibility of redundant page transfers.
2. Address Space Transfer Mechanisms
Cont…: Transfer on Reference
Assumption:
processes tend to use only a relatively small part of their address space while executing
2. Address Space Transfer Mechanisms
Cont…: Transfer on Reference
Figure: after the migration decision is made, execution is suspended only for a short freezing time and resumes on the destination node; the address space is then transferred from the source node on demand, as the process references it.
2. Address Space Transfer Mechanisms
Cont…: Transfer on Reference
Advantage:
• Very short switching time of the process from
its source node to its destination node.
Disadvantage:
• Imposes a continued load on the process's source node, and the process fails if the source node fails or is rebooted.
3. Message-forwarding Mechanisms
Types of messages:
1. Messages received at the source node after the process’s
execution has been stopped on its source node and the
process’s execution has not yet been started on its
destination node.
2. Messages received at the source node after the process’s
execution has started on its destination node.
3. Messages that are to be sent to the migrant process from any
other node after it has started executing on the destination
node.
3. Message-forwarding Mechanisms
…Cont
Mechanisms:
Mechanism of resending the message
Origin site mechanism
Link traversal mechanism
Link update mechanism
3. Message-forwarding Mechanisms
…Cont: Mechanism of resending the message
• Messages of type 1 and 2 are either dropped or negatively acknowledged; the sender then resends the message once the migrant process can be located at its new node.
• Reply from the receiver:
• Type 1: "try again later, the process is frozen"
• Type 2: "this process is unknown at this node"
Disadvantage:
The message-forwarding mechanism of the process migration operation is nontransparent to the processes interacting with the migrant process.
(Figure: the sender's messages are resent as the process migrates from its origin node to Dest 1 and then to Dest 2.)
3. Message-forwarding Mechanisms
…Cont: Origin Site Mechanism
3. Message-forwarding Mechanisms
…Cont: Link Traversal Mechanism
• Messages of type 1 are queued and sent to the destination node as part of the migration procedure.
• A link is left on the source node to redirect messages of type 2 and 3.
Two components of a link:
A unique process identifier: source node ID + process ID
The last known location of the process
Disadvantages:
Several links may have to be traversed to locate the process from a node
If any node in the chain of links fails, the process cannot be located.
(Figure: a message sent to the origin node is forwarded along the chain of links left at each node the process migrated from, until it reaches the process's current location.)
3. Message-forwarding Mechanisms
…Cont: Link Update Mechanism
• During the transfer phase of the migrant process, the source node sends link-update messages to the kernels controlling all of the migrant process's communication partners.
• Messages of type 1 and 2 are forwarded by the source node
• Messages of type 3 are sent directly to the destination node
Link-update message:
Tells the new address of each link held by the migrant process
Is acknowledged, for synchronization purposes
(Figure: after each migration the communication partners learn the process's new, current location, so subsequent messages are sent there directly.)
4. Mechanisms for Handling Coprocesses
Mechanisms :
Disallowing separation of coprocesses.
Home node or origin site concept.
4. Mechanisms for Handling Coprocesses
Cont… : Disallowing Separation of coprocesses
Disadvantage:
It does not allow the use of parallelism within jobs.
4. Mechanisms for Handling Coprocesses
Cont… : Home node or Origin Sites Concept
Process Migration in Heterogeneous
Systems
Figure: with four processor types, converting data directly between every pair of processor types requires 12 different translations, whereas routing every conversion through an external data representation requires only 8 (one conversion to and one from the external representation per processor type).
Advantages of Process
Migration….. Cont
Threads
Threads………Cont
10 Marks
Thread Implementation
Implementing a Threads Package
User-level approach
o Advantages:
o It is cheap to create and destroy threads.
o The cost of creating or destroying a thread is determined by the cost of allocating memory to set up the thread's stack.
o Switching thread context is done in a few instructions.
o Only the CPU registers need to be stored and subsequently reloaded with the previously stored values of the thread to which the CPU is being switched.
o There is no need to change memory maps, flush the
TLB, do CPU accounting etc.
o Drawback:
o Invocation of a blocking system call will immediately
block the entire process to which the thread belongs.
Implementing a Threads Package…
Cont : User level
Figure (user-level threads): in user space, a runtime system maintains the threads' status information; in kernel space, the kernel maintains only the processes' status information.
Implementing a Threads Package…
Cont
Kernel-level approach
No runtime system is used and threads are managed by the
kernel.
Implementing a Threads Package…
Cont… : Kernel level
Figure (kernel-level threads): processes and their threads are in user space, while the kernel maintains the threads' status information in kernel space.
Implementing a Threads Package… 8/5 Marks
Cont… Difference between User-level and Kernel-level Threads
Scheduling:
User-level approach: due to the use of two-level scheduling, users have the flexibility to use their own customized algorithm to schedule the threads of a process.
Kernel-level approach: uses single-level scheduling; the user can only specify thread priorities.
Context switching:
User-level approach: is faster, performed by the runtime system
Kernel-level approach: is slower, performed by the kernel
Implementing a Threads Package…
Cont… Difference between User-level and Kernel-level Threads
Clock interrupt:
User-level approach: since there is no clock interrupt within a single process, once the CPU is given to a thread there is no way to interrupt it until it voluntarily gives up the CPU.
Kernel-level approach: clock interrupts occur periodically, so the kernel can keep track of the amount of CPU time consumed by each thread.
Hybrid Threads – Lightweight Processes (LWP)
LWP is similar to a kernel-level thread:
It runs in the context of a regular process
The process can have several LWPs created by the kernel in response
to a system call.
Hybrid threads – LWP
Threads and Distributed Systems….
Multithreaded clients
Multithreaded clients: the main issue is hiding network latency.
Multithreaded web client (see the sketch below):
The browser scans the HTML and finds more files that need to be fetched.
Each file is fetched by a separate thread, each issuing an HTTP request.
As the files come in, the browser displays them.
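A minimal sketch of such a multithreaded fetch using Python's standard library; the URLs are placeholders.

```python
import threading
import urllib.request

def fetch(url):
    """Issue one HTTP request and report how many bytes arrived."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = resp.read()
        print(f"{url}: {len(data)} bytes")
    except OSError as exc:
        print(f"{url}: fetch failed ({exc})")

# Placeholder URLs standing in for the embedded files found while scanning the HTML.
urls = ["https://example.com/", "https://www.example.org/"]
threads = [threading.Thread(target=fetch, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()   # the browser would display each file as its thread completes
```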
Multithreaded Servers
The true benefit of multithreading in distributed systems comes from having multithreaded servers
Different models to construct a server
process: As a single-thread process
Different models to construct a server
process: As a finite state machine
Different models to construct a server
process: As a group of threads
Supports parallelism with blocking system calls.
Models for organizing threads
Models:
Dispatcher-workers model
Team model
Pipeline model
Models for organizing threads
Cont… : Dispatcher-worker model (a sketch follows below)
A single dispatcher thread and multiple worker threads.
The dispatcher thread accepts requests from clients and, after examining each request, dispatches it to one of the free worker threads for further processing.
Each worker thread works on a different client request.
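A minimal sketch of the dispatcher-worker model using a shared queue; the request strings, worker count, and sentinel-based shutdown are illustrative choices, not part of the slides.

```python
# Illustrative dispatcher-worker model: one dispatcher thread accepts requests and
# hands each one to a free worker thread via a shared queue.
import queue
import threading

requests = queue.Queue()

def worker(worker_id):
    while True:
        req = requests.get()        # blocks until the dispatcher provides work
        if req is None:             # sentinel: no more requests
            break
        print(f"worker {worker_id} processing {req}")

def dispatcher(incoming):
    for req in incoming:            # accept requests from clients
        requests.put(req)           # dispatch to any free worker
    for _ in workers:
        requests.put(None)          # tell every worker to stop

workers = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for w in workers:
    w.start()
dispatcher(["req-1", "req-2", "req-3", "req-4"])
for w in workers:
    w.join()
```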
Models for organizing threads
Cont… : Team Model
There is no dispatcher-worker relationship for processing clients' requests.
Each thread gets and processes clients' requests on its own.
Each thread of the process is specialized in servicing a specific type of request.
Multiple types of requests can be handled by the process simultaneously.
Models for organizing threads
Cont… : Pipeline Model
Issues in designing a Thread Package
Design Issues
• Thread creation
• Thread termination
• Thread synchronization
• Thread scheduling
• Signal handling
Threads Creation
Threads can be created either statically or dynamically.
Threads Termination
A thread may either destroy itself when it finishes its job by making an exit call
Or
Be killed from outside by using the kill command with the thread identifier as its parameter
Or
Be terminated when its process terminates (statically created threads)
Threads Synchronization
Threads share a common address space, so it is necessary to prevent multiple threads from trying to access the same data simultaneously.
A segment of code in which a thread may be accessing some shared variable is called a critical region.
Mutual exclusion mechanisms are used for thread synchronization (a mutex sketch follows below):
Mutex variable
Condition variable
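A minimal sketch of mutex-based synchronization of a critical region, using Python's threading.Lock; the shared counter is an illustrative example.

```python
# The lock makes the increment of the shared counter a critical region,
# so concurrent updates are not lost.
import threading

counter = 0
mutex = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with mutex:            # lock(mutex) ... unlock(mutex) around the critical region
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                 # 400000, thanks to mutual exclusion
```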
Threads Synchronization….
Cont…: Mutex variable
Threads Synchronization….
Cont…: Condition variable (a sketch follows below)
Figure: the thread holding mutex_A unlocks it and signals the condition variable A_free; the waiting thread's Lock(mutex_A) then succeeds.
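A minimal sketch of condition-variable synchronization using Python's threading.Condition; the producer/consumer roles and variable names are illustrative.

```python
# A consumer waits on the condition until a producer signals that the shared item exists.
import threading

condition = threading.Condition()   # pairs a mutex with a condition variable
item = None

def producer():
    global item
    with condition:                 # lock the associated mutex
        item = "data"
        condition.notify()          # signal: the item is now available

def consumer():
    with condition:
        while item is None:
            condition.wait()        # releases the mutex and sleeps until signalled
        print("consumed", item)     # lock reacquired: safe to access the item

t1 = threading.Thread(target=consumer)
t2 = threading.Thread(target=producer)
t1.start(); t2.start(); t1.join(); t2.join()
```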
Threads Scheduling ….
cont: Priority assignment facility
Preemptive
A higher-priority thread always preempts a lower-priority thread
Threads Scheduling …. cont: Flexibility to
vary quantum size dynamically
Threads Scheduling …. cont: Handoff
Scheduling
Threads Scheduling …. cont: Affinity
scheduling
Signal handling