OS Unit 2

CPU Scheduling
Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Basic Concepts
Maximum CPU utilization obtained with multiprogramming

CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait

CPU burst distribution


Alternating Sequence of CPU and I/O Bursts
Histogram of CPU-burst Times
CPU Scheduler
Selects from among the processes in the ready queue and allocates the CPU to one of them
Queue may be ordered in various ways
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Scheduling under 1 and 4 is nonpreemptive
All other scheduling is preemptive
Consider access to shared data
Consider preemption while in kernel mode
Consider interrupts occurring during crucial OS activities
Dispatcher

Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that program

Dispatch latency – time it takes for the dispatcher to stop one process and start another running
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible

Throughput – # of processes that complete their execution per time unit

Turnaround time – amount of time to execute a particular process

Waiting time – amount of time a process has been waiting in the ready queue

Response time – amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment)
Scheduling Algorithm Optimization Criteria

Max CPU utilization


Max throughput
Min turnaround time
Min waiting time
Min response time
First-Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3
Suppose that the processes arrive in the order: P1, P2, P3
The Gantt chart for the schedule is:

P1: 0–24 | P2: 24–27 | P3: 27–30

Waiting time for P1 = 0; P2 = 24; P3 = 27


Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order: P2, P3, P1
The Gantt chart for the schedule is:

P2: 0–3 | P3: 3–6 | P1: 6–30

Waiting time for P1 = 6; P2 = 0; P3 = 3


Average waiting time: (6 + 0 + 3)/3 = 3
Much better than previous case
Convoy effect - short process behind long process
Consider one CPU-bound and many I/O-bound processes
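
As a sanity check on the arithmetic above, here is a small C sketch (an illustration added here, not part of the original slides) that computes FCFS waiting times for processes that all arrive at time 0:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};   /* P1, P2, P3 in arrival order */
    int n = 3, wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);
        total += wait;
        wait += burst[i];       /* every later process also waits out this burst */
    }
    printf("average waiting time = %.2f\n", (double)total / n);  /* prints 17.00 */
    return 0;
}

Reordering the burst array to {3, 3, 24} reproduces the second schedule's average of 3.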
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst
Use these lengths to schedule the process with the shortest time

SJF is optimal – gives minimum average waiting time for a given set of processes
The difficulty is knowing the length of the next CPU request
Could ask the user
Example of SJF
Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3
SJF scheduling chart

P4: 0–3 | P1: 3–9 | P3: 9–16 | P2: 16–24

Average waiting time = (3 + 16 + 9 + 0) / 4 = 7


Determining Length of Next CPU Burst
Can only estimate the length – should be similar to the previous one
Then pick process with shortest predicted next CPU burst

Can be done by using the length of previous CPU bursts, using exponential averaging

1. tₙ = actual length of the nᵗʰ CPU burst
2. τₙ₊₁ = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τₙ₊₁ = α tₙ + (1 − α) τₙ

Commonly, α set to ½
Preemptive version called shortest-remaining-time-first
Prediction of the Length of the Next CPU Burst
Examples of Exponential Averaging
 =0
n+1 = n
Recent history does not count
 =1
n+1 =  tn
Only the actual last CPU burst counts
If we expand the formula, we get:
n+1 =  tn+(1 - ) tn -1 + …
+(1 -  )j  tn -j + …
+(1 -  )n +1 0

Since both  and (1 - ) are less than or equal to 1, each successive term has less weight than its
predecessor
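
A direct C transcription of the estimator may help; the burst values below are illustrative (with τ₀ = 10 and α = ½ they reproduce the prediction figure referenced above):

#include <stdio.h>

/* tau_next = alpha * t_n + (1 - alpha) * tau_n */
double predict_next(double alpha, double t_n, double tau_n) {
    return alpha * t_n + (1.0 - alpha) * tau_n;
}

int main(void) {
    double alpha = 0.5, tau = 10.0;          /* initial guess tau_0 */
    double t[] = {6, 4, 6, 4, 13, 13, 13};   /* observed CPU bursts */
    for (int i = 0; i < 7; i++) {
        printf("predicted %4.1f, actual %4.1f\n", tau, t[i]);
        tau = predict_next(alpha, t[i], tau);   /* fold in the new observation */
    }
    return 0;
}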
Example of Shortest-remaining-time-first
Now we add the concepts of varying arrival times and preemption to the analysis

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
Preemptive SJF Gantt Chart

P1: 0–1 | P2: 1–5 | P4: 5–10 | P1: 10–17 | P3: 17–26

Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 msec


Priority Scheduling
A priority number (integer) is associated with each process

The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)
Preemptive
Nonpreemptive

SJF is priority scheduling where priority is the inverse of predicted next CPU burst time

Problem ≡ Starvation – low priority processes may never execute

Solution ≡ Aging – as time progresses increase the priority of the process


Example of Priority Scheduling
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2
Priority scheduling Gantt Chart

P2: 0–1 | P5: 1–6 | P1: 6–16 | P3: 16–18 | P4: 18–19

Average waiting time = 8.2 msec


Round Robin (RR)

Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds. After this
time has elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the
CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Timer interrupts every quantum to schedule next process
Performance
q large ⇒ FIFO
q small ⇒ q must be large with respect to context switch, otherwise overhead is too high
Example of RR with Time Quantum = 4

Process   Burst Time
P1        24
P2        3
P3        3

The Gantt chart is:

P1: 0–4 | P2: 4–7 | P3: 7–10 | P1: 10–14 | P1: 14–18 | P1: 18–22 | P1: 22–26 | P1: 26–30
Typically, higher average turnaround than SJF, but better response
q should be large compared to context switch time
q usually 10ms to 100ms, context switch < 10 μsec
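
The quantum-4 example above can be replayed with a short C simulation (all processes assumed to arrive at time 0):

#include <stdio.h>

int main(void) {
    int remaining[] = {24, 3, 3};   /* P1, P2, P3 */
    int n = 3, q = 4, t = 0, unfinished = n;
    while (unfinished > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int run = remaining[i] < q ? remaining[i] : q;   /* at most one quantum */
            printf("t=%2d: P%d runs for %d\n", t, i + 1, run);
            t += run;
            remaining[i] -= run;
            if (remaining[i] == 0) unfinished--;
        }
    }
    printf("all done at t=%d\n", t);   /* 30, matching the Gantt chart */
    return 0;
}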
Time Quantum and Context Switch Time
Turnaround Time Varies With The Time Quantum

80% of CPU bursts should be shorter than q
Multilevel Queue
Ready queue is partitioned into separate queues, e.g.:
foreground (interactive)
background (batch)
Each process is permanently assigned to a given queue

Each queue has its own scheduling algorithm:


foreground – RR
background – FCFS

Scheduling must be done between the queues:


Fixed priority scheduling; (i.e., serve all from foreground then from background). Possibility of
starvation.
Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR, 20% to background in FCFS
Multilevel Queue Scheduling
Multilevel Feedback Queue

A process can move between the various queues; aging can be implemented this way

Multilevel-feedback-queue scheduler defined by the following parameters (summarized in the sketch after this list):


number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will enter when that process needs service
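
One way to make the parameter list concrete is a small configuration struct; all names here are hypothetical, sketching how such a scheduler might be described in C:

#define MAX_QUEUES 8

typedef enum { ALG_RR, ALG_FCFS } sched_alg;

typedef struct {
    int       num_queues;                   /* number of queues */
    sched_alg alg[MAX_QUEUES];              /* scheduling algorithm for each queue */
    int       quantum_ms[MAX_QUEUES];       /* time quantum for RR queues */
    int       demote_after_ms[MAX_QUEUES];  /* CPU time consumed before demotion */
    int       entry_queue;                  /* queue a new process enters */
} mlfq_config;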
Example of Multilevel Feedback Queue
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS

Scheduling
A new job enters queue Q0 which is served FCFS
When it gains the CPU, the job receives 8 milliseconds
If it does not finish in 8 milliseconds, the job is moved to queue Q1
At Q1 the job is again served FCFS and receives 16 additional milliseconds
If it still does not complete, it is preempted and moved to queue Q2
Multilevel Feedback Queues
Process Synchronization
Objectives
To introduce the critical-section problem, whose solutions can be used to ensure the consistency of
shared data

To present both software and hardware solutions of the critical-section problem

To introduce the concept of an atomic transaction and describe mechanisms to ensure atomicity
Background

Concurrent access to shared data may result in data inconsistency

Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes

Suppose that we wanted to provide a solution to the consumer-producer problem that fills all the buffers. We can do so by having an integer counter that keeps track of the number of full buffers. Initially, counter is set to 0. It is incremented by the producer after it produces a new buffer and is decremented by the consumer after it consumes a buffer.
Producer

while (true) {

/* produce an item and put in nextProduced */


while (counter == BUFFER_SIZE)
; // do nothing
buffer [in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}
Consumer

while (true) {
while (counter == 0)
; // do nothing
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;

/* consume the item in nextConsumed */


}
Race Condition
counter++ could be implemented as

register1 = counter
register1 = register1 + 1
counter = register1

counter-- could be implemented as

register2 = counter
register2 = register2 - 1
counter = register2

Consider this execution interleaving with "counter = 5" initially:

S0: producer executes register1 = counter   {register1 = 5}
S1: producer executes register1 = register1 + 1   {register1 = 6}
S2: consumer executes register2 = counter   {register2 = 5}
S3: consumer executes register2 = register2 - 1   {register2 = 4}
S4: producer executes counter = register1   {counter = 6}
S5: consumer executes counter = register2   {counter = 4}
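
The interleaving above is easy to reproduce. The following C sketch (added for illustration; compile with gcc -pthread) updates a shared counter from two threads with no synchronization, and the final value is usually not 0:

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

int counter = 0;                       /* shared, unprotected */

void *producer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter++;                     /* load, increment, store: three steps */
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter--;                     /* interleaves with the producer's steps */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %d (expected 0)\n", counter);
    return 0;
}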
Critical Section Problem
Consider system of n processes {p0, p1, … pn-1}
Each process has critical section segment of code
Process may be changing common variables, updating a table, writing a file, etc.
When one process in critical section, no other may be in its critical section
Critical section problem is to design protocol to solve this
Each process must ask permission to enter critical section in entry section, may
follow critical section with exit section, then remainder section
Especially challenging with preemptive kernels
Critical Section
General structure of process pi is
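
The referenced figure is not reproduced here; in the pseudocode style of these slides, the general structure is:

do {
    entry section
    critical section
    exit section
    remainder section
} while (TRUE);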
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections

2. Progress - If no process is executing in its critical section and there exist some
processes that wish to enter their critical section, then the selection of the
processes that will enter the critical section next cannot be postponed
indefinitely

3. Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made
a request to enter its critical section and before that request is granted
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the n processes
Peterson’s Solution
Two process solution

Assume that the LOAD and STORE instructions are atomic; that is, cannot
be interrupted

The two processes share two variables:


int turn;
boolean flag[2];

The variable turn indicates whose turn it is to enter the critical section

The flag array is used to indicate if a process is ready to enter the critical
section. flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi

do {
flag[i] = TRUE;
turn = j;
while (flag[j] && turn == j);
critical section
flag[i] = FALSE;
remainder section
} while (TRUE);

Provable that
1. Mutual exclusion is preserved
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
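
Because modern processors reorder memory operations, a faithful executable version needs atomic accesses. A minimal C11 sketch, assuming sequentially consistent atomics stand in for the atomic LOAD and STORE above:

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];        /* flag[i] == true: Pi wants to enter */
atomic_int  turn;           /* index of the process being deferred to */

void enter_region(int i) {  /* entry section for process Pi */
    int j = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, j);                 /* politely yield the turn */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                                   /* busy-wait */
}

void leave_region(int i) {  /* exit section */
    atomic_store(&flag[i], false);
}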
Synchronization Hardware
Many systems provide hardware support for critical section code

Uniprocessors – could disable interrupts


Currently running code would execute without preemption
Generally too inefficient on multiprocessor systems
Operating systems using this approach are not broadly scalable

Modern machines provide special atomic hardware instructions


Atomic = non-interruptible
Either test memory word and set value
Or swap contents of two memory words
Solution to Critical-section
Problem Using Locks
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
TestAndSet Instruction

Definition:

boolean TestAndSet (boolean *target)


{
boolean rv = *target;
*target = TRUE;
return rv;
}
Solution using TestAndSet
Shared boolean variable lock, initialized to FALSE
Solution:

do {
while ( TestAndSet (&lock ))
; // do nothing

// critical section

lock = FALSE;

// remainder section

} while (TRUE);
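
C11 exposes an equivalent primitive as atomic_flag, whose test-and-set operation is guaranteed atomic by the language; a brief sketch of the same spin lock using it:

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear means unlocked */

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;   /* spin while the old value was already set */
}

void release(void) {
    atomic_flag_clear(&lock);   /* lock = FALSE */
}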
Swap Instruction

Definition:

void Swap (boolean *a, boolean *b)


{
boolean temp = *a;
*a = *b;
*b = temp;
}
Solution using Swap
Shared Boolean variable lock initialized to FALSE; Each process has a local Boolean variable key
Solution:
do {
key = TRUE;
while ( key == TRUE)
Swap (&lock, &key );

// critical section

lock = FALSE;

// remainder section

} while (TRUE);
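
The swap-based lock maps onto C11's atomic_exchange in the same way; a short sketch under that assumption:

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool lock;   /* false means unlocked */

void acquire(void) {
    /* atomically store true and fetch the previous value */
    while (atomic_exchange(&lock, true))
        ;           /* spin while it was already held */
}

void release(void) {
    atomic_store(&lock, false);
}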
Bounded-waiting Mutual Exclusion
with TestandSet()
do {
waiting[i] = TRUE;
key = TRUE;
while (waiting[i] && key)
key = TestAndSet(&lock);
waiting[i] = FALSE;
// critical section
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = FALSE;
else
waiting[j] = FALSE;
// remainder section
} while (TRUE);
Semaphore
Synchronization tool that does not require busy waiting
Semaphore S – integer variable
Two standard operations modify S: wait() and signal()
Originally called P() and V()
Less complicated than hardware-based solutions
Can only be accessed via two indivisible (atomic) operations
wait (S) {
while (S <= 0)
; // no-op
S--;
}
signal (S) {
S++;
}
Semaphore as
General Synchronization Tool
Counting semaphore – integer value can range over an unrestricted domain
Binary semaphore – integer value can range only between 0
and 1; can be simpler to implement
Also known as mutex locks
Can implement a counting semaphore S as a binary semaphore
Provides mutual exclusion
Semaphore mutex; // initialized to 1
do {
wait (mutex);
// Critical Section
signal (mutex);
// remainder section
} while (TRUE);
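
On POSIX systems this pattern can be tried directly with sem_t; a small, runnable sketch (the worker logic is illustrative):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;                 /* binary semaphore, initialized to 1 */
int shared = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);    /* wait (mutex) */
        shared++;            /* critical section */
        sem_post(&mutex);    /* signal (mutex) */
    }
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* always 200000 */
    sem_destroy(&mutex);
    return 0;
}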
Semaphore Implementation
Must guarantee that no two processes can execute wait () and signal () on the
same semaphore at the same time

Thus, implementation becomes the critical section problem where the wait and
signal code are placed in the critical section
Could now have busy waiting in critical section implementation
But implementation code is short
Little busy waiting if critical section rarely occupied

Note that applications may spend lots of time in critical sections and therefore
this is not a good solution
Semaphore Implementation
with no Busy waiting

With each semaphore there is an associated waiting queue


Each entry in a waiting queue has two data items:
value (of type integer)
pointer to next record in the list

Two operations:
block – place the process invoking the operation on the appropriate
waiting queue
wakeup – remove one of processes in the waiting queue and place it in
the ready queue
Semaphore Implementation with
no Busy waiting (Cont.)
Implementation of wait:
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
Implementation of signal:

signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
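
The semaphore used by these two operations can be declared in C roughly as follows; process and the queue operations are assumptions standing in for kernel structures:

typedef struct process process;   /* process control block, kernel-defined */

typedef struct {
    int value;        /* when negative, |value| is the number of waiters */
    process *list;    /* queue of processes blocked on this semaphore */
} semaphore;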
Deadlock and Starvation
Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of
the waiting processes
Let S and Q be two semaphores initialized to 1
P0               P1
wait (S);        wait (Q);
wait (Q);        wait (S);
…                …
signal (S);      signal (Q);
signal (Q);      signal (S);
Starvation – indefinite blocking
A process may never be removed from the semaphore queue in which it is suspended
Priority Inversion – Scheduling problem when lower-priority process holds a lock needed by higher-priority process
Solved via priority-inheritance protocol
Classical Problems of Synchronization
Classical problems used to test newly-proposed synchronization schemes

Bounded-Buffer Problem

Readers and Writers Problem

Dining-Philosophers Problem
Bounded-Buffer Problem
N buffers, each can hold one item

Semaphore mutex initialized to the value 1

Semaphore full initialized to the value 0

Semaphore empty initialized to the value N


Bounded Buffer Problem (Cont.)
The structure of the producer process

do {

// produce an item in nextp

wait (empty);
wait (mutex);

// add the item to the buffer

signal (mutex);
signal (full);
} while (TRUE);
Bounded Buffer Problem (Cont.)
The structure of the consumer process

do {
wait (full);
wait (mutex);

// remove an item from buffer to nextc

signal (mutex);
signal (empty);

// consume the item in nextc

} while (TRUE);
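
Combining the two loops gives a runnable POSIX sketch; the buffer size, item type, and iteration counts are illustrative assumptions:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 8                           /* buffer capacity (assumed) */
#define ITEMS 32

int buffer[N];
int in = 0, out = 0;

sem_t empty_slots;                    /* free slots, initialized to N  */
sem_t full_slots;                     /* filled slots, initialized to 0 */
sem_t mutex;                          /* protects in/out, initialized to 1 */

void *producer(void *arg) {
    for (int item = 0; item < ITEMS; item++) {
        sem_wait(&empty_slots);       /* wait (empty) */
        sem_wait(&mutex);             /* wait (mutex) */
        buffer[in] = item;            /* add the item to the buffer */
        in = (in + 1) % N;
        sem_post(&mutex);             /* signal (mutex) */
        sem_post(&full_slots);        /* signal (full) */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);        /* wait (full) */
        sem_wait(&mutex);             /* wait (mutex) */
        int item = buffer[out];       /* remove an item from the buffer */
        out = (out + 1) % N;
        sem_post(&mutex);             /* signal (mutex) */
        sem_post(&empty_slots);       /* signal (empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}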
Readers-Writers Problem
A data set is shared among a number of concurrent processes
Readers – only read the data set; they do not perform any updates
Writers – can both read and write

Problem – allow multiple readers to read at the same time


Only one single writer can access the shared data at the same time

Several variations of how readers and writers are treated – all involve priorities

Shared Data
Data set
Semaphore mutex initialized to 1
Semaphore wrt initialized to 1
Integer readcount initialized to 0
Readers-Writers Problem (Cont.)
The structure of a writer process

do {
wait (wrt) ;

// writing is performed

signal (wrt) ;
} while (TRUE);
Readers-Writers Problem (Cont.)
The structure of a reader process

do {
wait (mutex) ;
readcount ++ ;
if (readcount == 1)
wait (wrt) ;
signal (mutex) ;

// reading is performed

wait (mutex) ;
readcount -- ;
if (readcount == 0)
signal (wrt) ;
signal (mutex) ;
} while (TRUE);
Readers-Writers Problem Variations
First variation – no reader kept waiting unless writer has permission to use shared
object

Second variation – once writer is ready, it performs write asap

Both may have starvation leading to even more variations

Problem is solved on some systems by kernel providing reader-writer locks
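
POSIX, for example, offers pthread_rwlock_t; a brief usage sketch:

#include <pthread.h>

pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;

void reader(void) {
    pthread_rwlock_rdlock(&rw);   /* many readers may hold this at once */
    /* reading is performed */
    pthread_rwlock_unlock(&rw);
}

void writer(void) {
    pthread_rwlock_wrlock(&rw);   /* exclusive access */
    /* writing is performed */
    pthread_rwlock_unlock(&rw);
}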


Dining-Philosophers Problem

Philosophers spend their lives thinking and eating


Don’t interact with their neighbors; occasionally try to pick up 2 chopsticks (one at a time) to eat from bowl
Need both to eat, then release both when done
In the case of 5 philosophers
Shared data
Bowl of rice (data set)
Semaphore chopstick [5] initialized to 1
Dining-Philosophers Problem Algorithm
The structure of Philosopher i:

do {
wait ( chopstick[i] );
wait ( chopStick[ (i + 1) % 5] );

// eat

signal ( chopstick[i] );
signal (chopstick[ (i + 1) % 5] );

// think

} while (TRUE);

What is the problem with this algorithm?
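
Deadlock: if all five philosophers pick up their left chopstick at the same moment, each waits forever for the right one. One common remedy, sketched below in the same pseudocode style, breaks the circular wait by having odd-numbered philosophers pick up their chopsticks in the opposite order:

do {
    if (i % 2 == 0) {                      // even: left, then right
        wait ( chopstick[i] );
        wait ( chopstick[(i + 1) % 5] );
    } else {                               // odd: right, then left
        wait ( chopstick[(i + 1) % 5] );
        wait ( chopstick[i] );
    }

    // eat

    signal ( chopstick[i] );
    signal ( chopstick[(i + 1) % 5] );

    // think

} while (TRUE);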


Problems with Semaphores
Incorrect use of semaphore operations:

signal (mutex) …. wait (mutex)

wait (mutex) … wait (mutex)

Omitting wait (mutex) or signal (mutex) (or both)

Deadlock and starvation
