
UNIT-2 PROCESS MANAGEMENT

Syllabus: Processes - Process Concept - Process Scheduling - Operations on Processes - Inter-process Communication; CPU Scheduling - Scheduling criteria - Scheduling algorithms; Threads - Multithreading Models - Threading issues; Process Synchronization - The critical-section problem - Synchronization hardware - Semaphores - Mutex - Classical problems of synchronization - Monitors; Deadlock - Methods for handling deadlocks, Deadlock prevention, Deadlock avoidance, Deadlock detection, Recovery from deadlock.

A process can be thought of as a program in execution.

 A process will need certain resources, such as CPU time, memory, files, and I/O devices, to accomplish its task.
 These resources are allocated to the process either when it is created or while it is executing.
 A process is the unit of work in most systems.
 Systems consist of a collection of processes: operating-system processes execute system code, and user processes execute user code.
 The operating system is responsible for several important aspects of process and thread management: the creation and deletion of both user and system processes; the scheduling of processes; and the provision of mechanisms for synchronization, communication, and deadlock handling for processes.

Processes

 Early computers allowed only one program to be executed at a time.
 This program had complete control of the system and had access to all the system's resources.
 In contrast, contemporary computer systems allow multiple programs to be loaded into memory and executed concurrently.
 This evolution required firmer control and more compartmentalization of the various programs; and these needs resulted in the notion of a process, which is a program in execution.
 A process is the unit of work in a modern time-sharing system.
 A system therefore consists of a collection of processes: operating-system processes executing system code, and user processes executing user code.
 By switching the CPU between processes, the operating system can make the computer more productive.

Process Concept

 An operating system executes a variety of programs:
o Batch system – jobs
o Time-shared systems – user programs or tasks
 A process is more than the program code, which is sometimes known as the text section.
 It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers.
 A process generally also includes the process stack, which contains temporary data (such as function parameters, return addresses, and local variables), and a data section, which contains global variables.
 A process may also include a heap, which is memory that is dynamically allocated during process run time.
 The structure of a process in memory is shown in Figure.
 A program is a passive entity, such as a file containing a list of instructions stored on disk (often called an executable file).
 In contrast, a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources.
 A program becomes a process when an executable file is loaded into memory.
 Two common techniques for loading executable files are double-clicking an icon representing the executable file and entering the name of the executable file on the command line (as in prog.exe or a.out).

Process State

 As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states:

• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.

Diagram of process state

 It is important to realize that only one process can be running on any processor at any instant. Many processes may be ready and waiting, however. The state diagram corresponding to these states is presented in Figure above.
Process Control Block
 Each process is represented in the operating system by a process control block (PCB), also called a task control block.
 A PCB is shown in Figure below.
 It contains many pieces of information associated with a specific process, including these:
 Process state. The state may be new, ready, running, waiting, halted, and so on.
 Program counter. The counter indicates the address of the next instruction to be executed for this process.
 CPU registers. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.

Diagram showing CPU switch from process to process.

 CPU-scheduling information. This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
 Memory-management information. This information may include such items as the value of the base and limit registers and the page tables, or the segment tables, depending on the memory system used by the operating system.
 Accounting information. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
 I/O status information. This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
 The PCB simply serves as the repository for any information that may vary from process to process.
Threads
 A process is a program that performs a single thread of execution.
 For example, when a process is running a word-processor program, a single thread of instructions is being executed.
 This single thread of control allows the process to perform only one task at a time.
 Most modern operating systems have extended the process concept to allow a process to have multiple threads of execution and thus to perform more than one task at a time.
 On a system that supports threads, the PCB is expanded to include information for each thread.

Process Scheduling
 The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
 The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running.
 To meet these objectives, the process scheduler selects an available process for program execution on the CPU.
 For a single-processor system, there will never be more than one running process.
 If there are more processes, the rest will have to wait until the CPU is free and can be rescheduled.
Scheduling Queues
 As processes enter the system, they are put into a job queue, which consists of all processes in the system.
 The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue.
 This queue is generally stored as a linked list.
 A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB includes a pointer field that points to the next PCB in the ready queue.
 The system also includes other queues.
 When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O request.
 The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue, as shown below.

The ready queue and various I/O device queues


 A common representation of process scheduling is a queueing diagram, such as that in Figure below.

Queueing-diagram representation of process scheduling.

 Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.
 A new process is initially put in the ready queue. It waits there until it is selected for execution, or dispatched. Once the process is allocated the CPU and is executing, one of several events could occur:
o The process could issue an I/O request and then be placed in an I/O queue.
o The process could create a new child process and wait for the child's termination.
o The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.
 In the first two cases, the process eventually switches from the waiting state to the ready state and is then put back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.
Schedulers
 A process migrates among the various scheduling queues throughout its lifetime.
 The operating system must select, for scheduling purposes, processes from these queues in some fashion.
 The selection process is carried out by the appropriate scheduler.
 Often, in a batch system, more processes are submitted than can be executed immediately. These processes are spooled to a mass-storage device (typically a disk), where they are kept for later execution.
 The long-term scheduler, or job scheduler, selects processes from this pool and loads them into memory for execution.
 The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them.
 The short-term scheduler must select a new process for the CPU frequently, since a process may execute for only a few milliseconds before waiting for an I/O request.
 The long-term scheduler executes much less frequently; minutes may separate the creation of one new process and the next.
 The long-term scheduler controls the degree of multiprogramming (the number of processes in memory).
 It is important that the long-term scheduler make a careful selection. In general, most processes can be described as either I/O bound or CPU bound.
 An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations.
 A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations.
 It is important that the long-term scheduler select a good process mix of I/O-bound and CPU-bound processes.
 Some operating systems, such as time-sharing systems, may introduce an additional, intermediate level of scheduling. This medium-term scheduler is diagrammed in Figure below.

Addition of medium-term scheduling to the queueing diagram.


 The key idea behind a medium-term scheduler is that sometimes it can be advantageous to remove a process from memory and thus reduce the degree of multiprogramming. Later, the process can be reintroduced into memory, and its execution can be continued where it left off. This scheme is called swapping.
 The process is swapped out, and is later swapped in, by the medium-term scheduler.
 Swapping may be necessary to improve the process mix or because a change in memory requirements has overcommitted available memory, requiring memory to be freed up.
Context Switch
 When an interrupt occurs, the system needs to save the current context of the process running on the CPU so that it can restore that context when its processing is done, essentially suspending the process and then resuming it.
 The context is represented in the PCB of the process. It includes the value of the CPU registers, the process state, and memory-management information.
 Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch.
 When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run.

Operations on Processes
 The processes in most systems can execute concurrently, and they may be
created and deleted dynamically.
 Thus, these systems must provide a mechanism for process creation and
termination.
Process Creation
 During the course of execution, a process may create several new processes.
 The creating process is called a parent process, and the new processes are
called the children of that process.
 Each of these new processes may in turn create other processes, forming a tree
of processes.
 Most operating systems (including UNIX, Linux, and Windows) identify
processes according to a unique process identifier (or pid), which is typically
an integer number.
 The pid provides a unique value for each process in the system, and it can be
used as an index to access various attributes of a process within the kernel.
 Figure illustrates a typical process tree for the Linux operating system,
showing the name of each process and its pid.

A tree of processes on a typical Linux system. The init process (pid 1) is the root; its children here are login (pid 8415), kthreadd (pid 2), and sshd (pid 3028). Under kthreadd are khelper (pid 6) and pdflush (pid 200); under login is bash (pid 8416), which has created ps (pid 9298) and emacs (pid 9204); under sshd (pid 3028) is another sshd (pid 3610), under which tcsch (pid 4005) runs.

 The init process (which always has a pid of 1) serves as the root parent process
for all user processes.
 Once the system has booted, the init process can also create various user
processes, such as a web or print server, an ssh server, and the like.
 In Figure, we see two children of init—kthreadd and sshd. The kthreadd
process is responsible for creating additional processes that perform tasks on
behalf of the kernel (in this situation, khelper and pdflush).
 The sshd process is responsible for managing clients that connect to the
system by using ssh.
 The login process is responsible for managing clients that directly log onto the
system.
 In this example, a client has logged on and is using the bash shell, which has
been assigned pid 8416.
 Using the bash command-line interface, this user has created the process ps as
well as the emacs editor.
 On UNIX and Linux systems, we can obtain a listing of processes by using the
ps command.
For example, the command
ps -el
will list complete information for all processes currently active in the system.
 When a process creates a new process, two possibilities for execution exist:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
 There are also two address-space possibilities for the new process:
1. The child process is a duplicate of the parent process (it has the same
program and data as the parent).
2. The child process has a new program loaded into it.
 To illustrate these differences, let’s first consider the UNIX operating system.
 In UNIX, each process is identified by its process identifier, which is a unique
integer.
 A new process is created by the fork() system call.
 The new process consists of a copy of the address space of the original
process.
 This mechanism allows the parent process to communicate easily with its
child process.
 The figure below illustrates process creation using these UNIX system calls.

Process creation using the fork() system call.

Process Termination
 A process terminates when it finishes executing its final statement and asks the
operating system to delete it by using the exit() system call.
 At that point, the process may return a status value (typically an integer) to its parent
process (via the wait() system call).
 All the resources of the process—including physical and virtual memory, open files,
and I/O buffers—are deallocated by the operating system.
 Termination can occur in other circumstances as well.
 A process can cause the termination of another process via an appropriate system call.
 Thus, when one process creates a new process, the identity of the newly created
process is passed to the parent.
 A parent may terminate the execution of one of its children for a variety of reasons,
such as these:
• The child has exceeded its usage of some of the resources that it has been allocated.
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue if
its parent terminates.
 Some systems do not allow a child to exist if its parent has terminated.
 In such systems, if a process terminates (either normally or abnormally), then all its
children must also be terminated. This phenomenon, referred to as cascading
termination, is normally initiated by the operating system.
 When a process terminates, its resources are deallocated by the operating system.
 However, its entry in the process table must remain there until the parent calls wait(),
because the process table contains the process’s exit status.
 A process that has terminated, but whose parent has not yet called wait(), is known as
a zombie process.
 The parent process may wait for the termination of a child process by using the wait() system call. The call returns status information and the pid of the terminated process:
pid = wait(&status);
 If the parent terminates without invoking wait(), the child process becomes an orphan.
Interprocess Communication
 Processes executing concurrently in the operating system may be either independent
processes or cooperating processes.
 A process is independent if it cannot affect or be affected by the other processes
executing in the system.
 Any process that does not share data with any other process is independent.
 A process is cooperating if it can affect or be affected by the other processes
executing in the system. Clearly, any process that shares data with other processes is a
cooperating process.
 There are several reasons for providing an environment that allows process
cooperation:
1. Information sharing
2. Computation speedup
3. Modularity
4. Convenience
 Cooperating processes require an interprocess communication (IPC) mechanism
that will allow them to exchange data and information.
 There are two fundamental models of interprocess communication: shared memory
and message passing.
 In the shared-memory model, a region of memory that is shared by cooperating
processes is established.
 Processes can then exchange information by reading and writing data to the shared
region.
 In the message-passing model, communication takes place by means of messages
exchanged between the cooperating processes.
 The two communications models are contrasted in Figure below.
Communications models. (a) Message passing. (b) Shared memory.
Shared-Memory Systems
 Interprocess communication using shared memory requires communicating processes
to establish a region of shared memory.
 Typically, a shared-memory region resides in the address space of the process
creating the shared-memory segment.
 Other processes that wish to communicate using this shared-memory segment must
attach it to their address space.
 The processes are also responsible for ensuring that they are not writing to the same
location simultaneously.
 To illustrate the concept of cooperating processes, let’s consider the producer–
consumer problem, which is a common paradigm for cooperating processes.
 A producer process produces information that is consumed by a consumer process.
 For example, a compiler may produce assembly code that is consumed by an
assembler.
 The assembler, in turn, may produce object modules that are consumed by the loader.
 The producer–consumer problem
Shared data:

#define BUFFER_SIZE 10

typedef struct {
    ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

This solution is correct, but it can use only BUFFER_SIZE - 1 elements.

The producer process using shared memory:

item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

The consumer process using shared memory:

item next_consumed;
while (true) {
    while (in == out)
        ; /* do nothing -- buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}

 This scheme allows at most BUFFER_SIZE - 1 items in the buffer at the same time.
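The circular-buffer logic above can be exercised in a single process by turning the two busy-wait conditions into non-blocking functions (a sketch for illustration; produce() and consume() are names introduced here, not part of the text, and item is simplified to int):

```c
#include <stdbool.h>

#define BUFFER_SIZE 10
typedef int item;            /* stand-in for the struct in the text */

static item buffer[BUFFER_SIZE];
static int in = 0;           /* next free slot */
static int out = 0;          /* next item to consume */

/* Returns false when the buffer is full (the producer's wait condition). */
bool produce(item x) {
    if (((in + 1) % BUFFER_SIZE) == out)
        return false;
    buffer[in] = x;
    in = (in + 1) % BUFFER_SIZE;
    return true;
}

/* Returns false when the buffer is empty (the consumer's wait condition). */
bool consume(item *x) {
    if (in == out)
        return false;
    *x = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return true;
}
```

Filling the buffer shows the BUFFER_SIZE - 1 limit directly: after nine successful produce() calls, the slot just behind out is the only free one, so the full test triggers and the tenth item is rejected.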
Message-Passing Systems
 The scheme requires that these processes share a region of memory and that
the code for accessing and manipulating the shared memory be written
explicitly by the application programmer.
 Another way to achieve the same effect is for the operating system to provide
the means for cooperating processes to communicate with each other via a
message-passing facility.
 Message passing provides a mechanism to allow processes to communicate
and to synchronize their actions without sharing the same address space.
 It is particularly useful in a distributed environment, where the communicating
processes may reside on different computers connected by a network.
 A message-passing facility provides at least two operations:
send(message) receive(message)
 Messages sent by a process can be of either fixed or variable size.
 If processes P and Q want to communicate, they must send messages to and receive messages from each other: a communication link must exist between them.
 This link can be implemented in a variety of ways.
 Here are several methods for logically implementing a link and the
send()/receive() operations:
1. Direct or indirect communication
2. Synchronous or asynchronous communication
3. Automatic or explicit buffering
We look at issues related to each of these features next.
Direct Communication
Naming
 Processes that want to communicate must have a way to refer to each other.
 They can use either direct or indirect communication.
 Under direct communication, each process that wants to communicate must explicitly
name the recipient or sender of the communication.
 In this scheme, the send() and receive() primitives are defined as:
• send(P, message)—Send a message to process P.
• receive(Q, message)—Receive a message from process Q.
 A communication link in this scheme has the following properties:
• A link is established automatically between every pair of processes that want to
communicate. The processes need to know only each other’s identity to communicate.
• A link is associated with exactly two processes.
• Between each pair of processes, there exists exactly one link.
 This scheme exhibits symmetry in addressing; that is, both the sender process and the receiver process must name the other to communicate.
 A variant of this scheme employs asymmetry in addressing: only the sender names the recipient, while the recipient need not name the sender. In this variant, the send() and receive() primitives are defined as follows:
• send(P, message)—Send a message to process P.
• receive(id, message)—Receive a message from any process.
 The variable id is set to the name of the process with which communication has taken place.
 Disadvantages
 Limited modularity of the resulting process definitions
 Changing the identifier of a process may necessitate examining all other
process definitions.
Indirect communication
 With indirect communication, the messages are sent to and received from mailboxes,
or ports.
 A mailbox can be viewed abstractly as an object into which messages can be placed
by processes and from which messages can be removed.
 Each mailbox has a unique identification.
 A process can communicate with another process via a number of different
mailboxes, but two processes can communicate only if they have a shared mailbox.
The send() and receive() primitives are defined as follows:
• send(A, message)—Send a message to mailbox A.
• receive(A, message)—Receive a message from mailbox A.
 In this scheme, a communication link has the following properties:
• A link is established between a pair of processes only if both members of
the pair have a shared mailbox.
• A link may be associated with more than two processes.
• Between each pair of communicating processes, a number of different links may
exist, with each link corresponding to one mailbox.
Mailbox sharing
 P1, P2, and P3 share mailbox A
 P1, sends; P2 and P3 receive
 Who gets the message?
Solutions
 Allow a link to be associated with at most two processes
 Allow only one process at a time to execute a receive operation
 Allow the system to select arbitrarily the receiver. Sender is notified
who the receiver was.

 A mailbox may be owned either by a process or by the operating system.


 When a process that owns a mailbox terminates, the mailbox disappears.
 In contrast, a mailbox that is owned by the operating system has an existence of its
own.
 It is independent and is not attached to any particular process. The operating system
then must provide a mechanism that allows a process to do the following:
• Create a new mailbox.
• Send and receive messages through the mailbox.
• Delete a mailbox.
Synchronization
 Communication between processes takes place through calls to send() and
receive() primitives.
 There are different design options for implementing each primitive.
Message passing may be either blocking or nonblocking, also known as synchronous and asynchronous.
o Blocking send. The sending process is blocked until the message is received by the receiving process or by the mailbox.
o Nonblocking send. The sending process sends the message and resumes operation.
o Blocking receive. The receiver blocks until a message is available.
o Nonblocking receive. The receiver retrieves either a valid message or a null.
 Different combinations are possible. If both send() and receive() are blocking, we have a rendezvous between the sender and the receiver.
 With a rendezvous, the producer–consumer problem becomes trivial:

message next_produced;
while (true) {
    /* produce an item in next_produced */
    send(next_produced);
}

message next_consumed;
while (true) {
    receive(next_consumed);
    /* consume the item in next_consumed */
}
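One concrete realization of blocking send/receive on UNIX-like systems is a pipe (a sketch for illustration; pipe_demo is a name introduced here, and the pipe plays the role of an OS-provided message queue):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* A child sends a 5-byte message through a pipe; the parent performs a
   blocking receive. Returns the number of bytes received, or -1 on error. */
ssize_t pipe_demo(char *buf, size_t cap) {
    int fd[2];
    if (pipe(fd) == -1)
        return -1;                      /* fd[0] = read end, fd[1] = write end */
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {                     /* child: the sender */
        close(fd[0]);
        write(fd[1], "hello", 5);       /* send(message) */
        _exit(0);
    }
    close(fd[1]);                       /* parent: the receiver */
    ssize_t n = read(fd[0], buf, cap);  /* blocking receive(message) */
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return n;
}
```

The read() here blocks until data arrives, so the two processes synchronize without sharing any address space, which is exactly the property the message-passing model promises.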

Buffering
Whether communication is direct or indirect, messages exchanged by communicating
processes reside in a temporary queue.
Basically, such queues can be implemented in three ways:
• Zero capacity. The queue has a maximum length of zero; thus, the link cannot have
any messages waiting in it. In this case, the sender must block until the recipient
receives the message.
• Bounded capacity. The queue has finite length n; thus, at most n messages can
reside in it. If the queue is not full when a new message is sent, the message is placed
in the queue and the sender can continue execution without waiting. The link’s
capacity is finite, however. If the link is full, the sender must block until space is
available in the queue.
• Unbounded capacity. The queue’s length is potentially infinite; thus, any
number of messages can wait in it. The sender never blocks.

The zero-capacity case is sometimes referred to as a message system with no buffering. The other cases are referred to as systems with automatic buffering.
CPU Scheduling
Objectives
 To introduce CPU scheduling, which is the basis for multiprogrammed
operating systems
 To describe various CPU-scheduling algorithms
 To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a
particular system
 To examine the scheduling algorithms of several operating systems
Basic Concepts
 Maximum CPU utilization obtained with multiprogramming
 CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU
execution and I/O wait
 CPU burst followed by I/O burst
 CPU burst distribution is of main concern

CPU Scheduler
The short-term scheduler selects from among the processes in the ready queue and allocates the CPU to one of them.
The queue may be ordered in various ways.
CPU-scheduling decisions may take place when a process:
1. Switches from the running to the waiting state
2. Switches from the running to the ready state
3. Switches from the waiting to the ready state
4. Terminates
Scheduling under circumstances 1 and 4 is nonpreemptive; all other scheduling is preemptive.
Preemption raises several concerns:
 access to shared data
 preemption while in kernel mode
 interrupts occurring during crucial OS activities
Dispatcher
 Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
o switching context
o switching to user mode
o jumping to the proper location in the user program to restart
that program
Dispatch latency – time it takes for the dispatcher to stop one process and start another
running

Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – # of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time from when a request was submitted until the first response is produced, not until output is complete (for time-sharing environments)

Scheduling Algorithm Optimization Criteria


o Max CPU utilization
o Max throughput
o Min turnaround time
o Min waiting time
o Min response time
Scheduling Algorithms
1. First come first served
2. Shortest job first
3. Priority
4. Round Robin
5. Multilevel queue
6. Multilevel feedback queue
1. First-Come, First-Served (FCFS) Scheduling
Process   Burst Time
P1        24
P2        3
P3        3
Suppose that the processes arrive in the order P1, P2, P3.
The Gantt chart for the schedule is:

|      P1      | P2 | P3 |
0              24   27   30

Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
Suppose instead that the processes arrive in the order P2, P3, P1.
The Gantt chart for the schedule is:

| P2 | P3 |      P1      |
0    3    6              30

Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than the previous case.
Convoy effect – short processes get stuck behind a long process; consider one CPU-bound and many I/O-bound processes.
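The waiting-time arithmetic above can be written as a short function (a sketch; fcfs_avg_wait is a name introduced here, and all processes are assumed to arrive at time 0 in the given order):

```c
/* FCFS average waiting time: each process waits for the sum of the
   bursts of all processes ahead of it in the queue. */
double fcfs_avg_wait(const int burst[], int n) {
    int wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += wait;       /* process i starts after the earlier bursts */
        wait += burst[i];
    }
    return (double)total / n;
}
```

Running it on the two orderings above reproduces the averages of 17 and 3, making the convoy effect concrete: putting the long burst first multiplies everyone else's wait.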

2. Shortest-Job-First (SJF) Scheduling

Associate with each process the length of its next CPU burst.
Use these lengths to schedule the process with the shortest time.
SJF is optimal – it gives the minimum average waiting time for a given set of processes.
The difficulty is knowing the length of the next CPU request; one could ask the user.

Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3

SJF scheduling chart (nonpreemptive, treating all four processes as available at time 0):

| P4 |   P1   |    P3    |    P2    |
0    3        9          16         24

Average waiting time = (3 + 16 + 9 + 0)/4 = 7

Example of shortest-remaining-time-first
Now we add the concepts of varying arrival times and preemption to the analysis.

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

Preemptive SJF Gantt chart:

| P1 | P2  |  P4  |   P1   |   P3   |
0    1     5      10       17       26

Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 msec
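The preemptive variant can be simulated one time unit at a time (a sketch; srtf_avg_wait is a name introduced here, and it assumes at most 32 processes): at every tick it picks the arrived process with the least remaining work.

```c
#include <limits.h>

/* Shortest-remaining-time-first: returns the average waiting time for
   processes given by arrival[i] and burst[i] (n <= 32 assumed). */
double srtf_avg_wait(const int arrival[], const int burst[], int n) {
    int rem[32], done = 0, t = 0, total_wait = 0;
    for (int i = 0; i < n; i++) rem[i] = burst[i];
    while (done < n) {
        int pick = -1, best = INT_MAX;
        for (int i = 0; i < n; i++)     /* arrived process, least remaining */
            if (arrival[i] <= t && rem[i] > 0 && rem[i] < best) {
                best = rem[i];
                pick = i;
            }
        if (pick < 0) { t++; continue; } /* nothing has arrived: CPU idle */
        rem[pick]--;                     /* run the chosen process 1 unit */
        t++;
        if (rem[pick] == 0) {
            done++;
            /* waiting time = turnaround time - burst time */
            total_wait += (t - arrival[pick]) - burst[pick];
        }
    }
    return (double)total_wait / n;
}
```

On the four-process example above it reproduces the hand-computed schedule (P1 preempted at t=1, resumed at t=10) and the 6.5 msec average.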


3. Priority Scheduling
A priority number (integer) is associated with each process.
The CPU is allocated to the process with the highest priority (smallest integer = highest priority).
Priority scheduling can be:
 Preemptive
 Nonpreemptive
 SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time.
Problem: Starvation – low-priority processes may never execute.
Solution: Aging – as time progresses, increase the priority of the process.
Example of Priority Scheduling
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2
Priority scheduling Gantt chart:

| P2 |   P5   |     P1     | P3 | P4 |
0    1        6            16   18   19

Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 msec
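Nonpreemptive priority scheduling is just a sort by priority followed by the FCFS waiting-time sum (a sketch; prio_avg_wait is a name introduced here, all processes assumed to arrive at time 0, n <= 32):

```c
/* Nonpreemptive priority scheduling (smallest number = highest priority).
   Returns the average waiting time. */
double prio_avg_wait(const int burst[], const int prio[], int n) {
    int order[32];
    for (int i = 0; i < n; i++) order[i] = i;
    /* sort the indices by priority (simple selection-style sort) */
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (prio[order[j]] < prio[order[i]]) {
                int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
            }
    int t = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += t;                 /* waiting time = start time here */
        t += burst[order[i]];
    }
    return (double)total / n;
}
```

Fed the five-process table above, it runs P2, P5, P1, P3, P4 in that order and returns the 8.2 msec average.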


4. Round-Robin (RR) Scheduling
Each process gets a small unit of CPU time (a time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
A timer interrupts every quantum to schedule the next process.
Performance:
o q large: RR behaves like FIFO
o q small: q must still be large with respect to the context-switch time, otherwise the overhead is too high
Example of RR with Time Quantum = 4
Process   Burst Time
P1        24
P2        3
P3        3
The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Typically, RR gives higher average turnaround than SJF, but better response.
q should be large compared to the context-switch time: q is usually 10 ms to 100 ms, while a context switch takes < 10 usec.
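The quantum-by-quantum bookkeeping can be simulated directly (a sketch; rr_avg_wait is a name introduced here, all processes assumed to arrive at time 0, n <= 32):

```c
/* Round-robin average waiting time with quantum q.
   waiting time = completion time - burst time, since arrival is 0. */
double rr_avg_wait(const int burst[], int n, int q) {
    int rem[32], t = 0, total_wait = 0, left = n;
    for (int i = 0; i < n; i++) rem[i] = burst[i];
    while (left > 0) {
        for (int i = 0; i < n; i++) {       /* one pass over the ready queue */
            if (rem[i] == 0) continue;
            int run = rem[i] < q ? rem[i] : q;  /* at most one quantum */
            rem[i] -= run;
            t += run;
            if (rem[i] == 0) {
                left--;
                total_wait += t - burst[i];
            }
        }
    }
    return (double)total_wait / n;
}
```

For the example above (bursts 24, 3, 3 with q = 4), P2 finishes at 7 and P3 at 10, giving waits of 4, 7, and 6 and an average of 17/3, which is about 5.67 msec.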
Time Quantum and Context Switch Time

5. Multilevel Queue Scheduling

The ready queue is partitioned into separate queues, e.g.:
foreground (interactive)
background (batch)
A process is permanently assigned to a given queue.
Each queue has its own scheduling algorithm:
foreground – RR
background – FCFS
Scheduling must also be done between the queues:
Fixed-priority scheduling (i.e., serve all from foreground, then from background) carries a possibility of starvation.
Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS.

6. Multilevel Feedback Queue Scheduling


A process can move between the various queues; aging can be implemented this way
Multilevel-feedback-queue scheduler defined by the following parameters:
 number of queues
 scheduling algorithms for each queue
 method used to determine when to upgrade a process
 method used to determine when to demote a process
 method used to determine which queue a process will enter when that
process needs service

Example of Multilevel Feedback Queue


Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Scheduling:
A new job enters queue Q0, which is served in FCFS order.
 When it gains the CPU, the job receives 8 milliseconds.
 If it does not finish in 8 milliseconds, the job is moved to queue Q1.
At Q1 the job is again served FCFS and receives 16 additional milliseconds.
 If it still does not complete, it is preempted and moved to queue Q2.
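The demotion rule of this example can be captured in a tiny helper (a sketch; finishing_queue is a name introduced here, and it assumes the job runs alone, so it receives its full allotment in each queue):

```c
/* Given a total CPU burst in milliseconds, report the queue (0, 1, or 2)
   in which the job finally completes under the scheme above:
   up to 8 ms in Q0, 16 more ms in Q1, and the remainder in Q2 (FCFS). */
int finishing_queue(int burst) {
    if (burst <= 8)  return 0;   /* finishes within its first quantum */
    if (burst <= 24) return 1;   /* 8 ms in Q0 + up to 16 ms in Q1 */
    return 2;                    /* demoted twice; completes in Q2 */
}
```

This is how the scheme favors short, interactive bursts: only jobs needing more than 24 ms of CPU ever reach the FCFS queue at the bottom.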
