The document covers CPU scheduling, process management, and synchronization in operating systems. It discusses various scheduling algorithms such as First-Come, First-Served, Shortest Job First, and Round-Robin, along with their advantages and disadvantages. Additionally, it addresses the critical section problem and solutions for process synchronization to prevent race conditions and ensure mutual exclusion.

Unit-II

• CPU Scheduling –
• Scheduling Criteria,
• Scheduling Algorithms,
• Multiple-Processor Scheduling.
• Process Management and Synchronization –
• The Critical Section Problem,
• Synchronization Hardware and Software,
• Semaphores, and
• Classical Problems of Synchronization,
• Critical Regions,
• Monitors.
CPU Scheduling
• CPU scheduling is the process of deciding which process gets the
CPU while other processes wait.
• The main function of CPU scheduling is to ensure that whenever
the CPU becomes idle, the OS selects one of the processes
waiting in the ready queue.
Short Term Scheduler

• It is also called the CPU scheduler.

• Short-term schedulers, also known as dispatchers, decide which
process to execute next. Short-term schedulers are faster than
long-term schedulers.
• It performs the transition of a process from the ready state to the
running state.
• The CPU scheduler selects a process from among the processes that
are ready to execute and allocates the CPU to it.
• Its main objective is to increase system performance in
accordance with the chosen set of criteria.
Categories of Scheduling

• There are two categories of scheduling:


• Non-preemptive: Here the CPU cannot be taken from a process
until the process completes its execution.
• Switching occurs only when the running process terminates or
moves to the waiting state.
• Preemptive: Here the OS allocates the CPU to a process for a
fixed amount of time. The process may be switched from the running
state to the ready state, or from the waiting state to the ready state.
• This switching occurs because the CPU may give priority to other
processes and preempt the running process in favor of a higher-priority one.
Scheduling Criteria

• There are several different criteria to consider when trying to select the
"best" scheduling algorithm for a particular situation and environment,
including:
• CPU Utilization: Scheduling should be done in such a way that
the CPU is utilized to its maximum. If a scheduling algorithm does
not waste CPU cycles and keeps the CPU busy most of the time
(ideally 100% of the time), it can be considered good.
• Throughput: Throughput by definition is the total number of processes
that are completed (executed) per unit of time or, in simpler terms, it is
the total work done by the CPU in a unit of time. Now, of course, an
algorithm must work to maximize throughput.
• Turnaround time - The time required for a particular process to
complete, from submission time to completion time (wall-clock time).
In other words, the total amount of time the process spends in the
system is called turnaround time.

• Waiting time - The amount of time a process spends in the ready
queue waiting for its turn to get on the CPU.

• Response time - The time from when a process arrives in the
system until it is scheduled on the CPU for the first time.
Scheduling Algorithms

CPU scheduling deals with the problem of deciding which of the


processes in the ready queue is to be allocated the CPU.
There are different CPU-scheduling algorithms
• First-come, first-served scheduling (FCFS) algorithm
• Shortest Job First Scheduling (SJF) algorithm
• Priority Scheduling algorithm
• Round-Robin Scheduling algorithm
• Multilevel Queue Scheduling algorithm
First-come, first-served scheduling (FCFS) algorithm

• FCFS is considered the simplest CPU-scheduling algorithm.


• The first come first serve scheduling algorithm states that the process
that requests the CPU first is allocated the CPU first.
• It is implemented by using the FIFO queue.
• FCFS is a non-preemptive scheduling algorithm.
• FCFS is easy to implement and use.
• Arrival time (AT) − Arrival time is the time at which the process
arrives in ready queue.
• Burst time (BT) or CPU time of the process − Burst time is the
amount of CPU time a process needs to complete its execution.
• Completion time (CT) − Completion time is the time at which
the process has been terminated.
• Turn-around time (TAT) − The total time from arrival time to
completion time is known as turn-around time. TAT can be
written as,
• Turn-around time (TAT) = Completion time (CT) – Arrival
time (AT)
• Waiting time (WT) − Waiting time is the total time a process
spends waiting in the ready queue while other processes hold the
CPU. WT is written as,
• Waiting time (WT) = Turn-around time (TAT) – Burst time (BT)

• Gantt chart − A Gantt chart is a visualization that helps in
scheduling and managing particular tasks in a project.
• It is used while solving scheduling problems, to show how the
processes are allocated to the CPU under different algorithms.
Processes    Arrival Time    Burst Time
P1           0               4
P2           1               3
P3           2               1
P4           3               2
P5           4               5
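The table above can be worked through with a short simulation. The following sketch (not from the slides; process data taken from the table) computes CT, TAT, and WT for each process under FCFS:

```python
def fcfs(processes):
    """Simulate FCFS over (name, arrival, burst) tuples.
    Returns {name: (CT, TAT, WT)}."""
    # FIFO order of the ready queue: sort by arrival time.
    order = sorted(processes, key=lambda p: p[1])
    time, result = 0, {}
    for name, arrival, burst in order:
        time = max(time, arrival)       # CPU may sit idle until arrival
        completion = time + burst
        tat = completion - arrival      # TAT = CT - AT
        wt = tat - burst                # WT  = TAT - BT
        result[name] = (completion, tat, wt)
        time = completion
    return result

table = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1), ("P4", 3, 2), ("P5", 4, 5)]
res_fcfs = fcfs(table)
```

For this workload the waiting times are 0, 3, 5, 5, and 6, so the average waiting time is 19 / 5 = 3.8 time units.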
• Advantages Of FCFS Scheduling

• It is an easy algorithm to implement since it involves no
complex logic.
• Tasks are executed strictly in arrival order, as it follows a FIFO
queue.
• FCFS does not give priority to any particular task, so it is a fair
scheduling algorithm.
• Disadvantages Of FCFS Scheduling
• FCFS suffers from the convoy effect: if a process with a long
burst time arrives first in the ready queue, processes with shorter
burst times get stuck behind it and may wait a very long time to
get the CPU.
• Because a long job at the head of the queue makes all shorter jobs
wait, FCFS is not well suited to time-sharing systems.
• Since it is non-preemptive, a process does not release the CPU
until it completes its execution entirely.
Shortest Job First Scheduling
(SJF) algorithm:
• A different approach to CPU scheduling is the Shortest-Job-First where the
scheduling of a job or a process is done on the basis of its having shortest
next CPU burst (execution time).
• When the CPU is available, it is assigned to the process that has the smallest
next CPU burst.
• If two processes have the same length next CPU burst, FCFS scheduling is
used to break the tie.
• SJF is optimal – gives minimum average waiting time for a given set of
processes.
• The SJF algorithm can be either preemptive or non-preemptive.
• Non-preemptive – once the CPU is given to a process, it cannot
be preempted until it completes its CPU burst. This scheme is
known as the shortest-next-CPU-burst algorithm.

• Preemptive – if a new process arrives with a CPU burst length
less than the remaining time of the currently executing process,
the current process is preempted. This scheme is known as
Shortest-Remaining-Time-First (SRTF).
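Using the same table as in the FCFS example, non-preemptive SJF can be sketched as follows (a minimal simulation, not from the slides):

```python
def sjf(processes):
    """Non-preemptive SJF over (name, arrival, burst) tuples.
    Returns {name: (CT, TAT, WT)}."""
    pending = list(processes)
    time, result = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                       # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        # shortest next CPU burst wins; FCFS (earlier arrival) breaks ties
        job = min(ready, key=lambda p: (p[2], p[1]))
        name, arrival, burst = job
        time += burst                       # runs to completion: no preemption
        result[name] = (time, time - arrival, time - arrival - burst)
        pending.remove(job)
    return result

table = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1), ("P4", 3, 2), ("P5", 4, 5)]
res_sjf = sjf(table)
```

Here SJF reduces the average waiting time to (0 + 6 + 2 + 2 + 6) / 5 = 3.2, compared with 3.8 under FCFS for the same workload, illustrating why SJF is optimal for average waiting time.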
Shortest Remaining Time

• Shortest remaining time (SRT) is the preemptive version of the
SJF (also called shortest-job-next, SJN) algorithm.
• The processor is allocated to the job closest to completion but it
can be preempted by a newer ready job with shorter time to
completion.
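The preemptive behavior can be sketched by simulating one time unit at a time (a hypothetical helper, not from the slides; it reuses the process table from the FCFS example):

```python
def srtf(processes):
    """Preemptive SJF (SRTF) over (name, arrival, burst) tuples,
    simulated one time unit at a time. Returns {name: (CT, TAT)}."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: at for name, at, _ in processes}
    time, completion = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                      # nothing has arrived yet
            time += 1
            continue
        # least remaining time wins; earlier arrival breaks ties
        run = min(ready, key=lambda n: (remaining[n], arrival[n]))
        remaining[run] -= 1                # run for one time unit
        time += 1
        if remaining[run] == 0:
            completion[run] = time
            del remaining[run]
    return {n: (ct, ct - arrival[n]) for n, ct in completion.items()}

table = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1), ("P4", 3, 2), ("P5", 4, 5)]
res_srtf = srtf(table)
```

Note how P3 (burst 1) preempts P1 at t = 2 and finishes at t = 3, even though P1 started first.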
Priority Scheduling algorithm:

• The SJF algorithm is a special case of the general priority scheduling


algorithm.
• A priority is associated with each process, and the CPU is allocated to the
process with the highest priority.
• Equal-priority processes are scheduled in FCFS order.
• Note that we discuss scheduling in terms of high priority and low priority.
• Priorities are generally indicated by some fixed range of numbers, such as
0 to 7 or 0 to 4,095. However, there is no general agreement on whether 0
is the highest or lowest priority.
• Some systems use low numbers to represent low priority; others use low
numbers for high priority. This difference can lead to confusion. We
assume that high numbers represent high priority.
• Priorities can be defined either internally or externally.
• Internally: priorities use some measurable quantity or
quantities to compute the priority of a process. For example,
time limits, memory requirements, the number of open files.
• Externally: priorities are set by criteria outside the operating
system, such as the importance of the process, the type and
amount of funds being paid for computer use.
• Priority scheduling can be either preemptive or non
preemptive.
• When a process arrives at the ready queue, its priority is
compared with the priority of the currently running process.
• A preemptive priority scheduling algorithm will preempt the
CPU if the priority of the newly arrived process is higher
than the priority of the currently running process.
• A non preemptive priority scheduling algorithm will simply
put the new process at the head of the ready queue.
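A non-preemptive priority scheduler can be sketched as below (a minimal simulation with made-up example data, not from the slides; it follows the text's assumption that higher numbers represent higher priority):

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling over (name, arrival, burst,
    priority) tuples; a HIGHER number means higher priority.
    Returns {name: (CT, TAT)}."""
    pending = list(processes)
    time, result = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                      # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        # highest priority wins; FCFS (earlier arrival) breaks ties
        job = max(ready, key=lambda p: (p[3], -p[1]))
        name, arrival, burst, _ = job
        time += burst
        result[name] = (time, time - arrival)
        pending.remove(job)
    return result

jobs = [("P1", 0, 4, 1), ("P2", 1, 3, 3), ("P3", 2, 1, 2)]
res_prio = priority_schedule(jobs)
```

P1 runs first because it is alone at t = 0; once it finishes, P2 (priority 3) is chosen ahead of P3 (priority 2) even though P3 is shorter.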
Starvation:
• Starvation is a resource management problem where a process does
not get the resources it needs for a long time because the resources are
being allocated to other processes.
• (In simple words, low-priority processes may never execute.)

Aging:
• Aging is a technique to avoid starvation in a scheduling system.
• It works by adding an aging factor to the priority of each request. The
aging factor must increase the request's priority as time passes and
must ensure that a request will eventually become the highest-priority
request (after it has waited long enough).
• (In simple words, as time progresses increase the priority of the
process.)
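The aging idea above can be sketched in a few lines (an illustrative toy, not from the slides; the boost value and priority levels are made up):

```python
def age(ready_queue, boost=1):
    """One aging tick: every waiting request gains `boost` priority.
    ready_queue holds [name, priority] pairs (higher = better)."""
    for entry in ready_queue:
        entry[1] += boost

# A starved request with priority 1 eventually outranks any fixed
# high-priority level (10 here) after enough ticks.
queue = [["starved", 1]]
HIGH = 10
ticks = 0
while queue[0][1] < HIGH:
    age(queue)
    ticks += 1
```

After nine ticks the starved request's priority has climbed from 1 to 10, so it can no longer be bypassed by newly arriving priority-10 work.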
Round-Robin Scheduling
algorithm:
• The round-robin (RR) scheduling algorithm is designed especially for
timesharing systems.
• It is similar to FCFS scheduling, but preemption is added to enable the
system to switch between processes. A small unit of time, called a time
quantum or time slice, is defined.
• A time quantum is generally from 10 to 100 milliseconds in length. The
ready queue is treated as a circular queue.
• The CPU scheduler goes around the ready queue, allocating the CPU to
each process for a time interval of up to 1 time quantum.
• To implement RR scheduling, we keep the ready queue as a FIFO queue of
processes. New processes are added to the tail of the ready queue. The CPU
scheduler picks the first process from the ready queue, sets a timer to
interrupt after 1 time quantum, and dispatches the process.
• One of two things will then happen-

• The process may have a CPU burst of less than 1 time quantum. In this
case, the process itself will release the CPU voluntarily. The scheduler
will then proceed to the next process in the ready queue.
• Otherwise, if the CPU burst of the currently running process is longer
than 1 time quantum, the timer will go off and will cause an interrupt
to the operating system.
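The two cases above can be sketched with a circular FIFO queue (a minimal simulation, not from the slides; the process data and quantum are made up, and all processes are assumed to arrive at time 0):

```python
from collections import deque

def round_robin(processes, quantum):
    """RR with a FIFO ready queue over (name, burst) pairs.
    Returns {name: completion_time}."""
    queue = deque(processes)
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)      # at most one time quantum
        time += run
        remaining -= run
        if remaining:                      # timer interrupt: back to the tail
            queue.append((name, remaining))
        else:                              # burst finished within the quantum
            completion[name] = time
    return completion

res_rr = round_robin([("P1", 4), ("P2", 3), ("P3", 1)], quantum=2)
```

With quantum 2, P1 and P2 are each preempted once, while P3 releases the CPU voluntarily after its 1-unit burst.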
Multilevel Queue Scheduling
• A multilevel queue scheduling algorithm partitions the ready queue
into several separate queues (Figure).
• The processes are permanently assigned to one queue, generally
based on some property of the process, such as memory size, process
priority, or process type.
• Each queue has its own scheduling algorithm. For example, separate
queues might be used for foreground and background processes.
• The foreground queue might be scheduled by an RR algorithm, while
the background queue is scheduled by an FCFS algorithm.
• In addition, there must be scheduling among the queues, which is
commonly implemented as fixed-priority preemptive scheduling.
Multilevel Feedback Queue
Scheduling
• The multilevel feedback queue scheduling algorithm, in contrast,
allows a process to move between queues. The idea is to separate
processes according to the characteristics of their CPU bursts.
• If a process uses too much CPU time, it will be moved to a
lower-priority queue. This scheme leaves I/O-bound and interactive
processes in the higher-priority queues. In addition, a process that
waits too long in a lower-priority queue may be moved to a
higher-priority queue. This form of aging prevents starvation.
Introduction of Process Synchronization

• Processes are categorized as one of the following two types:


• Independent Process: The execution of one process does not
affect the execution of other processes.
• Cooperative Process: A process that can affect or be
affected by other processes executing in the system.
• The process synchronization problem arises with cooperative
processes, because resources are shared among them.
• Race Condition
• When more than one process executes the same code or accesses the
same memory or shared variable at the same time, the output or the
value of the shared variable may be wrong; because the processes
effectively "race" to produce the final result, this situation is
known as a race condition.
• A race condition is a situation that may occur inside a critical
section. It happens when the result of multiple threads executing
in the critical section differs according to the order in which
the threads execute.
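The classic "lost update" race can be replayed deterministically by writing out one bad interleaving by hand (an illustrative sketch, not from the slides; `p1_local` and `p2_local` stand for each process's private copy of the shared variable):

```python
# Deterministic replay of a lost update: both processes read the
# shared counter before either one writes its increment back.
counter = 0

p1_local = counter        # P1 reads 0
p2_local = counter        # context switch: P2 also reads 0

counter = p1_local + 1    # P1 writes back 1
counter = p2_local + 1    # P2 writes back 1 -- P1's increment is lost
```

Two increments were performed, but the counter ends at 1 instead of 2; the final value depends entirely on the interleaving order.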
What is Process Synchronization?

• Process Synchronization is the task of coordinating the execution of
processes in such a way that no two processes can access the same
shared data and resources simultaneously.
• It is especially needed in a multi-process system when multiple
processes are running together and more than one process tries to
gain access to the same shared resource or data at the same time.
• This can lead to inconsistency of shared data: a change made by one
process is not necessarily reflected when other processes access the
same shared data.
• To avoid this kind of data inconsistency, the processes need to
be synchronized with each other.
The Critical Section Problem

• Consider a system consisting of n processes {P0, P1, ..., Pn−1}.


• Each process has a segment of code, called a critical section, in
which the process may be changing common variables, updating
a table, writing a file, and so on.
• The important feature of the system is that, when one process is
executing in its critical section, no other process is allowed to
execute in its critical section.
The Critical Section Problem

• That is, no two processes are executing in their critical sections at


the same time.
• The critical-section problem is to design a protocol that the
processes can use to cooperate.
• Each process must request permission to enter its critical section.
The Critical Section Problem

• The section of code implementing this


request is the entry section.
• The critical section may be followed by an
exit section.
• The remaining code is the remainder section.
• The entry section and exit section are
enclosed in boxes to highlight these
important segments of code.
The Critical Section Problem

A solution to the critical-section problem must satisfy the following three


requirements:
1. Mutual exclusion: If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections.
2. Progress: If no process is executing in its critical section and some processes
wish to enter their critical sections, then only those processes that are not
executing in their remainder sections can participate in deciding which will
enter its critical section next, and this selection cannot be postponed
indefinitely.
The Critical Section Problem

3. Bounded waiting: There exists a bound, or limit, on the number of times


that other processes are allowed to enter their critical sections after a process
has made a request to enter its critical section and before that request is
granted.
Peterson’s Solution
• A classic software-based solution to the critical-section problem
known as Peterson’s solution.
• It may not work correctly on modern computer architectures, because
compilers and processors may reorder memory operations.
• However, it provides a good algorithmic description of solving
the critical-section problem and illustrates some of the
complexities involved in designing software that addresses the
requirements of mutual exclusion, progress, and bounded waiting.
• Peterson’s solution is restricted to two processes that alternate
execution between their critical sections and remainder sections.
• Let’s call the processes Pi and Pj.
• Peterson’s solution requires two data items to be shared between
the two processes.
• int turn;
• boolean flag[2];
• int turn − the variable turn indicates whose turn it is to enter its
critical section.
• boolean flag[2] − the flag array is used to indicate if a process is
ready to enter its critical section.
Process Pj in Peterson’s solution

do {
    flag[j] = true;
    turn = i;
    while (flag[i] && turn == i)
        ;   /* busy wait */

    /* critical section */

    flag[j] = false;

    /* remainder section */
} while (true);
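Mutual exclusion under Peterson's algorithm can be checked mechanically by modeling each statement as an atomic step and trying every possible interleaving of the two processes (a small model-checking sketch, not part of the slides; step counts and state names are choices of this sketch):

```python
from itertools import product

def peterson_step(state, p):
    """Advance process p by one atomic step of Peterson's algorithm;
    a process stuck at the busy-wait simply stays at that step."""
    other = 1 - p
    pc = state["pc"]
    if pc[p] == 0:            # flag[p] = true
        state["flag"][p] = True
        pc[p] = 1
    elif pc[p] == 1:          # turn = other
        state["turn"] = other
        pc[p] = 2
    elif pc[p] == 2:          # while (flag[other] && turn == other) ;
        if not (state["flag"][other] and state["turn"] == other):
            pc[p] = 3         # wait condition false: may enter
    elif pc[p] == 3:          # enter the critical section
        state["in_cs"][p] = True
        pc[p] = 4
    elif pc[p] == 4:          # leave: flag[p] = false
        state["in_cs"][p] = False
        state["flag"][p] = False
        pc[p] = 5             # finished

def mutual_exclusion_holds():
    """Try every 14-step schedule of the two processes; both must
    never be inside the critical section at the same time."""
    for schedule in product([0, 1], repeat=14):
        state = {"pc": [0, 0], "flag": [False, False],
                 "turn": 0, "in_cs": [False, False]}
        for p in schedule:
            peterson_step(state, p)
            if state["in_cs"][0] and state["in_cs"][1]:
                return False  # mutual exclusion violated
    return True
```

Because every interleaving of atomic steps is explored, a True result means no schedule of this length can put both processes in the critical section together.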
Synchronization Hardware in OS
• Hardware locks are used to solve the problem of process
synchronization.
• The process synchronization problem occurs when more than one
process tries to access the same resource or variable.
• If more than one process tries to update a variable at the same time,
a data inconsistency problem can occur. These hardware-supported
solutions are also called synchronization hardware in the
operating system.
• The process synchronization problem can be solved by software as well
as hardware solutions. Peterson's solution is one of the software
solutions to the process synchronization problem: it allows two
processes to share a single-use resource without conflict. Below we
discuss the hardware solutions to the problem, which are as follows:
• 1. Test and Set
2. Swap
3. Unlock and Lock
Test and Set
boolean lock = false;

boolean TestAndSet(boolean &target) {
    boolean returnValue = target;   // remember the old value
    target = true;                  // set the lock
    return returnValue;
}

while (1) {
    while (TestAndSet(lock))
        ;   // busy wait until the lock was free

    /* CRITICAL SECTION CODE */
    lock = false;
    /* REMAINDER SECTION CODE */
}
Swap
boolean lock = false;
boolean key = false;

void swap(boolean &a, boolean &b) {
    boolean temp = a;
    a = b;
    b = temp;
}

while (1) {
    key = true;
    while (key)
        swap(lock, key);   // spin until the lock was free

    /* CRITICAL SECTION CODE */
    lock = false;
    /* REMAINDER SECTION CODE */
}
Unlock and lock
boolean lock = false;
boolean key = false;
boolean waiting[n];   // one entry per process

boolean TestAndSet(boolean &target) {
    boolean returnValue = target;
    target = true;
    return returnValue;
}

while (1) {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = TestAndSet(lock);
    waiting[i] = false;

    /* CRITICAL SECTION CODE */

    j = (i + 1) % n;
    while (j != i && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;        // no process is waiting: release the lock
    else
        waiting[j] = false;  // pass the lock directly to process j

    /* REMAINDER SECTION CODE */
}
Semaphore
• A semaphore is a software-based solution for process
synchronization.
• The semaphore, proposed by Edsger Dijkstra, is a technique to manage
concurrent processes by using a single integer value.
• A semaphore is simply a variable that holds a non-negative value and
is shared between threads.
• This variable is used to solve the critical section problem and to
achieve process synchronization in a multiprocessing environment.
Semaphore
• A semaphore is implemented using two procedures:
• 1. wait()
• 2. signal()
• Both of these procedures are atomic.
Wait()
• It controls whether a process may enter the critical section.
• If the semaphore value is greater than or equal to 1, the process
may enter the critical section.
• On entry, the semaphore value is automatically decremented.
• If the semaphore value is equal to zero, some other process is in
the critical section.
• So no other process is allowed into the critical section.
wait():
wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}
Signal():
signal(S) {
    S++;
}
• After a process completes execution in its critical section, the
semaphore value is immediately incremented.
Types of Semaphores:
• Two types of semaphores
• 1.Binary Semaphore
• 2.Counting Semaphore
• Binary Semaphore:
• This is also known as a mutex lock.
• It can have only two values – 0 and 1.
• Counting Semaphore:
• Its value can be any non-negative integer.
• It allows efficient use of multiple instances of a resource.
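A counting semaphore can be exercised with Python's `threading.Semaphore` (an assumption of this sketch; the slides use abstract wait()/signal(), which correspond to `acquire()` and `release()` here). A semaphore initialized to 2 admits at most two threads at a time:

```python
import threading

sem = threading.Semaphore(2)     # counting semaphore, initial value 2
count_lock = threading.Lock()    # protects the bookkeeping counters
active = 0                       # threads currently inside
peak = 0                         # highest concurrency observed

def worker():
    global active, peak
    sem.acquire()                # wait(): blocks when the count is 0
    with count_lock:
        active += 1
        peak = max(peak, active)
    # ... use one instance of the resource ...
    with count_lock:
        active -= 1
    sem.release()                # signal(): increments the count

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Five threads contend, but the recorded peak concurrency never exceeds the semaphore's initial value of 2.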
Classical Problems of Synchronization
• Bounded-Buffer Problem

• Readers and Writers Problem

• Dining-Philosophers Problem
Bounded-Buffer Problem

• The bounded-buffer problem is also called the producer-consumer
problem; it is usually stated in that generalized form.
• The solution to this problem is to create two counting semaphores,
"full" and "empty", to keep track of the current number of full and
empty buffer slots respectively.
• Producers produce items and consumers consume them, but both
operate on the same fixed-size buffer.
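The "full"/"empty" counting-semaphore solution described above can be sketched with Python threads (an illustrative sketch, not from the slides; the capacity of 3 and the 10 items are made up, and a mutex protects the buffer itself):

```python
import threading
from collections import deque

buffer = deque()
CAPACITY = 3
empty = threading.Semaphore(CAPACITY)  # counts empty slots, starts full
full = threading.Semaphore(0)          # counts filled slots, starts at 0
mutex = threading.Lock()               # protects the buffer structure
consumed = []

def producer():
    for item in range(10):
        empty.acquire()        # wait(empty): block if no free slot
        with mutex:
            buffer.append(item)
        full.release()         # signal(full): one more item available

def consumer():
    for _ in range(10):
        full.acquire()         # wait(full): block if nothing to consume
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()        # signal(empty): one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

With a single producer and a single consumer sharing a FIFO buffer, every item is consumed exactly once and in order, and the buffer never holds more than 3 items.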
Readers and Writers Problem

• The readers-writers problem is a classical problem of process


synchronization, it relates to a data set such as a file that is shared
between more than one process at a time.
• Among these various processes, some are Readers, which can only
read the data set and do not perform any updates, and some are
Writers, which can both read and write the data set.
• The readers-writers problem is used for managing synchronization
among various reader and writer processes so that there are no
problems with the data set, i.e. no inconsistency is generated.
• If two or more readers want to access the file at the same point
in time, there is no problem.
• However, when two writers, or one reader and one writer, want to
access the file at the same point in time, problems may occur.
• Hence the task is to design the code in such a manner that if one
reader is reading, no writer is allowed to update the file at the
same point in time.
• Similarly, if one writer is writing, no reader is allowed to read
the file at that point in time, and if one writer is updating a
file, other writers should not be allowed to update it at the same
point in time.
• However, multiple readers can access the object at the same time.
Solution to the first readers–writers problem
• The solution for readers and writers can be implemented using
semaphores.
• We will make use of two semaphores and one integer variable:
1. mutex, a semaphore (initialized to 1) used to ensure mutual
exclusion when readcount is updated, i.e. when any reader enters or
exits the critical section.
2. wrt, a semaphore (initialized to 1) common to both reader and writer
processes.
3. readcount, an integer (initialized to 0) that keeps track of how
many processes are currently reading the data.
Readers and Writers Problem:
 The structure of a writer process
do {
    wait(wrt);
    // writing is performed
    signal(wrt);
} while (TRUE);
Readers and Writers Problem
 The structure of a reader process
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);      // first reader locks out writers
    signal(mutex);

    // reading is performed

    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);    // last reader readmits writers
    signal(mutex);
} while (TRUE);
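The same first readers-writers solution translates almost line for line to Python threads (a sketch, not from the slides; the thread counts and the shared `log` list used to observe the accesses are choices of this example):

```python
import threading

mutex = threading.Semaphore(1)   # protects read_count
wrt = threading.Semaphore(1)     # held by a writer, or by the reader group
read_count = 0
log = []                         # records completed accesses

def reader(i):
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()

    log.append(("read", i))      # reading is performed

    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        wrt.release()            # last reader readmits writers
    mutex.release()

def writer(i):
    wrt.acquire()
    log.append(("write", i))     # writing is performed, exclusively
    wrt.release()

threads = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=(0,)))
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All three reads and the one write complete: readers may overlap with each other, but the writer only runs while `wrt` is free, i.e. while no reader holds it.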
Dining Philosopher Problem
Using Semaphores
• The dining philosophers problem states that K philosophers are seated
around a circular table with one chopstick between each pair of
philosophers.
• A philosopher may eat if he can pick up the two chopsticks adjacent
to him.
• Each chopstick may be picked up by either of its adjacent
philosophers, but not by both at once.
• There are three states of a philosopher: THINKING, HUNGRY, and
EATING. Here there are two semaphore constructs: a mutex and a
semaphore array for the philosophers. The mutex ensures that no two
philosophers pick up or put down chopsticks at the same time, and
the array is used to control the behavior of each philosopher. But
semaphores can result in deadlock due to programming errors.
process P[i]:
    while (true) {
        THINK;
        PICKUP(CHOPSTICK[i], CHOPSTICK[(i + 1) mod 5]);
        EAT;
        PUTDOWN(CHOPSTICK[i], CHOPSTICK[(i + 1) mod 5]);
    }
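The pseudocode above can deadlock if every philosopher grabs the left chopstick simultaneously. A common fix, sketched here with Python semaphores (the asymmetric pickup order is a standard remedy, not part of the slides' pseudocode), has the last philosopher pick up the chopsticks in the opposite order, breaking the circular wait:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # one per gap
ate = [False] * N

def philosopher(i):
    # Asymmetric order for the last philosopher breaks the circular
    # wait, so the five threads can never deadlock.
    first, second = i, (i + 1) % N
    if i == N - 1:
        first, second = second, first
    chopstick[first].acquire()     # PICKUP first chopstick
    chopstick[second].acquire()    # PICKUP second chopstick
    ate[i] = True                  # EAT
    chopstick[second].release()    # PUTDOWN
    chopstick[first].release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All five philosophers eventually eat; with the naive symmetric order this run could instead hang forever with each thread holding one chopstick.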
