Unit II
• CPU Scheduling –
• Scheduling Criteria,
• Scheduling Algorithms,
• Multiple-Processor Scheduling.
• Process Management and Synchronization –
• The Critical Section Problem,
• Synchronization Hardware and Software,
• Semaphores, and
• Classical Problems of Synchronization,
• Critical Regions,
• Monitors.
CPU Scheduling
• CPU scheduling is the process of deciding which process will
own the CPU while another process is suspended.
• The main goal of CPU scheduling is to ensure that whenever the
CPU would otherwise sit idle, the OS selects one of the processes
waiting in the ready queue to run.
Short Term Scheduler
• There are several different criteria to consider when trying to select the
"best" scheduling algorithm for a particular situation and environment,
including:
• CPU Utilization: Scheduling should keep the CPU as busy as
possible. A scheduling algorithm that wastes no CPU cycles and keeps
the CPU working most of the time (ideally 100% of the time) can be
considered good.
• Throughput: Throughput by definition is the total number of processes
that are completed (executed) per unit of time or, in simpler terms, it is
the total work done by the CPU in a unit of time. Now, of course, an
algorithm must work to maximize throughput.
• Turnaround time – the time required for a particular process to
complete, from submission time to completion (wall-clock time).
In other words, the total amount of time the process spends in the
system is called turnaround time.
• Response time – the time from a process's arrival in the system
until it is scheduled for the first time.
Scheduling Algorithms
P1 0 4
P2 1 3
P3 2 1
P4 3 2
P5 4 5
• Advantages of FCFS Scheduling:
• It is simple to understand and implement.
• Processes get the CPU strictly in arrival order, so every process
eventually runs.
Aging:
• Aging is a technique to avoid starvation in a scheduling system.
• It works by adding an aging factor to the priority of each request. The
aging factor must increase the request's priority as time passes and
must ensure that a request eventually becomes the highest-priority
request (after it has waited long enough).
• (In simple words, as time progresses increase the priority of the
process.)
Round-Robin Scheduling
algorithm:
• The round-robin (RR) scheduling algorithm is designed especially for
timesharing systems.
• It is similar to FCFS scheduling, but preemption is added to enable the
system to switch between processes. A small unit of time, called a time
quantum or time slice, is defined.
• A time quantum is generally from 10 to 100 milliseconds in length. The
ready queue is treated as a circular queue.
• The CPU scheduler goes around the ready queue, allocating the CPU to
each process for a time interval of up to 1 time quantum.
• To implement RR scheduling, we keep the ready queue as a FIFO queue of
processes. New processes are added to the tail of the ready queue. The CPU
scheduler picks the first process from the ready queue, sets a timer to
interrupt after 1 time quantum, and dispatches the process.
• One of two things will then happen-
• The process may have a CPU burst of less than 1 time quantum. In this
case, the process itself will release the CPU voluntarily. The scheduler
will then proceed to the next process in the ready queue.
• Otherwise, if the CPU burst of the currently running process is longer
than 1 time quantum, the timer will go off and will cause an interrupt
to the operating system.
Multilevel Queue Scheduling
• A multilevel queue scheduling algorithm partitions the ready queue
into several separate queues (Figure).
• The processes are permanently assigned to one queue, generally
based on some property of the process, such as memory size, process
priority, or process type.
• Each queue has its own scheduling algorithm. For example, separate
queues might be used for foreground and background processes.
• The foreground queue might be scheduled by an RR algorithm, while
the background queue is scheduled by an FCFS algorithm.
• In addition, there must be scheduling among the queues, which is
commonly implemented as fixed-priority preemptive scheduling.
Multilevel Feedback Queue
Scheduling
• The multilevel feedback queue scheduling algorithm, in contrast,
allows a process to move between queues. The idea is to separate
processes according to the characteristics of their CPU bursts.
• If a process uses too much CPU time, it will be moved to a lower-
priority queue. This scheme leaves I/O-bound and interactive
processes in the higher-priority queues. In addition, a process that
waits too long in a lower-priority queue may be moved to a higher-
priority queue. This form of aging prevents starvation.
Introduction to Process Synchronization
• Peterson's solution – the structure of process Pi (j denotes the
other process):
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;              /* busy wait */
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
Synchronization Hardware in OS
• Hardware locks are used to solve the problem of process
synchronization.
• The process synchronization problem occurs when more than one
process tries to access the same resource or variable.
• If more than one process tries to update a variable at the same time,
a data inconsistency problem can occur. Hardware-based solutions
to this problem are collectively called synchronization hardware.
• The process synchronization problem can be solved by software as well
as hardware. Peterson's solution is one of the software solutions:
it allows two or more processes to share a single-use resource
without conflict. Here we discuss the hardware solutions:
• 1. Test and Set
2. Swap
3. Unlock and Lock
Test and Set
• Shared variable: boolean lock = false; (true while some process
holds the lock)
Unlock and Lock
• The Unlock and Lock solution combines TestAndSet with a waiting[]
array so that every waiting process eventually enters (bounded waiting):
• boolean lock = false;
• boolean waiting[n];   // all initialized to false
• boolean key = false;
• while(1){
• waiting[i] = true;
• key = true;
• while(waiting[i] && key){
• key = TestAndSet(lock);
• }
• waiting[i] = false;
• CRITICAL SECTION CODE
• j = (i+1) % n;
• while(j != i && !waiting[j])
• j = (j+1) % n;
• if(j == i)
• lock = false;
• else
• waiting[j] = false;
• }
Semaphore
• Semaphore is a software-based solution for Process
Synchronization.
• The semaphore, proposed by Edsger Dijkstra, is a technique to manage
concurrent processes by using a simple integer value.
• A semaphore is simply a variable that holds a non-negative value and
is shared between threads.
• This variable is used to solve the critical section problem and to
achieve process synchronization in a multiprocessing environment.
Semaphore
• Semaphore is implemented by using 2 procedures
• 1.wait()
• 2.signal()
• Both procedures are atomic.
Wait()
• wait() decides whether a process may enter its critical section.
• If the semaphore value is greater than or equal to 1, the process
may enter the critical section.
• When a process enters, the semaphore value is decremented.
• If the semaphore value is zero, some other process is in the
critical section.
• So no other process is allowed into the critical section.
Signal():
• As soon as a process finishes executing its critical section, the
semaphore value is incremented.
Types of Semaphores:
• Two types of semaphores
• 1.Binary Semaphore
• 2.Counting Semaphore
• Binary Semaphore:
• This is also known as a mutex lock.
• It can have only two values – 0 and 1.
• Counting Semaphore:
• Its value is a non-negative integer.
• It allows efficient use of multiple instances of a resource.
Classical Problems of Synchronization
• Bounded-Buffer Problem
• Dining-Philosophers Problem
Bounded-Buffer Problem