UNIT - II
Threads – Overview – Threading issues - CPU Scheduling – Basic Concepts – Scheduling Criteria – Scheduling
Algorithms – Multiple-Processor Scheduling – Real Time Scheduling - The Critical-Section Problem –
Synchronization Hardware – Semaphores – Classic problems of Synchronization – Critical regions – Monitors.
PART-A
A shared variable v of type T is declared as:
v: shared T;
➢ The variable v can be accessed only inside a region statement of the following form:
region v when B do S;
➢ This construct means that, while statement S is being executed, no other process can
access the variable v; the Boolean guard B must be true before S is allowed to run.
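As a rough sketch (not from the notes), a critical region can be emulated with a pthreads mutex and condition variable; the names mtx, cond, and the guard function B_holds() are illustrative assumptions:
#include <pthread.h>

int v;                               /* the shared variable */
pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

static int B_holds(void) { return v > 0; }   /* illustrative guard B */

void region_v(void)
{
    pthread_mutex_lock(&mtx);        /* enter: exclusive access to v */
    while (!B_holds())               /* wait until the guard B is true */
        pthread_cond_wait(&cond, &mtx);
    v--;                             /* S: statement run with exclusive access */
    pthread_cond_broadcast(&cond);   /* v changed: let others re-check guards */
    pthread_mutex_unlock(&mtx);
}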
• Increased Complexity - Multithreaded processes are quite complicated; coding for these
can only be handled by expert programmers.
• Difficult to Identify Errors - Identification and correction of errors is much more difficult
in multithreaded processes as compared to single-threaded processes.
➢ User-level threads are faster to create and manage.
➢ Kernel-level threads are slower to create and manage.
Semaphores are integer variables that are used to solve the critical section problem by
means of two atomic operations, wait and signal, that are used for process synchronization.
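For illustration only (not part of the notes), the POSIX semaphore API expresses the same two operations, with sem_wait playing the role of wait/P and sem_post the role of signal/V:
#include <semaphore.h>

sem_t s;

void demo(void)
{
    sem_init(&s, 0, 1);   /* binary semaphore, initial value 1 */
    sem_wait(&s);         /* wait (P): decrement, block if it would go negative */
    /* critical section */
    sem_post(&s);         /* signal (V): increment, wake one waiting process */
}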
A thread is a path of execution within a process. A process can contain multiple threads.
A thread is also known as lightweight process.
Types of Thread:
➢ User-level Thread
➢ Kernel-level Thread
(OR)
• When CPU switches to another process, the system must save the state of the old process
and load the saved state for the new process.
• Context-switch time is overhead; the system does no useful work while switching.
• Context-switch time depends on hardware support.
10.What is Kernel? (NOV-2016)
The kernel is the core component of an operating system; it manages the system's
resources and acts as a bridge between application programs and the hardware.
A solution to the critical section problem must satisfy the following three conditions:
1. Mutual Exclusion: Only one process can execute in its critical section at a time.
2. Progress: If no process is in its critical section, and if one or more threads want to
execute their critical section, then any one of these threads must be allowed to get into its critical
section.
3. Bounded Waiting: After a process makes a request for getting into its critical section,
there is a limit on how many other processes can get into their critical sections before this process's
request is granted. After the limit is reached, the system must grant the process permission to get
into its critical section.
Deadlock:
Deadlock occurs when each process holds a resource and waits for another resource held
by some other process.
Starvation:
Starvation is the problem that occurs when high-priority processes keep executing and
low-priority processes get blocked for an indefinite time.
When a process is in its critical section, any other process that tries to enter its critical
section must loop continuously in the entry code. This is called busy waiting, and this type of
semaphore is also called a spinlock, because the process "spins" while waiting for the lock.
Job scheduling is the process of allocating system resources to many different tasks by an
operating system (OS). The system handles prioritized job queues that are awaiting CPU time, and
it determines which job is to be taken from which queue and the amount of time to be allocated
to the job.
A thread is a path of execution within a process. A process can contain multiple threads.
A thread is also known as lightweight process.
When a thread moves through the system, it is always in one of the five states:
(1) Ready
(2) Running
(3) Waiting
(4) Delayed
(5) Blocked
PART-B
(OR)
List the important factors in scheduling and explain with an example and Gantt chart, FCFS
and Round Robin scheduling?(NOV-2017)
CPU Scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated the CPU.
• FCFS Scheduling
• Round Robin Scheduling.
• SJF Scheduling.
• SRT Scheduling.
• Priority Scheduling.
• Multilevel Queue Scheduling.
• Multilevel Feedback Queue Scheduling.
Example:
The First-Come-First-Served algorithm is rarely used as a master scheme in modern
operating systems, but it is often embedded within other schemes.

Process   Burst Time
P1        24
P2        3
P3        3

Suppose the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time = (0 + 24 + 27)/3 = 17

If instead the processes arrive in the order P2, P3, P1, the Gantt chart is:

| P2 | P3 | P1 |
0    3    6    30

Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time = (6 + 0 + 3)/3 = 3
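A minimal C sketch (not from the notes) that reproduces the first FCFS figure: each process waits for the total burst time of the processes ahead of it.
#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};            /* P1, P2, P3 in arrival order */
    int n = 3, wait = 0, total = 0;

    for (int k = 0; k < n; k++) {
        printf("P%d waiting time = %d\n", k + 1, wait);
        total += wait;
        wait += burst[k];                /* the next process waits this much longer */
    }
    printf("average waiting time = %.2f\n", (double)total / n);
    return 0;
}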
ROUND ROBIN
• In the round robin scheduling, processes are dispatched in a FIFO manner but are given a
limited amount of CPU time called a time-slice or a quantum. If a process does not
complete before its CPU-time expires, the CPU is preempted and given to the next process
waiting in a queue.
• The preempted process is then placed at the back of the ready list. Round Robin Scheduling
is preemptive (at the end of time-slice) therefore it is effective in time-sharing
environments in which the system needs to guarantee reasonable response times for
interactive users.
1. Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds.
After this time has elapsed, the process is preempted and added to the end of the ready
queue.
2. If there are n processes in the ready queue and the time quantum is q, then each process
gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more
than (n-1)q time units.
3. Performance
-> q large: RR behaves the same as FCFS.
-> q small: q must still be large with respect to the context-switch time, otherwise the
overhead is too high.
Example (time quantum = 20):

Process   Burst Time
P1        53
P2        17
P3        68
P4        24

The Gantt chart is:

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
0    20   37   57   77   97  117  121  134  154  162
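A hedged C sketch of this round-robin run, assuming all four processes arrive at time 0 so that a simple cyclic scan visits them in the same order as the FIFO ready queue above:
#include <stdio.h>

int main(void)
{
    int rem[] = {53, 17, 68, 24};        /* remaining bursts of P1..P4 */
    int n = 4, q = 20, t = 0, done = 0;

    while (done < n) {
        for (int k = 0; k < n; k++) {
            if (rem[k] == 0)
                continue;                /* process already finished */
            int slice = rem[k] < q ? rem[k] : q;
            printf("t=%3d: P%d runs for %d\n", t, k + 1, slice);
            t += slice;
            rem[k] -= slice;
            if (rem[k] == 0)
                done++;
        }
    }
    printf("all processes complete at t=%d\n", t);
    return 0;
}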
SJF SCHEDULING
Like FCFS, SJF is non-preemptive; therefore, it is not useful in a time-sharing environment in
which reasonable response time must be guaranteed.
1. Associate with each process the length of its next CPU burst. Use these lengths to schedule
the process with the shortest time.
2. Two schemes: non-preemptive (once the CPU is given to a process, it keeps it until the
burst completes) and preemptive (if a new process arrives with a CPU burst shorter than the
remaining time of the running process, preempt; this scheme is known as Shortest-Remaining-
Time-First, SRT).
3. SJF is optimal - it gives the minimum average waiting time for a given set of processes.
Example:
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

-> SJF (preemptive):

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

-> Average waiting time = (9 + 1 + 0 + 2)/4 = 3
• The SRT is the preemptive counterpart of SJF and is useful in a time-sharing environment.
• In SRT scheduling, the process with the smallest estimated run-time to completion is run
next, including new arrivals.
• In the SJF scheme, once a job begins executing, it runs to completion.
• In the SRT scheme, a running process may be preempted by a new arrival process with a
shorter estimated run-time.
• The algorithm SRT has higher overhead than its counterpart SJF.
• The SRT must keep track of the elapsed time of the running process and must handle
occasional preemptions.
• In this scheme, small processes will run almost immediately on arrival. However, longer
jobs have an even longer mean waiting time.
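As an illustration (not from the notes), a small C simulation of preemptive SJF/SRT on the example data above; at each time unit it runs the arrived process with the least remaining time:
#include <stdio.h>

int main(void)
{
    int arrive[] = {0, 2, 4, 5};
    int burst[]  = {7, 4, 1, 4};
    int rem[]    = {7, 4, 1, 4};
    int n = 4, finished = 0, t = 0, wait_total = 0;

    while (finished < n) {
        int best = -1;
        for (int k = 0; k < n; k++)      /* arrived process with least remaining time */
            if (arrive[k] <= t && rem[k] > 0 &&
                (best < 0 || rem[k] < rem[best]))
                best = k;
        if (best < 0) { t++; continue; } /* CPU idle: nothing has arrived yet */
        rem[best]--;                     /* run the chosen process one time unit */
        t++;
        if (rem[best] == 0) {
            int w = t - arrive[best] - burst[best];
            printf("P%d finishes at %d, waiting time %d\n", best + 1, t, w);
            wait_total += w;
            finished++;
        }
    }
    printf("average waiting time = %.2f\n", (double)wait_total / n);
    return 0;
}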
Priority Scheduling
1. A priority number (integer) is associated with each process.
2. The CPU is allocated to the process with the highest priority (smallest integer = highest
priority). The algorithm can be:
-> Preemptive
-> Non-preemptive
3. SJF is priority scheduling where priority is the predicted next CPU burst time.
4. Problem: Starvation - low-priority processes may never execute.
5. Solution: Aging - as time progresses, increase the priority of the process.
The basic idea is straightforward: each process is assigned a priority, and the highest-priority
process is allowed to run. Equal-priority processes are scheduled in FCFS order. The Shortest-
Job-First (SJF) algorithm is a special case of the general priority scheduling algorithm.
An SJF algorithm is simply a priority algorithm where the priority is the inverse of the (predicted)
next CPU burst. That is, the longer the CPU burst, the lower the priority, and vice versa.
Priority can be defined either internally or externally. Internally defined priorities use some
measurable quantities or qualities to compute priority of a process.
• Time limits.
• Memory requirements.
• File requirements,
for example, number of open files.
• CPU vs. I/O requirements.
Externally defined priorities are set by criteria that are external to the operating system, such as
the importance of the process, the funds being paid for computer use, and other, often political,
factors.
• A preemptive priority algorithm will preempt the CPU if the priority of the newly
arrived process is higher than the priority of the currently running process.
• A non-preemptive priority algorithm will simply put the new process at the head of the
ready queue.
A major problem with priority scheduling is indefinite blocking or starvation. A solution to the
problem of indefinite blockage of the low-priority process is aging. Aging is a technique of
gradually increasing the priority of processes that wait in the system for a long period of time.
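A rough C sketch of how aging could be coded (the AGE_INTERVAL constant and the two example processes are illustrative assumptions, not from the notes); every AGE_INTERVAL ticks a waiting process has its priority number lowered (smaller number = higher priority), so it cannot starve forever:
#include <stdio.h>

#define AGE_INTERVAL 10   /* illustrative: promote after every 10 ticks waited */

struct proc { int prio; int waited; };   /* smaller prio = higher priority */

/* One clock tick: every waiting process ages; a long-waiting process has its
   priority number lowered so it eventually gets scheduled. */
void age(struct proc *p, int nproc)
{
    for (int k = 0; k < nproc; k++) {
        p[k].waited++;
        if (p[k].waited % AGE_INTERVAL == 0 && p[k].prio > 0)
            p[k].prio--;
    }
}

int main(void)
{
    struct proc p[] = { {1, 0}, {5, 0} };   /* P1 high, P2 low priority */
    for (int t = 0; t < 50; t++)
        age(p, 2);
    printf("after 50 ticks: P1 prio=%d, P2 prio=%d\n", p[0].prio, p[1].prio);
    return 0;
}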
A multilevel queue scheduling algorithm partitions the ready queue into several separate queues
(for instance, foreground and background). The processes are permanently assigned to one queue,
based on some property of the process, such as
• Memory size
• Process priority
• Process type
The algorithm chooses the process from the occupied queue that has the highest priority, and runs
that process either
• Preemptively or
• Non-preemptively
Multilevel Feedback Queue Example:-
1. Three queues:
❖ Q0 - round robin with time quantum 8 milliseconds
❖ Q1 - round robin with time quantum 16 milliseconds
❖ Q2 - FCFS
2. Scheduling (a small sketch follows this list):
❖ A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job
receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved
to queue Q1.
❖ At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it
still does not complete, it is preempted and moved to queue Q2.
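A hedged single-job walk-through of this feedback scheme in C; the 40 ms burst is a made-up example value:
#include <stdio.h>

int main(void)
{
    int rem = 40;                        /* hypothetical job needing 40 ms */
    int t = 0, quantum[] = {8, 16};

    for (int level = 0; level < 2 && rem > 0; level++) {
        int slice = rem < quantum[level] ? rem : quantum[level];
        printf("Q%d: runs %d ms (t=%d..%d)\n", level, slice, t, t + slice);
        t += slice;
        rem -= slice;                    /* not finished -> demoted to next queue */
    }
    if (rem > 0)                         /* Q2 is FCFS: run to completion */
        printf("Q2: runs remaining %d ms (t=%d..%d)\n", rem, t, t + rem);
    return 0;
}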
(OR)
What is Synchronization? Explain how semaphores can be used to deal with n-process
critical section problem?(MAY-2018)
• The critical section is a code segment where the shared variables can be accessed.
An atomic action is required in a critical section i.e. only one process can execute
in its critical section at a time. All the other processes have to wait to execute in
their critical sections.
• The entry section handles the entry into the critical section. It acquires the
resources needed for execution by the process. The exit section handles the exit
from the critical section. It releases the resources and also informs the other
processes that the critical section is free.
• Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical section at
any time. If any other processes require the critical section, they must wait until it
is free.
• Progress
Progress means that if a process is not using the critical section, then it should not
stop any other process from accessing it. In other words, any process can enter a
critical section if it is free.
• Bounded Waiting
Bounded waiting means that each process must have a limited waiting time. It
should not wait endlessly to access the critical section.
Peterson’s Solution
Peterson’s Solution is a classical software-based solution to the critical section problem.
In Peterson’s solution, we have two shared variables (the process structure is sketched below):
• boolean flag[i]: initialized to FALSE; initially no one is interested in entering the critical
section.
• int turn: indicates whose turn it is to enter the critical section.
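The structure of process Pi in Peterson's solution (with j = 1 - i), in the standard textbook form:
do {
    flag[i] = TRUE;              /* Pi announces interest */
    turn = j;                    /* but yields the turn to Pj */
    while (flag[j] && turn == j)
        ;                        /* busy-wait while Pj is interested and it is Pj's turn */

    /* critical section */

    flag[i] = FALSE;             /* Pi is no longer interested */

    /* remainder section */
} while (TRUE);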
TestAndSet
TestAndSet is a hardware solution to the synchronization problem. In TestAndSet,
we have a shared lock variable which can take either of two values:
0 - unlocked
1 - locked
Before entering into the critical section, a process inquires about the lock. If it is locked,
it keeps on waiting until it becomes free and if it is not locked, it takes the lock and executes the
critical section.
In TestAndSet, Mutual exclusion and progress are preserved but bounded waiting cannot
be preserved.
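The standard textbook definition of the instruction and the spinlock built from it, shown as a sketch (boolean, TRUE, FALSE as in the textbook's C-like pseudocode):
boolean test_and_set(boolean *target)
{
    boolean rv = *target;    /* read the old value ...            */
    *target = TRUE;          /* ... and set the lock, atomically  */
    return rv;
}

do {
    while (test_and_set(&lock))
        ;                    /* spin until the returned old value is FALSE */

    /* critical section */

    lock = FALSE;            /* release the lock */

    /* remainder section */
} while (TRUE);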
3.Describe about the Producer Consumer problem with bounded buffer using either message
passing or semaphores?(NOV-2014,NOV-2018)
In bounded buffer problem (or) Producer Consumer problem the buffer size is
specified.
• The producer consumer problem is a synchronization problem. There is a fixed size buffer
and the producer produces items and enters them into the buffer. The consumer removes
the items from the buffer and consumes them.
• A producer should not produce items into the buffer when the consumer is consuming an
item from the buffer, and vice versa. So the buffer should only be accessed by either the
producer or the consumer at a time.
• The producer consumer problem can be resolved using semaphores. The codes for the
producer and consumer process are given as follows −
The structure of the producer process:
while (true) {
   // produce an item
   wait (empty);
   wait (mutex);
   // add the item to the buffer
   signal (mutex);
   signal (full);
}
The structure of the consumer process:
while (true) {
   wait (full);
   wait (mutex);
   // remove an item from the buffer
   signal (mutex);
   signal (empty);
   // consume the removed item
}
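A runnable POSIX version of the same scheme, offered as a sketch; the buffer size 5, the item count, and the names buf/in/out are illustrative choices, not from the notes:
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5
#define ITEMS 10

int buf[N], in = 0, out = 0;
sem_t empty, full, mutex;   /* empty = free slots, full = filled slots */

void *producer(void *arg)
{
    for (int item = 1; item <= ITEMS; item++) {
        sem_wait(&empty);                /* wait (empty) */
        sem_wait(&mutex);                /* wait (mutex) */
        buf[in] = item; in = (in + 1) % N;
        printf("produced %d\n", item);
        sem_post(&mutex);                /* signal (mutex) */
        sem_post(&full);                 /* signal (full) */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int k = 0; k < ITEMS; k++) {
        sem_wait(&full);                 /* wait (full) */
        sem_wait(&mutex);                /* wait (mutex) */
        int item = buf[out]; out = (out + 1) % N;
        printf("consumed %d\n", item);
        sem_post(&mutex);                /* signal (mutex) */
        sem_post(&empty);                /* signal (empty) */
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}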
4.Discuss on:
a)Semaphores (NOV-2015)
(OR)
Definition
A semaphore is a protected variable whose value can be accessed and altered only by the
operations P and V and an initialization operation called 'semaphore-initialize'.
Binary semaphores can assume only the value 0 or the value 1; counting semaphores (also
called general semaphores) can assume any nonnegative value.
The P (or wait or sleep or down) operation on semaphore S, written as P(S) or wait(S),
operates as follows:
P(S): IF S > 0
      THEN S := S - 1
      ELSE (wait on S)
The V (or signal or wakeup or up) operation on semaphore S, written as V(S) or signal(S),
operates as follows:
V(S): IF (one or more processes are waiting on S)
      THEN (let one of these processes proceed)
      ELSE S := S + 1
Semaphore usage for mutual exclusion:
▪ Semaphore S; // initialized to 1
▪ wait (S);
Critical Section
signal (S);
Semaphore Implementation
1. Must guarantee that no two processes can execute wait () and signal () on the same
semaphore at the same time
2. Thus, implementation becomes the critical section problem where the wait and
signal code are placed in the critical section.
1. With each semaphore there is an associated waiting queue. Each entry in a waiting
queue has two data items: the value (of type integer) and a pointer to the next record
in the list.
2. Two operations:
-> block - place the process invoking the operation on the appropriate waiting queue.
-> wakeup - remove one of the processes from the waiting queue and place it in the
ready queue.
->Implementation of wait:
wait (S) {
   value--;
   if (value < 0) {
      // add this process to the waiting queue
      block();
   }
}
->Implementation of signal:
signal (S) {
   value++;
   if (value <= 0) {
      // remove a process P from the waiting queue
      wakeup(P);
   }
}
b) Monitors(NOV-2015)
A monitor is a high-level synchronization construct in which only one process may be
active at a time. Its general form is:
monitor monitor-name {
   // shared variable declarations
   procedure P1 (…) { …. }
   …
   procedure Pn (…) { …. }
   initialization code (…) { …. }
}
Figure: Monitors
A monitor solution to the dining-philosophers problem:
monitor DP
{
   enum { THINKING, HUNGRY, EATING } state[5];
   condition self[5];

   void pickup (int i) {
      state[i] = HUNGRY;
      test(i);
      if (state[i] != EATING)
         self[i].wait();
   }

   void putdown (int i) {
      state[i] = THINKING;
      // test left and right neighbors
      test((i + 4) % 5);
      test((i + 1) % 5);
   }

   void test (int i) {
      if ( (state[(i + 4) % 5] != EATING) &&
           (state[i] == HUNGRY) &&
           (state[(i + 1) % 5] != EATING) ) {
         state[i] = EATING;
         self[i].signal();
      }
   }

   initialization_code() {
      for (int i = 0; i < 5; i++)
         state[i] = THINKING;
   }
}
->Each philosopher i invokes the operations pickup() and putdown() in the following
sequence:
dp.pickup (i)
EAT
dp.putdown (i)
Monitor implementation using semaphores:
1. Variables:
   semaphore mutex;  // (initially = 1)
   semaphore next;   // (initially = 0)
   int next-count = 0;
2. Each procedure F is replaced by:
   wait(mutex);
   …
   body of F;
   …
   if (next-count > 0)
      signal(next);
   else
      signal(mutex);
   Mutual exclusion within a monitor is thus ensured.
3. For each condition variable x, we have:
   semaphore x-sem;  // (initially = 0)
   int x-count = 0;
4. The operation x.wait can be implemented as:
   x-count++;
   if (next-count > 0)
      signal(next);
   else
      signal(mutex);
   wait(x-sem);
   x-count--;
5. The operation x.signal can be implemented as:
   if (x-count > 0) {
      next-count++;
      signal(x-sem);
      wait(next);
      next-count--;
   }
5.Explain the Threading issues? (NOV-2016)
The fork() and exec() System Calls
• Q: If one thread forks, is the entire process copied, or is the new process single-threaded?
• A: System dependent.
• A: If the new process execs right away, there is no need to copy all the other threads. If it
doesn't, then the entire process should be copied.
• A: Many versions of UNIX provide multiple versions of the fork call for this purpose.
Signal Handling
• Q: When a multi-threaded process receives a signal, to what thread should that signal be
delivered?
• A: There are four major options:
1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals for the process.
Thread Cancellation
• Threads that are no longer needed may be cancelled by another thread in one of two ways:
1. Asynchronous Cancellation cancels the thread immediately.
2. Deferred Cancellation sets a flag indicating the thread should cancel itself when
it is convenient. It is then up to the cancelled thread to check this flag periodically
and exit nicely when it sees the flag set.
• ( Shared ) resource allocation and inter-thread data transfers can be problematic with
asynchronous cancellation.
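For illustration (not from the notes), deferred cancellation with the real pthreads API: the worker is only cancelled when it reaches an explicit cancellation point, pthread_testcancel():
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

void *worker(void *arg)
{
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();    /* safe point: honor a pending cancel here */
        usleep(1000);
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    void *res;
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);                    /* let the worker run for a while */
    pthread_cancel(t);           /* request (deferred) cancellation */
    pthread_join(t, &res);
    if (res == PTHREAD_CANCELED)
        printf("worker cancelled at a cancellation point\n");
    return 0;
}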
Thread-Local Storage
• Most data is shared among threads, and this is one of the major benefits of using threads
in the first place.
• However, sometimes threads need thread-specific data as well.
• Most major thread libraries (pThreads, Win32, Java) provide support for thread-specific
data, known as thread-local storage or TLS. Note that this is more like static data than
local variables, because it does not cease to exist when the function ends.
Scheduler Activations
• Many systems place an intermediate data structure, the lightweight process (LWP),
between user and kernel threads; each LWP is attached to a kernel thread.
• If the kernel thread blocks, then the LWP blocks, which blocks the user thread.
• Ideally there should be at least as many LWPs available as there could be concurrently
blocked kernel threads. Otherwise if all LWPs are blocked, then user threads will have to
wait for one to become available.
6.Calculate the average waiting time and average turn around time for the FCFS CPU
scheduling for the given data, Specify which is the best algorithm, and discuss why? Illustrate
the scheduling with Gantt chart?(APRIL-2017)
Process   Burst Time   Arrival Time
P1        12           0
P2        25           2
P3        13           1
P4        7            0
P5        11           5

In FCFS order of arrival (assuming the tie at time 0 is broken by the listed order, P1 before
P4), the Gantt chart is:

| P1 | P4 | P3 | P2 | P5 |
0    12   19   32   57   68

Waiting time = start time - arrival time:
P1 = 0, P4 = 12, P3 = 18, P2 = 30, P5 = 52
Average waiting time = (0 + 12 + 18 + 30 + 52)/5 = 112/5 = 22.4

Turnaround time = completion time - arrival time:
P1 = 12, P4 = 19, P3 = 31, P2 = 55, P5 = 63
Average turnaround time = (12 + 19 + 31 + 55 + 63)/5 = 180/5 = 36
• The scheduling method is non-preemptive; a process runs to completion once started.
• Due to the non-preemptive nature of the algorithm, the problem of starvation may occur.
• Although it is easy to implement, FCFS gives poor performance, since the average waiting
time is higher as compared to other scheduling algorithms.
A sample C program simulating the producer-consumer problem with a menu-driven interface:
#include <stdio.h>
#include <stdlib.h>

int n, produced, consumed, buffer[5];
int result, i;

/* Produce one item into the buffer if the previous item was consumed
   and there is room left. */
void producer(void)
{
    if (consumed == 1 && i < 4) {
        result = result + 1;
        i++;
        buffer[i] = result;
        produced = 1;
        consumed = 0;
        printf("produced buffer=%d\n", buffer[i]);
    } else {
        printf("buffer still to be consumed\n");
    }
}

/* Consume the most recently produced item, if any. */
void consumer(void)
{
    if (produced == 1 && i > 0) {
        consumed = 1;
        printf("consumed buffer=%d\n", buffer[i]);
        produced = 0;
        i--;
    } else {
        printf("buffer still to be produced\n");
    }
}

int main(void)
{
    produced = 0;
    consumed = 1;
    result = 0;
    i = 0;
    while (1) {
        printf("\nenter the choice:");
        printf("\n1.producer\n2.consumer\n3.exit\n");
        scanf("%d", &n);
        if (n == 1)
            producer();
        else if (n == 2)
            consumer();
        else
            exit(0);
    }
    return 0;
}
8.Explain the classical problems for synchronization hardware and draw the process using
real time scheduling with suitable examples?(NOV-2018)
• To generalize the solution(s) expressed above, each process when entering their critical
section must set some sort of lock, to prevent other processes from entering their critical
sections simultaneously, and must release the lock when exiting their critical section, to
allow other processes to proceed. Obviously it must be possible to attain the lock only
when no other process has already set a lock. Specific implementations of this general
procedure can get quite complicated, and may include hardware solutions as outlined in
this section.
• One simple solution to the critical section problem is to simply prevent a process from
being interrupted while in its critical section, which is the approach taken by non-
preemptive kernels. Unfortunately, this does not work well in multiprocessor
environments, due to the difficulties in disabling and re-enabling interrupts on all
processors. There is also a question as to how this approach affects timing if the clock
interrupt is disabled.
• Another approach is for hardware to provide certain atomic operations. These operations
are guaranteed to operate as a single instruction, without interruption. One such operation
is the "Test and Set", which simultaneously sets a boolean lock variable and returns its
previous value, as sketched in the TestAndSet section of the earlier answer.
• The above examples satisfy the mutual exclusion requirement, but unfortunately do not
guarantee bounded waiting. If there are multiple processes trying to get into their critical
sections, there is no guarantee of what order they will enter, and any one process could
have the bad luck to wait forever until they got their turn in the critical section. ( Since
there is no guarantee as to the relative rates of the processes, a very fast process could
theoretically release the lock, whip through their remainder section, and re-lock the lock
before a slower process got a chance. As more and more processes are involved vying for
the same resource, the odds of a slow process getting locked out completely increase. )
• The figure below illustrates a solution using test-and-set that does satisfy this requirement,
using two shared data structures, boolean lock and boolean waiting[N], where N is the
number of processes in contention for critical sections:
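A reconstruction of that figure in the standard textbook form (process i's code; j and key are local variables, lock and waiting[] are shared):
do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = test_and_set(&lock);   /* spin until granted the lock */
    waiting[i] = FALSE;

    /* critical section */

    j = (i + 1) % n;                 /* scan for the next waiting process */
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;                /* no one waiting: free the lock */
    else
        waiting[j] = FALSE;          /* pass entry directly to process j */

    /* remainder section */
} while (TRUE);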
1. Bounded-Buffer Problem
2. Readers and Writers Problem
3. Dining-Philosophers Problem
Bounded-Buffer Problem
The structure of the producer process:
while (true) {
   // produce an item
   wait (empty);
   wait (mutex);
   // add the item to the buffer
   signal (mutex);
   signal (full);
}
The structure of the consumer process:
while (true) {
   wait (full);
   wait (mutex);
   // remove an item from the buffer
   signal (mutex);
   signal (empty);
   // consume the removed item
}
Readers-Writers Problem
o Readers – only read the data set; they do not perform any
updates
o Writers – can both read and write.
2. Problem – allow multiple readers to read at the same time. Only one single writer
can access the shared data at the same time.
3. Shared Data
o Data set
o Semaphore mutex initialized to 1.
o Semaphore wrt initialized to 1.
o Integer readcount initialized to 0.
4. The structure of a writer process:
while (true) {
   wait (wrt) ;
   // writing is performed
   signal (wrt) ;
}
5. The structure of a reader process:
while (true) {
   wait (mutex) ;
   readcount ++ ;
   if (readcount == 1)
      wait (wrt) ;
   signal (mutex) ;
   // reading is performed
   wait (mutex) ;
   readcount -- ;
   if (readcount == 0)
      signal (wrt) ;
   signal (mutex) ;
}
Dining-Philosophers Problem
Dining Philosophers
There are N philosophers sitting around a circular table eating spaghetti and discussing
philosophy.The problem is that each philosopher needs 2 chopsticks to eat, and there are only N
chopsticks, each one between each 2 philosophers.
1. Shared data: semaphore chopstick[5]; // each initialized to 1
2. The structure of philosopher i:
while (true) {
   wait ( chopstick[i] );
   wait ( chopstick[ (i + 1) % 5] );
   // eat
   signal ( chopstick[i] );
   signal ( chopstick[ (i + 1) % 5] );
   // think
}
10.Write the brief notes on Real Time Scheduling and Various Scheduling
Algorithms?(NOV-2019)
REAL-TIME SCHEDULING:
• Hard real-time systems are required to complete a critical task within a guaranteed amount
of time.
• Generally, a process is submitted along with a statement of the amount of time in which it
needs to complete or perform I/O.
• The scheduler then either admits the process, guaranteeing that the process will complete
on time, or rejects the request as impossible. This is known as RESOURCE
RESERVATION.
• Such a guarantee requires that the scheduler know exactly how long each type of OS
function takes to perform, and therefore each operation must be guaranteed to take a
maximum amount of time.
• Hard real-time systems are therefore composed of special-purpose software running on
hardware dedicated to their critical process, and lack the full functionality of modern
computers and operating systems.
• SOFT REAL-TIME computing is less restrictive. It requires that critical processes receive
priority over less fortunate ones.
• Implementing soft real-time functionality requires careful design of the scheduler and
related aspects of the OS.
• First, the system must have priority scheduling, and real-time processes must have the
highest priority. The priority of real-time processes must not degrade over time, even
though the priority of non-real-time processes may.
• Second, the dispatch latency must be small: the smaller the latency, the faster a real-time
process can start executing once it is runnable.
• To keep dispatch latency low, we need to allow system calls to be preemptible. There are
several ways to achieve this.
• One is to insert preemption points in long-duration system calls, which check whether a
high-priority process needs to be run. If so, a context switch takes place; when the
high-priority process terminates, the interrupted process continues with the system call.
• Preemption points can be placed only at "safe" locations in the kernel - only where kernel
data structures are not being modified. Even with preemption points, dispatch latency can
be large, because only a few preemption points can practically be added to a kernel.
• Another method is to make the entire kernel preemptible by protecting all kernel data
structures with synchronization mechanisms. With this method, the kernel can always be
preemptible, because any kernel data being updated are protected from modification by
the high-priority process. This is the most effective method.
• If a higher-priority process needs to read or modify kernel data currently being accessed
by a lower-priority process, the higher-priority process is forced to wait for the
lower-priority one to finish. This is known as PRIORITY INVERSION.
DISPATCH LATENCY:
Dispatch latency is the time it takes the dispatcher to stop one process and start another
running; the techniques above aim to keep this latency small.