Os Unit 2
UNIT II
2 Marks
1. What is a thread? (APR’15, NOV ‘15)
A thread, otherwise called a lightweight process (LWP), is a basic unit of CPU utilization; it
comprises a thread ID, a program counter, a register set and a stack. It shares with other threads
belonging to the same process its code section, data section, and operating system resources such as
open files and signals.
Kernel threads
Kernel threads are supported directly by the operating system: thread creation, scheduling and
management are done by the operating system. Therefore they are slower to create and manage
than user threads. If a thread performs a blocking system call, the kernel can schedule
another thread in the application for execution.
11. What are the various scheduling criteria for CPU scheduling?
The various scheduling criteria are
• CPU utilization
• Throughput
• Turnaround time
• Waiting time
• Response time
15.What happens if the time allocated in a Round Robin Scheduling is very large? And what
happens if the time allocated is very low?
If the time quantum is very large, Round Robin degenerates into FCFS scheduling. If the time
quantum is very low, processor throughput is reduced, since more time is spent on context switching.
17. What is the difference between process and thread? (May 2017)
1. Threads are easier to create than processes since they don't require a separate address space.
2. Multithreading requires careful programming since threads share data structures that should only be
modified by one thread at a time. Unlike threads, processes don't share the same address space.
3. Threads are considered lightweight because they use far less resources than processes.
4. Processes are independent of each other. Threads, since they share the same address space, are
interdependent, so caution must be taken so that different threads don't step on each other.
This is really another way of stating #2 above.
5. A process can consist of multiple threads.
Spooling, however, is capable of overlapping the I/O operations of one job with the processor
operations of another job.
20. What is meant by CPU–I/O Burst Cycle, CPU burst, I/O burst?
CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait.
A CPU burst is the length of time a process needs to use the CPU before it next makes a system call
(normally a request for I/O).
An I/O burst is the length of time a process spends waiting for I/O to complete.
23. What are the uses of job queues, ready queue and device queue? (MAY’17)
The Operating System maintains the following important process scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.
28. What are conditions under which a deadlock situation may arise? Or Write four general
strategies for dealing with deadlocks? (APR’15)(NOV’14)
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
Mutual exclusion
Hold and wait
No pre-emption
Circular-wait
A deadlock-avoidance algorithm requires information about the resources currently available, the
resources currently allocated to each process, and the future requests and releases of each process,
to decide whether each request could be satisfied or must wait in order to avoid a possible future deadlock.
11 Marks
CPU SCHEDULING
Process execution begins with a CPU burst, followed by an I/O burst, then another CPU burst, and so
on, e.g.: load, store, add, store, read from file (CPU burst); wait for I/O (I/O burst); repeating
until a final CPU burst ends execution.
CPU Scheduler
The CPU scheduler selects one of the processes in the ready queue to be executed.
There are two types of scheduling:
1. Preemptive scheduling
2. Non-preemptive scheduling
Preemptive scheduling – the CPU can be taken away from a process while it is executing, before its
CPU burst completes.
Non-preemptive scheduling – once the CPU has been allocated to a process it cannot be taken away;
the process keeps the CPU until it releases it by terminating or by switching to the waiting state.
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state.
2. Switches from running to ready state.
3. Switches from waiting to ready.
4. Terminates.
Scheduling under 1 and 4 is non-preemptive.
All other scheduling is preemptive.
Dispatcher
Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this
involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that program
Dispatch latency – time it takes for the dispatcher to stop one process and start another running.
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – # of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time it takes from when a request was submitted until the first response
is produced, not output (for time-sharing environment)
Optimization Criteria
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
Scheduling algorithms:
First-come, first-served scheduling (FCFS)
Shortest-job-first scheduling (SJF)
Priority scheduling
Round-robin scheduling (RR)
Multilevel queue scheduling
Multilevel feedback queue scheduling
FCFS example – processes served in the order P1, P2, P3, P4, with burst times 3, 6, 4 and 2 ms
(the bursts follow from the turnaround times below):
Gantt chart:
P1 P2 P3 P4
0 3 9 13 15
Waiting time
Process waiting time
P1 0
P2 3
P3 9
P4 13
Turnaround time
Process TAT
P1 3
P2 9
P3 13
P4 15
Average TAT=(3+9+13+15)/4=10 ms
SJF example – burst times P1=6, P2=8, P3=7, P4=3 ms (read off the Gantt chart):
Gantt chart:
P4 P1 P3 P2
0 3 9 16 24
Waiting time
Process waiting time
P1 3
P2 16
P3 9
P4 0
Average waiting time=(3+16+9+0)/4=7 ms
Priority scheduling example (a smaller priority number implies higher priority):
Process Burst time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Gantt chart:
P2 P5 P1 P3 P4
0 1 6 16 18 19
Waiting time
Process waiting time
P1 6
P2 0
P3 16
P4 18
P5 1
Average TAT=(16+1+18+19+6)/5=12 ms
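For all three non-preemptive algorithms above (FCFS, SJF, priority), once the execution order is
fixed the waiting and turnaround times follow mechanically. A minimal C sketch (the helper name
schedule is illustrative; all processes are assumed to arrive at time 0):

#include <stdio.h>

/* Waiting/turnaround calculator for any non-preemptive schedule; all
 * processes are assumed to arrive at time 0. order[] lists process
 * indices in execution order (FCFS: arrival order; SJF: sorted by
 * burst; priority: sorted by priority). */
void schedule(const int burst[], const int order[], int n) {
    int t = 0, total_wait = 0, total_tat = 0;
    for (int k = 0; k < n; k++) {
        int p = order[k];
        int wait = t;             /* time spent waiting in the ready queue */
        int tat  = t + burst[p];  /* arrival at 0: completion time == TAT  */
        printf("P%d: waiting=%d turnaround=%d\n", p + 1, wait, tat);
        total_wait += wait;
        total_tat += tat;
        t += burst[p];
    }
    printf("avg waiting=%.2f avg TAT=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
}

int main(void) {
    /* Priority example above: bursts 10,1,2,1,5; run order P2,P5,P1,P3,P4 */
    int burst[] = {10, 1, 2, 1, 5};
    int order[] = {1, 4, 0, 2, 3};
    schedule(burst, order, 5);    /* prints avg TAT = 12.00 */
    return 0;
}

Running it with burst={3,6,4,2} and order={0,1,2,3} reproduces the FCFS figures above (average
TAT 10 ms), and with the order sorted by burst it reproduces the SJF ones.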
Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this
time has elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of
the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time
units.
Performance
q large: RR degenerates to FIFO (FCFS).
q small: more context switches; q must remain large with respect to the context-switch time,
otherwise the overhead is too high.
Example – burst times P1=24, P2=3, P3=3 ms, time quantum=4 ms:
Gantt chart:
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
Average TAT=(30+7+10)/3=15.67 ms
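The effect of the quantum can be checked with a small simulation. A C sketch (the helper name
round_robin is illustrative; all processes are assumed to arrive at time 0):

#include <stdio.h>

/* Round-robin simulation: all processes arrive at time 0.
 * remaining[] is consumed in time-quantum slices until every burst is 0. */
void round_robin(int remaining[], int n, int quantum) {
    int t = 0, done = 0, total_tat = 0;
    while (done < n) {
        for (int p = 0; p < n; p++) {
            if (remaining[p] == 0) continue;
            int slice = remaining[p] < quantum ? remaining[p] : quantum;
            t += slice;
            remaining[p] -= slice;
            if (remaining[p] == 0) {        /* process finishes at time t */
                printf("P%d turnaround=%d\n", p + 1, t);
                total_tat += t;
                done++;
            }
        }
    }
    printf("avg TAT=%.2f\n", (double)total_tat / n);
}

int main(void) {
    int burst[] = {24, 3, 3};       /* the example above, quantum = 4 ms */
    round_robin(burst, 3, 4);       /* reproduces avg TAT = 15.67 ms     */
    return 0;
}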
Real-Time Scheduling
First, the system must have priority scheduling, and real-time processes must have the highest
priority.
The priority of real-time processes must not degrade over time, even though the priority of non-
real-time processes may. Second, the dispatch latency must be small: the smaller the latency, the
faster a real-time process can start executing once it is runnable. A problem arises when a
high-priority process needs kernel data that are currently being accessed by a lower-priority
process: the high-priority process would be waiting for a lower-priority one to finish. This
situation is known as priority inversion.
In fact, a chain of processes could all be accessing resources that the high-priority process
needs. This problem can be solved via the priority-inheritance protocol, in which all these
processes (the ones accessing resources that the high-priority process needs) inherit the high
priority until they are done with the resource in question. When they are finished, their priority
reverts to its original value.
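POSIX exposes the priority-inheritance protocol directly on mutexes. A minimal sketch, assuming a
system that supports the _POSIX_THREAD_PRIO_INHERIT option (error handling omitted):

#include <pthread.h>

/* Initialize a mutex that uses the priority-inheritance protocol: a
 * low-priority thread holding the lock temporarily inherits the
 * priority of the highest-priority thread blocked on it, so the lock
 * holder cannot be preempted indefinitely by medium-priority work. */
int init_pi_mutex(pthread_mutex_t *m) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    int rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}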
The conflict phase of dispatch latency has two components:
1. Preemption of any process running in the kernel
2. Release by low-priority processes resources needed by the high-priority process
As an example, in Solaris 2, the dispatch latency with preemption disabled is over 100 milliseconds.
However, the dispatch latency with preemption enabled is usually reduced to 2 milliseconds.
THREADS: OVERVIEW
DEFINITION
A thread, also called a lightweight process (LWP), is a basic unit of CPU utilization.
It comprises a thread ID, a program counter, a register set and a stack.
It shares with other threads belonging to the same process its code section, data section, and
other operating system resources such as open files and signals.
Benefits of multithreaded programming
1. Responsiveness
2. Resource sharing
3. Economy
4. Utilization of multiprocessor architecture.
User and Kernel Threads
Threads may be provided either at the user level (user threads) or by the kernel (kernel threads).
USER THREADS:
Supported above the kernel and are implemented by a thread library at the user level.
The library provides support for thread execution, scheduling and management with no support
from the kernel.
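For example, with the POSIX Pthreads library (provided as a thread library on many systems),
creating and joining a thread looks like this minimal sketch:

#include <pthread.h>
#include <stdio.h>

/* Thread body: runs concurrently with main, sharing its address space. */
void *worker(void *arg) {
    int *value = arg;               /* shared data: same data section */
    printf("worker sees value = %d\n", *value);
    return NULL;
}

int main(void) {
    pthread_t tid;                  /* thread ID */
    int value = 42;
    pthread_create(&tid, NULL, worker, &value);  /* create the thread  */
    pthread_join(tid, NULL);        /* wait for it to terminate        */
    return 0;
}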
Target thread:
A thread that is to be cancelled is often referred to as the target thread.
Cancellation of a target thread may be carried out in two general ways:
1. Asynchronous cancellation terminates the target thread immediately
2. Deferred cancellation allows the target thread to periodically check whether it should be
cancelled, allowing cancellation only at safe points.
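With Pthreads, deferred cancellation looks roughly like the sketch below: the target thread marks
its safe points with pthread_testcancel(), and another thread requests cancellation with
pthread_cancel().

#include <pthread.h>
#include <unistd.h>

/* Target thread: deferred cancellation (the Pthreads default). The
 * thread is cancelled only at cancellation points such as
 * pthread_testcancel(), i.e., at safe points. */
void *target(void *arg) {
    (void)arg;
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();       /* safe point: honor a pending cancel */
    }
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, target, NULL);
    sleep(1);                       /* let it run briefly */
    pthread_cancel(tid);            /* request cancellation */
    pthread_join(tid, NULL);        /* reaped once it reaches a safe point */
    return 0;
}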
Thread pools
Motivating example:
A web server creates a new thread to service each request.
Two concerns:
1. The amount of time required to create the thread prior to servicing the request, compounded with
the fact that the thread will be discarded once it has completed its work – that is, the overhead of
creating the thread.
2. With no limit on the number of threads created, unlimited threads could exhaust system resources,
such as CPU time or memory.
To overcome these problems we need thread pools.
General idea:
Create a pool of threads at process startup.
If a request comes in, a thread is woken from the pool and the request is assigned to it; if no
thread is available, the server waits until one becomes free.
After completing the service, the thread returns to the pool.
Advantages:
Usually slightly faster to service a request with an existing thread than to create a new thread
Allows the number of threads in the application(s) to be bound to the size of the pool
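A minimal sketch of the idea in C with Pthreads (the bounded queue layout and names such as submit
and worker are illustrative, not a standard API; the workers here run forever):

#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE  4
#define QUEUE_CAP 16

/* Bounded queue of pending requests, protected by a mutex and a
 * condition variable; idle workers sleep until a request arrives. */
static int queue[QUEUE_CAP], head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

static void submit(int request) {
    pthread_mutex_lock(&lock);
    if (count < QUEUE_CAP) {            /* bounded: excess requests dropped */
        queue[tail] = request;
        tail = (tail + 1) % QUEUE_CAP;
        count++;
        pthread_cond_signal(&nonempty); /* wake one idle worker */
    }
    pthread_mutex_unlock(&lock);
}

static void *worker(void *arg) {
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)              /* no work: stay in the pool */
            pthread_cond_wait(&nonempty, &lock);
        int request = queue[head];
        head = (head + 1) % QUEUE_CAP;
        count--;
        pthread_mutex_unlock(&lock);
        printf("worker %ld servicing request %d\n", id, request);
    }
}

int main(void) {
    pthread_t tid[POOL_SIZE];
    for (long i = 0; i < POOL_SIZE; i++)    /* pool created at startup */
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int r = 0; r < 8; r++)
        submit(r);                          /* incoming requests */
    pthread_join(tid[0], NULL);             /* workers run forever */
    return 0;
}

The pool size bounds resource use, and each request reuses an existing thread instead of paying
thread-creation overhead.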
11. Consider the following set of processes with the length of the CPU burst given in
milliseconds (10)
Process Burst time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
(a) Draw Gantt charts that illustrate the execution of these processes using the following scheduling
algorithms: FCFS, SJF, non-preemptive priority (a smaller priority number implies higher
priority).
(b) What is the turnaround time of all processes for each scheduling algorithm?
FCFS:
Gantt chart:
P1 P2 P3 P4 P5
0 10 11 13 14 19
Waiting time
Process waiting time
P1 0
P2 10
P3 11
P4 13
P5 14
Average TAT=(10+11+13+14+19)/5=13.4 ms
SJF:
Gantt chart:
P2 P4 P3 P5 P1
0 1 2 4 9 19
Waiting time
Process waiting time
P1 9
P2 0
P3 2
P4 1
P5 4
Average TAT=(19+1+4+2+9)/5=7 ms
NON-PREEMPTIVE PRIORITY SCHEDULING:
Gantt chart:
P2 P5 P1 P3 P4
0 1 6 16 18 19
Waiting time
Process waiting time
P1 6
P2 0
P3 16
P4 18
P5 1
Average TAT=(16+1+18+19+6)/5=12 ms
12. Calculate the average waiting time and average turnaround time for processes P1–P4 with arrival
times 0, 1, 2, 3 ms and burst times 7, 3, 8, 5 ms under the following algorithms:
(a) FCFS (b) Preemptive SJF (SRTF) (c) Round Robin (time quantum = 1 ms)
FCFS:
Gantt chart:
P1 P2 P3 P4
0 7 10 18 23
Waiting time
Process waiting time
P1 0-0=0
P2 7-1=6
P3 10-2=8
P4 18-3=15
Average TAT=((7-0)+(10-1)+(18-2)+(23-3))/4=52/4=13 ms
Preemptive SJF(SRTF):
Gantt chart:
P1 P2 P2 P2 P4 P1 P3
0 1 2 3 4 9 15 23
Waiting time
Process waiting time
P1 9-0-1=8
P2 3-1-2=0
P3 15-2=13
P4 4-3=1
Average TAT=((15-0)+(4-1)+(23-2)+(9-3))/4=45/4=11.25 ms
Round Robin (time quantum = 1 ms):
Gantt chart:
P1 P2 P3 P4 P1 P2 P3 P4 P1 P2 P3 P4 P1 P3 P4 P1 P3 P4 P1 P3 P1 P3 P3
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
Waiting time (completion − arrival − burst)
Process waiting time
P1 21-0-7=14
P2 10-1-3=6
P3 23-2-8=13
P4 18-3-5=10
Average TAT=((21-0)+(10-1)+(23-2)+(18-3))/4=66/4=16.5 ms
13. Write short notes on deadlock and its characteristics. (6 Marks)
If a process requests a resource that is not available at the time, the process enters a wait state.
Waiting processes may never again change state, because the resources they have requested are held
by other waiting processes; this situation is called deadlock.
Example:
Suppose a computer has one tape drive and one plotter. Process A requests the tape drive and process
B requests the plotter; both requests are granted. Now A requests the plotter and B requests the
tape drive. Since neither process gives up the resource it already holds, neither request can be
granted. This situation is a deadlock.
Example
Semaphores A and B, each initialized to 1:
P0: wait(A); wait(B);
P1: wait(B); wait(A);
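The same interleaving can be reproduced with two Pthreads mutexes standing in for the semaphores;
this sketch deadlocks by design if each thread takes its first lock before the other takes its
second:

#include <pthread.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

/* P0: takes A then B. */
void *p0(void *arg) {
    (void)arg;
    pthread_mutex_lock(&A);   /* wait(A) */
    pthread_mutex_lock(&B);   /* wait(B): blocks if P1 already holds B */
    /* ... critical section ... */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

/* P1: takes B then A -- the opposite order, so a circular wait is possible. */
void *p1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&B);   /* wait(B) */
    pthread_mutex_lock(&A);   /* wait(A): blocks if P0 already holds A */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);   /* may never return: deadlock */
    pthread_join(t1, NULL);
    return 0;
}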
SYSTEM MODEL:
Under the normal mode of operation a process may utilize a resource in only the following
sequence:
1. Request: if the request cannot be granted immediately (e.g., the resource is being used by
another process), then the requesting process must wait until it can acquire the resource.
2. Use: the process can operate on the resource (e.g., if the resource is a printer, the process
can print).
3. Release: the process releases the resource.
Deadlock Characterization
Necessary condition:
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
Mutual exclusion: at least one resource must be held in a non-sharable mode; that is, only one
process at a time can use the resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.
Hold and wait: a process must be holding at least one resource and waiting to acquire additional
resources that are held by other processes.
No preemption: a resource can be released only voluntarily by the process holding it, after that
process has completed its task.
Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a
resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn-1 is waiting for
a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
Resource-allocation graph: deadlocks can be described more precisely in terms of a directed graph
called a system resource-allocation graph.
This graph consists of a set of vertices V and a set of edges E.
The set of vertices V is partitioned into two different types:
P – the set containing all active processes.
R – the set consisting of all resource types.
[Figure: resource-allocation graph with processes P1, P2, P3 and resource types R1, R2, R3, R4]
Request edge: a directed edge Pi → Rj is called a request edge.
Assignment edge: a directed edge Rj → Pi is called an assignment edge.
Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold
any other resources.
Require process to request and be allocated all its resources before it begins execution, or allow process
to request resources only when the process has none. Low resource utilization; starvation possible.
No Preemption – If a process that is holding some resources requests another resource that
cannot be immediately allocated to it, then all resources currently being held are released.
Preempted resources are added to the list of resources for which the process is waiting.
Process will be restarted only when it can regain its old resources, as well as the new ones that it is
requesting.
Circular Wait – impose a total ordering of all resource types, and require that each process
requests resources in an increasing order of enumeration.
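In code, the circular-wait rule amounts to "always acquire locks in one fixed global order". A
sketch in C (ordering locks by address is one common convention, not the only one):

#include <pthread.h>

/* Circular-wait prevention: impose a total order on locks (here, by
 * address) and always acquire the lower-ordered lock first. */
void lock_pair(pthread_mutex_t *x, pthread_mutex_t *y) {
    if (x > y) { pthread_mutex_t *t = x; x = y; y = t; } /* order them */
    pthread_mutex_lock(x);    /* every thread takes locks in the same  */
    pthread_mutex_lock(y);    /* order, so no cycle of waits can form  */
}

void unlock_pair(pthread_mutex_t *x, pthread_mutex_t *y) {
    pthread_mutex_unlock(x);  /* release order does not matter */
    pthread_mutex_unlock(y);
}

Rewriting the deadlock example above so that both P0 and P1 call lock_pair(&A, &B) removes the
circular wait.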
Resource allocation graph for dead lock avoidance (for one instance of each resource)
Each process must declare the maximum number of instance required for each resource type
upon entering the system.
When a process requests a set of resources, the system determines whether the allocation of
these resources will have the system in a safe state.
Yes: allocate the resources
No: the process must wait
Data Structures for the Banker’s Algorithm
Available: Vector of length m. If available [j] = k, there are k instances of resource type Rj
available.
Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k instances of resource
type Rj.
Allocation: n x m matrix. If Allocation [i,j] = k, then Pi is currently allocated k instances of Rj.
Need: n x m matrix. If Need [i,j] = k, then Pi may need k more instances of Rj to complete its task.
Need [i,j] = Max[i,j] – Allocation [i,j].
Safety Algorithm
1.Let Work and Finish be vectors of length m and n, respectively.
Initialize Work = Available
Finish [i] = false for i = 1, 2, 3, …, n.
2. Find an i such that both:
(a) Finish[i] = false
(b) Need_i <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocation_i
Finish[i] = true
go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
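A direct C transcription of steps 1–4 (a sketch; the process count N, resource count M and the
array layout are illustrative):

#include <stdbool.h>
#include <stdio.h>

#define N 5  /* processes */
#define M 3  /* resource types */

/* Banker's safety algorithm: returns true and fills seq[] with a safe
 * sequence if one exists. Follows steps 1-4 above. */
bool is_safe(int available[M], int alloc[N][M], int need[N][M], int seq[N]) {
    int work[M];
    bool finish[N] = {false};
    for (int j = 0; j < M; j++) work[j] = available[j];   /* step 1 */

    for (int done = 0; done < N; ) {
        bool found = false;
        for (int i = 0; i < N; i++) {                     /* step 2 */
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = false; break; }
            if (ok) {                                     /* step 3 */
                for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                finish[i] = true;
                seq[done++] = i;
                found = true;
            }
        }
        if (!found) return false;   /* step 4: some Finish[i] still false */
    }
    return true;                    /* step 4: all Finish[i] true => safe */
}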
Deadlock Detection (several instances of a resource type)
Available: A vector of length m indicates the number of available resources of each type.
Allocation: An n x m matrix defines the number of resources of each type currently allocated to
each process.
Request: An n x m matrix indicates the current request of each process.
If Request[i,j] = k, then process Pi is requesting k more instances of resource type Rj.
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize as follows:
(a) Work = Available
(b) For i = 1,2, …, n,
If Allocation[i] ≠ 0, then Finish[i] = false;
Otherwise, Finish[i] = true.
2. Find an index i such that both:
(a) Finish[i] == false
(b) Request_i <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocation_i
Finish[i] = true
go to step 2.
4. If Finish[i] == false for some i, 1 <= i <= n, then the system is in a deadlocked state.
Moreover, if Finish[i] == false, then Pi is deadlocked.
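The detection algorithm is essentially the safety loop with Request in place of Need and with
Finish pre-initialized from Allocation. A C sketch mirroring the safety-algorithm code above (same
illustrative N and M):

#include <stdbool.h>

#define N 5  /* processes */
#define M 3  /* resource types */

/* Deadlock detection (steps 1-4 above): sets deadlocked[i] = true for
 * every process that cannot finish. */
void detect(int available[M], int alloc[N][M], int request[N][M],
            bool deadlocked[N]) {
    int work[M];
    bool finish[N];
    for (int j = 0; j < M; j++) work[j] = available[j];   /* step 1(a) */
    for (int i = 0; i < N; i++) {                         /* step 1(b): a   */
        finish[i] = true;                                 /* process holding */
        for (int j = 0; j < M; j++)                       /* nothing cannot  */
            if (alloc[i][j] != 0) { finish[i] = false; break; } /* deadlock */
    }
    bool progress = true;
    while (progress) {                                    /* steps 2-3 */
        progress = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < M; j++)
                if (request[i][j] > work[j]) { ok = false; break; }
            if (ok) {
                for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                finish[i] = true;
                progress = true;
            }
        }
    }
    for (int i = 0; i < N; i++)                           /* step 4 */
        deadlocked[i] = !finish[i];
}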
Detection-Algorithm Usage
When, and how often, to invoke the detection algorithm depends on:
How often a deadlock is likely to occur.
How many processes will need to be rolled back (one for each disjoint cycle).
If detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph and
so we would not be able to tell which of the many deadlocked processes “caused” the deadlock.
Recovery from Deadlock: Process Termination
In choosing which process to abort, factors include:
How long the process has computed, and how much longer it needs until completion.
Resources the process has used.
Resources the process needs to complete.
How many processes will need to be terminated.
Whether the process is interactive or batch.
Recovery from Deadlock:
Resource Preemption
Selecting a victim – minimize cost.
Rollback – return to some safe state, restart process for that state.
Starvation – the same process may always be picked as the victim; include the number of rollbacks
in the cost factor.
19. Consider the following snapshot of a system: (10)
Process Scheduling in Linux
A process’s scheduling class defines which algorithm to apply. For time-sharing processes, Linux
uses a prioritized, credit-based algorithm; the crediting rule factors in both the process’s history
and its priority. This crediting system automatically prioritizes interactive or I/O-bound processes.
Linux implements the FIFO and round-robin real-time scheduling classes; in both cases, each process
has a priority in addition to its scheduling class.
The scheduler runs the process with the highest priority; for equal-priority processes, it runs the
process that has been waiting longest. FIFO processes continue to run until they either exit or
block. A round-robin process will be preempted after a while and moved to the end of the scheduling
queue, so that round-robin processes of equal priority automatically time-share between themselves.
Symmetric Multiprocessing
Linux 2.0 was the first Linux kernel to support SMP hardware; separate processes or threads can
execute in parallel on separate processors. To preserve the kernel’s nonpreemptible synchronization
requirements, SMP imposes the restriction, via a single kernel spin lock, that only one processor at
a time may execute kernel-mode code.
Scheduling is the job of allocating CPU time to different tasks within an operating system. While
scheduling is normally thought of as the running and interrupting of processes, in Linux scheduling
also includes the running of the various kernel tasks. Running kernel tasks encompasses both tasks
that are requested by a running process and tasks that execute internally on behalf of a device
driver. Later kernels added a new scheduling algorithm that is preemptive and priority-based, with
two separate priority ranges: a real-time range and a nice-value range.
Kernel Synchronization
A request for kernel-mode execution can occur in two ways:
1. A running program may request an operating system service, either explicitly via a system
call, or implicitly, for example, when a page fault occurs
2. A device driver may deliver a hardware interrupt that causes the CPU to start executing a
kernel-defined handler for that interrupt.
Kernel synchronization requires a framework that will allow the kernel’s critical sections to run
without interruption by another critical section.
Linux uses two techniques to protect critical sections:
1. Normal kernel code is nonpreemptible: when a timer interrupt is received while a process
is executing a kernel system-service routine, the kernel’s need_resched flag is set, so that the
scheduler will run once the system call has completed and control is about to be returned to
user mode.
2. The second technique applies to critical sections that occur in interrupt service routines. By
using the processor’s interrupt-control hardware to disable interrupts during a critical section,
the kernel guarantees that it can proceed without the risk of concurrent access to shared data
structures.
To avoid performance penalties, Linux’s kernel uses a synchronization architecture that allows
long critical sections to run without having interrupts disabled for the critical section’s entire duration.
Interrupt service routines are separated into a top half and a bottom half.
The top half is a normal interrupt service routine and runs with recursive interrupts disabled. The
bottom half is run, with all interrupts enabled, by a miniature scheduler that ensures bottom
halves never interrupt themselves. This architecture is completed by a mechanism for disabling
selected bottom halves while executing normal, foreground kernel code.
[Figure: interrupt protection levels]
2 Marks
1. What is a Dispatcher?
2. Define throughput?
5. Define Aging and starvation?
6. What is context switch?
7. What are the benefits of multithreaded programming?
9. What is a thread?
10. Define CPU scheduling
12. Define Medium Term Scheduler
14. What is the difference between process and thread?
15. What are the uses of job queues, ready queue and device queue?
20. What are conditions under which a deadlock situation may arise?
11 MARKS
15. Explain storage management. A system has 2 A resources, 3 B resources and 6 C resources. Five
processes, their current allocation and their maximum allocation are shown below. Is the system in a
safe state? If so, show one sequence of processes which allows the system to complete. If not,
explain why.
     Allocation    Max
     A B C         A B C
P0   0 0 2         2 0 3
P1   1 1 0         2 3 5
P2   0 0 1         1 2 3
P3   1 0 0         2 0 3
P4   0 0 2         0 1 5
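One way to check: Available = total − allocated = (2,3,6) − (2,1,5) = (0,2,1) and
Need = Max − Allocation. Feeding this snapshot to the is_safe() sketch from the Banker's-algorithm
section above (link the two files together):

#include <stdbool.h>
#include <stdio.h>

/* is_safe() as sketched in the Banker's-algorithm section above. */
bool is_safe(int available[3], int alloc[5][3], int need[5][3], int seq[5]);

int main(void) {
    int available[3] = {0, 2, 1};   /* (2,3,6) total - (2,1,5) allocated */
    int alloc[5][3] = {{0,0,2},{1,1,0},{0,0,1},{1,0,0},{0,0,2}};
    int need[5][3]  = {{2,0,1},{1,2,5},{1,2,2},{1,0,3},{0,1,3}}; /* Max-Alloc */
    int seq[5];
    if (is_safe(available, alloc, need, seq))
        printf("safe\n");
    else
        printf("unsafe: no process's Need fits in Available (0,2,1)\n");
    return 0;
}

The run reports unsafe: every process's Need exceeds Available = (0,2,1) in at least one resource
(each needs an A instance or more than one C instance), so no process is guaranteed to finish and
the system is not in a safe state.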
16. Explain Deadlock Prevention in detail?
17. What are the various address translation mechanisms used in paging?
18. Consider the following snapshot of a system: