OS-Unit 2
Process in memory
Process States
As a process executes, it changes state. The state of a process is defined in part by the
current activity of that process. A process may be in one of the following states:
1. new: The process is being created
2. running: Instructions are being executed
3. waiting: The process is waiting for some event to occur
4. ready: The process is waiting to be assigned to a processor
5. terminated: The process has finished execution
Figure 2.4 Diagram showing CPU switch from process to process
PROCESS SCHEDULING
The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization.
The objective of time sharing is to switch the CPU among processes so frequently that
users can interact with each program while it is running.
To meet these objectives, the process scheduler selects an available process (possibly from a set
of several available processes) for program execution on the CPU.
To maximize CPU use, the CPU is switched among processes quickly for time sharing.
For a single-processor system, there will never be more than one running process.
If there are more processes, the rest have to wait until the CPU is free and can be
rescheduled.
Scheduling Queues
The operating system maintains scheduling queues of processes:
1. Job queue – set of all processes in the system
2. Ready queue – set of all processes residing in main memory, ready and waiting
to execute
3. Device queues – set of processes waiting for an I/O device Processes migrate
among the various queues
Ready Queue and Various I/O Device Queues
Figure Queueing-diagram representation of process scheduling.
A process switches from the waiting state to the ready state and is then put back in the
ready queue. This cycle continues until the process terminates, at which time it is removed from
all queues and has its PCB and resources deallocated.
Schedulers
The operating system selects processes from queues for scheduling. The selection process
is carried out by the appropriate scheduler.
Short-term scheduler (or CPU scheduler) – selects from among the processes that are
ready to execute and allocates the CPU to one of them.
A process may execute for only a few milliseconds before waiting for an I/O
request.
The short-term scheduler executes at least once every 100 milliseconds.
Because of the short time between executions, it must be fast. For Ex: If it takes
10 milliseconds to decide to execute a process for 100 milliseconds, then 10/(100
+ 10) = 9 percent of the CPU is being used (wasted) simply for scheduling the
work.
Medium-term scheduler can be added if the degree of multiprogramming needs to decrease
Remove process from memory, store on disk, bring back in from disk to continue
execution: swapping
Context-switch time is pure overhead, because the system does no useful work while
switching.
OPERATIONS ON PROCESSES
The processes in most systems can execute concurrently, and they may be created and
deleted dynamically. Thus, system must provide mechanisms for:
Process creation
Process termination
Process Creation
Generally, process identified and managed via a process identifier (pid)
During the course of execution, a process may create several new processes.
The creating process is called a parent process, and
The new processes are called the children of that process.
Each of these new processes may in turn create other processes, forming a tree of
processes.
Most operating systems (including UNIX, Linux, and Windows) identify processes
according to a unique process identifier (or pid), which is typically an integer number.
Figure illustrates a typical process tree for the Linux operating system, showing the name
of each process and its pid. The init process (which always has a pid of 1) serves as the root
parent process for all user processes.
Once the system has booted, the init process can also create various user processes, such
as a web or print server, an ssh server, and the like. There are two children of init: kthreadd and
sshd.
The kthreadd process is responsible for creating additional processes that perform tasks
on behalf of the kernel (in this situation, khelper and pdflush).
The sshd process is responsible for managing clients that connect to the system by using
ssh(which is short for secure shell).
The login process is responsible for managing clients that directly log onto the system.
In this example, a client has logged on and is using the bash shell, which has been
assigned pid 8416.
Figure A tree of processes on a typical Linux system.
Resource sharing options
Parent and children share all resources
Children share subset of parent’s resources
Parent and child share no resources
When a process creates a new process, two possibilities for execution exist:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
There are also two address-space possibilities for the new process:
1. The child process is a duplicate of the parent process.
2. The child process has a new program loaded into it.
UNIX
Examples
A new process is created by the fork() system call. The new process consists of a copy of
the address space of the original process.
After a fork() system call, one of the two processes typically uses the exec() system call
to replace the process's memory space with a new program.
The exec() system call loads a binary file into memory and starts its execution. In this
manner, the two processes are able to communicate and then go their separate ways.
The parent can then create more children; or, if it has nothing else to do while the child
runs, it can issue a wait() system call to move itself off the ready queue until the termination of
the child.
Both the parent and the child continue execution at the instruction after the fork(). The
only difference is that the value of pid (the process identifier) returned by fork() in the child
process is zero, while in the parent it is the child's pid (an integer value greater than zero).
Process Termination
A process terminates when it finishes executing its final statement and asks the operating
system to delete it by using the exit () system call.
At that point, the process may return a status value (typically an integer) to its parent
process (via the wait () system call).
All the resources of the process—including physical and virtual memory, open files, and
I/O buffers—are deallocated by the operating system.
A parent may terminate the execution of one of its children for a variety of reasons, such as
these:
The child has exceeded its usage of some of the resources that it has been allocated.
The task assigned to the child is no longer required.
The parent is exiting, and the operating system does not allow a child to continue if its
parent terminates.
If a process terminates (either normally or abnormally), then all its children must also be
terminated. This phenomenon, referred to as cascading termination, is normally initiated by the
operating system.
When a process terminates, its resources are deallocated by the operating system.
However, its entry in the process table must remain there until the parent calls wait (), because
the process table contains the process’s exit status.
A process that has terminated, but whose parent has not yet called wait(), is known as a
zombie process. All processes transition to this state when they terminate, but generally they
exist as zombies only briefly. Once the parent calls wait(), the process identifier of the zombie
process and its entry in the process table are released.
If a parent terminates without invoking wait(), its child processes are left as orphans.
THREADS- OVERVIEW
A thread is a basic unit of CPU utilization:
It comprises a thread ID, a program counter, a register set, and a stack.
It shares with other threads belonging to the same process its code section, data
section, and other operating-system resources, such as open files and signals.
Motivation
Most software applications that run on modern computers are multithreaded.
An application typically runs as a separate process with several threads of control.
If a process has multiple threads of control, it can perform more than one task at a
time. Figure 2.10 illustrates the difference between a traditional single-threaded
process and a multithreaded process.
For example:
A web browser might have one thread display images or text while another thread
retrieves data from the network.
A word processor may have a thread for displaying graphics,
another thread for responding to keystrokes from the user, and
a third thread for performing spelling and grammar checking in the background.
In a single-threaded web server, when the server receives a request, it creates a separate
process to service that request. This process-creation method was in common use before threads
became popular.
Process creation is
Time consuming and
Resource intensive.
It is generally more efficient to use one process that contains multiple threads.
If the web-server process is multithreaded, the server will create a separate thread that
listens for client requests. When a request is made, rather than creating another process, the
server creates a new thread to service the request and resumes listening for additional requests.
Threads also play a vital role in remote procedure call (RPC) systems.
RPC servers are multithreaded. When a server receives a message, it services the
message using a separate thread. This allows the server to service several concurrent requests.
Figure Multithreaded server architecture.
Finally, most operating-system kernels are multithreaded.
Several threads operate in the kernel, and each thread performs a specific task, such as
managing devices, managing memory, or interrupt handling.
Benefits
1. Responsiveness – Multithreading an interactive application may allow continued
execution if part of process is blocked, thereby increasing responsiveness to the user.
Especially important for user interfaces.
2. Resource Sharing – threads share resources of process, easier than shared memory
or message passing
3. Economy – cheaper than process creation, thread switching lower overhead than
context switching
4. Scalability – process can take advantage of multiprocessor architectures, where
threads may be running in parallel on different processing cores. A single-threaded
process can run on only one processor, regardless how many are available.
MULTITHREADING MODELS
Threads may be used in the user level or kernel level
User threads - management done by user-level threads library
Three primary thread libraries:
a. POSIX Pthreads
b. Windows threads
c. Java threads
Kernel threads - Supported by the Kernel
Examples – virtually all general purpose operating systems, including:
a. Windows
b. Solaris
c. Linux
d. Tru64 UNIX
e. Mac OS X
Multithreading Models
i. Many-to-One
ii. One-to-One
iii. Many-to-Many
i. Many-to-One
Many user-level threads mapped to single kernel thread
One thread blocking causes all to block
Multiple threads may not run in parallel on a multicore system because only one may be in the
kernel at a time
Few systems currently use this model.
ii. One-to-One
Each user-level thread maps to kernel thread
Creating a user-level thread creates a kernel thread
More concurrency than many-to-one
Number of threads per process sometimes restricted due to overhead
Examples:
a. Windows
b. Linux
c. Solaris 9 and later
iii. Many-to-Many
Allows many user-level threads to be mapped to many kernel threads
Developers can create as many user threads as necessary, and the corresponding
kernel threads can run in parallel on a multiprocessor
When a thread performs a blocking system call, the kernel can schedule another thread
for execution
CPU SCHEDULING
1. Basic Concepts
Maximum CPU utilization obtained with multiprogramming
CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU
execution and I/O wait
CPU burst followed by I/O burst
CPU burst distribution is of main concern
CPU Scheduler
Short-term scheduler selects from among the processes in ready queue, and allocates
the CPU to one of them
Queue may be ordered in various ways
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready state
4. Terminates
Scheduling under 1 and 4 is nonpreemptive
All other scheduling is preemptive
Consider access to shared data
Consider preemption while in kernel mode
Consider interrupts occurring during crucial OS activities
Dispatcher
Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that program
Dispatch latency – time it takes for the dispatcher to stop one process and start another
running
SCHEDULING CRITERIA
CPU utilization – keep the CPU as busy as possible
Throughput – # of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time it takes from when a request was submitted until the
first response is produced, not output (for time-sharing environment)
SCHEDULING ALGORITHMS
1. First Come, First-Served Scheduling (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin (RR)
5. Multilevel Queue
6. Multilevel Feedback Queue
1. First-Come, First-Served (FCFS) Scheduling
Example (burst times read from the chart boundaries: P1 = 24, P2 = 3, P3 = 3):
Gantt chart for arrival order P1, P2, P3:
| P1 | P2 | P3 |
0    24   27   30
Gantt chart for arrival order P2, P3, P1:
| P2 | P3 | P1 |
0    3    6    30
2. Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst
Use these lengths to schedule the process with the shortest time
SJF is optimal – gives minimum average waiting time for a given set of processes
The difficulty is knowing the length of the next CPU request
Could ask the user
SJF scheduling chart (non-preemptive; burst times P1 = 6, P2 = 8, P3 = 7, P4 = 3, read
from the chart boundaries, all arriving at time 0):
| P4 | P1 | P3 | P2 |
0    3    9    16   24
Preemptive SJF (shortest-remaining-time-first) chart (arrival times inferred: P1 = 8 at
time 0, P2 = 4 at 1, P3 = 9 at 2, P4 = 5 at 3):
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
EXAMPLES
i. Non-preemptive SJF
Process Burst Time
P1 7
P2 3
P3 4
The Gantt chart for SJF (non-preemptive, all arriving at time 0) is:
| P2 | P3 | P1 |
0    3    7    14
Example 2:
Process Arrival Time Burst Time
P1 0.0 6
P2 0.0 4
P3 0.0 1
P4 0.0 5
iii) SJF (non-preemptive, varied arrival times)
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
3. Priority Scheduling
A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority (smallest integer =
highest priority)
Priority scheduling can be preemptive or non-preemptive.
SJF is priority scheduling where priority is the inverse of predicted next CPU burst
time.
Problem: Starvation – low-priority processes may never execute
Solution: Aging – as time progresses, increase the priority of the process
Example 1
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Priority scheduling Gantt chart (highest priority first: P2, P5, P1, P3, P4):
| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19
4. Round Robin (RR)
Each process gets a small unit of CPU time (a time quantum q). If there are n processes
in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time
in chunks of at most q time units at once. No process waits more than (n – 1)q time units.
Timer interrupts every quantum to schedule next process
Performance:
q large ⇒ RR behaves like FIFO
q small ⇒ q must be large with respect to context-switch time, otherwise overhead is too
high
Example (burst times P1 = 24, P2 = 3, P3 = 3; quantum q = 4) – Gantt chart:
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
The Gantt chart (burst times P1 = 53, P2 = 17, P3 = 68, P4 = 24; quantum q = 20) is:
| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
0   20   37   57   77   97  117  121  134  154  162
Typically, RR gives higher average turnaround time than SJF, but better response time
Average waiting time = ([(0 – 0) + (77 – 20) + (121 – 97)] + (20 – 0) + [(37 – 0) + (97 –
57) + (134 – 117)] + [(57 – 0) + (117 – 77)]) / 4
= ((0 + 57 + 24) + 20 + (37 + 40 + 17) + (57 + 40)) / 4
= (81 + 20 + 94 + 97) / 4
= 292 / 4 = 73
Average turnaround time = (134 + 37 + 162 + 121) / 4 = 113.5
Figure Multilevel queue scheduling.
Critical Section
General structure of process Pi
Requirements for the solution of Critical-Section Problem
1. Mutual Exclusion- If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections
2. Progress- If no process is executing in its critical section and there exist some processes
that wish to enter their critical section, then the selection of the processes that will enter
the critical section next cannot be postponed indefinitely
3. Bounded Waiting- A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted
Assume that each process executes at a nonzero speed
No assumption concerning the relative speed of the n processes.
Critical-Section Handling in OS
Two approaches, depending on whether the kernel is preemptive or non-preemptive:
1. Preemptive – allows preemption of a process when running in kernel mode
2. Non-preemptive – a process runs until it exits kernel mode, blocks, or voluntarily
yields the CPU; essentially free of race conditions in kernel mode.
Synchronization Hardware
Many systems provide hardware support for implementing the critical section code.
All solutions below based on idea of locking
Protecting critical regions via locks
Uniprocessors – could disable interrupts
Currently running code would execute without preemption
Generally too inefficient on multiprocessor systems
Operating systems using this not broadly scalable
Modern machines provide special atomic hardware instructions
Atomic = non-interruptible
Either test memory word and set value
Or swap contents of two memory words
Using the hardware solutions for the critical-section problem, the following requirements
have been met:
1. Mutual exclusion is preserved
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
MUTEX LOCKS
The hardware-based solutions to the critical-section problem are complicated for
application programmers to use. Instead, operating systems provide higher-level software tools;
the simplest of these is the mutex lock. (In fact, the term mutex is short for mutual exclusion.)
Mutex locks are used to protect critical regions and thus prevent race conditions.
do {
    acquire lock
        critical section
    release lock
        remainder section
} while (true);
Calls to either acquire() or release() must be performed atomically. Thus, mutex locks are often
implemented using one of the hardware mechanisms.
SEMAPHORE
A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations:
wait () - originally termed P (from the Dutch proberen, “to test”);
signal () - originally called V (from verhogen,“to increment”).
All modifications to the integer value of the semaphore in the wait() and signal()
operations must be executed indivisibly; i.e., when one process modifies the semaphore value,
no other process can simultaneously modify that same semaphore value.
Semaphore Usage
Operating systems distinguish between
1. Counting semaphore
2. Binary Semaphore
Counting semaphores
1. The value of a counting semaphore can range over an unrestricted domain; it is used to
control access to a resource with a finite number of instances.
2. The semaphore is initialized to the number of resources available.
3. When a process uses a resource, it performs a wait() operation; when a process releases
a resource, it performs a signal() operation.
Binary semaphores
1. The value of a binary semaphore can range only between 0 and 1.
2. Thus, binary semaphores behave similarly to mutex locks.
Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked signal(synch),
which is after statement S1 has been executed.
Semaphore Implementation:
It must guarantee that no two processes can execute the wait () and signal () on the same
semaphore at the same time.
The implementation becomes the critical section problem where the wait and signal code
are placed in the critical section.
While a process is in its critical section, any other process that tries to enter its critical
section must loop continuously in the call to acquire(); i.e., this implementation has busy
waiting in its critical-section entry code.
This type of mutex lock is also called a spin lock because the process “spins” while
waiting for the lock to become available. But the implementation code is short.
There will be little busy waiting if critical section rarely occupied.
This is not a good solution because the applications may spend lots of time in critical
sections.
Semaphore Implementation with no busy waiting:
With each semaphore there is an associated waiting queue.
Each semaphore has two data items:
value (of type integer)
pointer to a list of processes (the waiting queue)
Two operations:
Block – place the process invoking the operation on the appropriate waiting queue
Wakeup – remove one of processes in the waiting queue and place it in the ready queue
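The queue-based definition above is conventionally written in C-like pseudocode; block() and wakeup() are kernel primitives, not library calls, so this is a sketch rather than compilable code:

```
typedef struct {
    int value;
    struct process *list;   /* waiting queue */
} semaphore;

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        /* add this process to S->list */
        block();            /* suspend the caller */
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        /* remove a process P from S->list */
        wakeup(P);          /* move P to the ready queue */
    }
}
```

A negative semaphore value here counts the number of processes waiting on it.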
Deadlocks
A deadlock occurs when two or more processes are waiting indefinitely for an event that
can be caused only by one of the waiting processes. When such a state is reached, these
processes are said to be deadlocked.
Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0 executes
wait(Q), it must wait until P1 executes signal(Q).
Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S).
Since these signal() operations cannot be executed, P0 and P1 are deadlocked.
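Laid out side by side, the scenario reads as follows (textbook-style pseudocode; S and Q are semaphores initialized to 1):

```
      P0                 P1
   wait(S);           wait(Q);
   wait(Q);           wait(S);
     ...                ...
   signal(S);         signal(Q);
   signal(Q);         signal(S);
```

Each process holds one semaphore while waiting for the other, which is exactly the hold-and-wait plus circular-wait pattern described in the next section.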
DEADLOCKS
In a multiprogramming environment, several processes may compete for a finite number
of resources. A process requests resources; if the resources are not available at that time, the
process enters a waiting state. Sometimes, a waiting process is never again able to change state,
because the resources it has requested are held by other waiting processes. This situation is called
a deadlock.
System Model
System consists of resources
Resource types R1, R2, . . ., Rm
CPU cycles, memory space, I/O devices
Each resource type Ri has Wi instances.
Each process utilizes a resource as follows:
request
use
release
Deadlock Characterization:
Deadlock can arise if four conditions hold simultaneously.
1. Mutual exclusion: only one process at a time can use a resource
2. Hold and wait: a process holding at least one resource is waiting to acquire additional
resources held by other processes
3. No preemption: a resource can be released only voluntarily by the process holding it,
after that process has completed its task
4. Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …,
Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is
held by P0.
Deadlocks can occur via system calls, locking, etc.
Resource-Allocation Graph
A set of vertices V and a set of edges E.
V is partitioned into two types:
P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
R = {R1, R2, …, Rm}, the set consisting of all resource types in the system
request edge – directed edge Pi → Rj (Pi requests an instance of Rj)
assignment edge – directed edge Rj → Pi (Pi is holding an instance of Rj)
Resource Allocation Graph With A Deadlock
Basic Facts
If graph contains no cycles ⇒ no deadlock
If graph contains a cycle ⇒
if only one instance per resource type, then deadlock
if several instances per resource type, possibility of deadlock
Ignore the problem and pretend that deadlocks never occur in the system; used by
most operating systems, including UNIX.
DEADLOCK PREVENTION
Restrain the ways request can be made
Mutual Exclusion – not required for sharable resources (e.g., read-only files); must
hold for non-sharable resources
Hold and Wait – must guarantee that whenever a process requests a resource, it does
not hold any other resources
Require process to request and be allocated all its resources
Before it begins execution, or allows process to request resources only when the
process has none allocated to it.
Low resource utilization; starvation possible
No Preemption
If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, then all resources currently being held are released
Preempted resources are added to the list of resources for which the process is
waiting
Process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting
Circular Wait - impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration
DEADLOCK AVOIDANCE
Requires that the system has some additional a priori information available
Simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need
The deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that there can never be a circular-wait condition
Resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes
Safe State
When a process requests an available resource, system must decide if immediate
allocation leaves the system in a safe state
System is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the
processes in the system such that for each Pi, the resources that Pi can still
request can be satisfied by the currently available resources plus the resources held by all
the Pj, with j < i
That is:
If the resources Pi needs are not immediately available, then Pi can wait until all Pj have
finished
When Pj is finished, Pi can obtain needed resources, execute, return allocated resources,
and terminate
Basic Facts
If a system is in a safe state ⇒ no deadlocks
If a system is in an unsafe state ⇒ possibility of deadlock
Avoidance ⇒ ensure that a system will never enter an unsafe state.
Safe, Unsafe, Deadlock State
Avoidance Algorithms
Single instance of a resource type
Use a resource-allocation graph
Multiple instances of a resource type
Use the banker’s algorithm
Safety Algorithm
Resource-Request Algorithm
Unsafe State in Resource-Allocation Graph
Banker’s Algorithm
Multiple instances
Each process must a priori claim maximum use
When a process requests a resource it may have to wait
When a process gets all its resources it must return them in a finite amount of time
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize:
Work = Available
Finish[i] = false for i = 0, 1, …, n – 1
2. Find an i such that both:
(a) Finish[i] == false
(b) Needi ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2
4. If Finish[i] == true for all i, then the system is in a safe state
The system is in a safe state since the sequence <P1, P3, P4, P2, P0> satisfies safety criteria
DEADLOCK DETECTION
Allow system to enter deadlock state
Detection algorithm
Recovery scheme
Single Instance of Each Resource Type
Maintain wait-for graph
Nodes are processes
Pi → Pj if Pi is waiting for Pj
Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle,
there exists a deadlock
An algorithm to detect a cycle in a graph requires on the order of n² operations, where n is
the number of vertices in the graph
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[i] = false; otherwise,
Finish[i] = true
2. Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state.
Moreover, if Finish[i] == false, then Pi is deadlocked
Can reclaim resources held by process P0, but insufficient resources to fulfill other
processes' requests
Deadlock exists, consisting of processes P1, P2, P3, and P4
Detection-Algorithm Usage
When, and how often, to invoke depends on:
How often a deadlock is likely to occur?
How many processes will need to be rolled back? (one for each disjoint cycle)
If detection algorithm is invoked arbitrarily, there may be many cycles in the resource
graph and so we would not be able to tell which of the many deadlocked processes
“caused” the deadlock.
RECOVERY
QUESTION BANK
PART-A
1. What is the difference between a program and a process?
Program is passive entity stored on disk (executable file), process is active
Program becomes process when executable file loaded into memory.
2. With the help of a state transition diagram, List out the various states of a process.
As a process executes, it changes state
1. new: The process is being created
2. running: Instructions are being executed
3. waiting: The process is waiting for some event to occur
4. ready: The process is waiting to be assigned to a processor
5. terminated: The process has finished execution
1. Job queue – set of all processes in the system
2. Ready queue – set of all processes residing in main memory, ready and
waiting to execute
3. Device queues – set of processes waiting for an I/O device
Processes migrate among the various queues.
5. What do you mean by queuing diagram representation?
A common representation of process scheduling is a queueing diagram. A queueing
diagram represents queues, resources, and flows.
12. With the help of block diagrams, explain the flow of control between
two processes during process switching?
1. Many-to-One
2. One-to-One
3. Many-to-Many
23. Give the advantages of the Many-to-many threading model?
Developers can create as many user threads as necessary, and the corresponding kernel
threads can run in parallel on a multiprocessor
When a thread performs a blocking system call, the kernel can schedule another thread
for execution
24. Why we need process synchronization?
Processes can execute concurrently
May be interrupted at any time, partially completing execution
Concurrent access to shared data may result in data inconsistency
Maintaining data consistency requires process synchronization mechanisms to ensure the
orderly execution of cooperating processes.
25. Define Race condition.
A situation in which several processes access and manipulate the same data concurrently
and the outcome of the execution depends on the particular order in which the access takes place
is called race condition.
26. What do you mean by critical section problem?
Consider system of n processes {p0, p1, … pn-1}
Each process has critical section segment of code
Process may be changing common variables, updating table, writing file, etc
When one process in critical section, no other may be in its critical section
Critical section problem is to design protocol to solve this Each process must ask
permission to enter critical section in entry section, may follow critical section with exit
section, then remainder section
General structure:
27. What are the requirement that must required for Critical section
algorithms. Or List out the requirements for the solution of CS problem?
1. Mutual Exclusion
2. Progress
3. Bounded Waiting
28. What is meant by mutual exclusion?
If process Pi is executing in its critical section, then no other processes can be executing
in their critical sections.
29. Give two hardware instructions and their definitions which can be
used for implementing mutual exclusion
- Test and set
- Compare and swap
30. What is meant by semaphore? List out the operations used.
Synchronization tool that provides more sophisticated ways (than Mutex locks) for
process to synchronize their activities.
Semaphore S – integer variable
Can only be accessed via two indivisible (atomic) operations
wait() and signal()
Originally called P() and V()
31. Differentiate preemptive and non-preemptive approaches
Preemptive – allows preemption of process when running in kernel mode
Non-preemptive – runs until exits kernel mode, blocks, or voluntarily yields the
CPU; essentially free of race conditions in kernel mode
32. Give the types of semaphores?
1. Counting semaphore
2. Binary semaphore
33. Define busy waiting.
While a process is in its critical section, any other process that tries to enter its critical
section must loop continuously in the while loop. This type of lock is also called a spin lock
because the process "spins" while waiting for the lock to become available.
34. How can be busy waiting avoided?
With each semaphore there is an associated waiting queue
Two operations:
block – place the process invoking the operation on the appropriate waiting queue
wakeup – remove one of processes in the waiting queue and place it in the ready
queue
35. Define starvation or indefinite blocking
A process may never be removed from the semaphore queue in which it is suspended
36. What is meant by priority inversion?
Priority Inversion – Scheduling problem when lower-priority process holds a lock
needed by higher-priority process
Solved via priority-inheritance protocol
37. What is a mutex? What are the locks used and list out its drawbacks?
A mutex lock is a software tool for solving the critical-section problem. A critical section
is protected by first calling acquire() on the lock and then calling release() on it. A Boolean
variable indicates whether the lock is available or not.
Drawbacks: this type of lock is a spinlock and requires busy waiting
38. List out the Classical Problems of Synchronization.
Classical problems used to test newly-proposed synchronization schemes
1. Bounded-Buffer Problem
2. Readers and Writers Problem
3. Dining-Philosophers Problem
39. Declare the structure for monitors
{
// shared variable declarations
Function F1 (…) { …. }
Function Fn (…) {……}
Initialization code (…) { … }
}
Two operations are allowed on a condition variable x:
x.wait() – a process that invokes the operation is suspended until another process invokes x.signal()
x.signal() – resumes exactly one of the processes (if any) that invoked x.wait()
If no process is suspended on the variable, then x.signal() has no effect
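Most languages lack built-in monitors, but the structure can be approximated with a lock plus a condition variable. The sketch below uses a hypothetical `BoundedCounter` class: one lock gives mutual exclusion over all methods (the monitor property), and `threading.Condition` plays the role of x with x.wait()/x.signal():

```python
import threading

class BoundedCounter:
    """Monitor-style class: one lock guards every method; a condition
    variable suspends callers when the counter is at its limit."""
    def __init__(self, limit):
        self.limit = limit
        self.count = 0
        self._lock = threading.Lock()
        self.not_full = threading.Condition(self._lock)  # condition x

    def increment(self):
        with self._lock:                 # enter the monitor
            while self.count >= self.limit:
                self.not_full.wait()     # x.wait(): suspend the caller
            self.count += 1

    def decrement(self):
        with self._lock:
            self.count -= 1
            self.not_full.notify()       # x.signal(): resume one waiter
```

As in the definition above, `notify()` with no suspended waiters simply has no effect.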
40. Define CPU scheduling.
1. CPU scheduler or Short-term scheduler selects from among the processes in ready
queue, and allocates the CPU to one of them
a. Queue may be ordered in various ways
2. CPU scheduling decisions may take place when a process:
a. Switches from running to waiting state
b. Switches from running to ready state
c. Switches from waiting to ready state
d. Terminates
41. What is a Dispatcher?
Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that program
42. What is dispatch latency?
Time it takes for the dispatcher to stop one process and start another running
43. What are the various scheduling criteria for CPU scheduling?
CPU utilization – keep the CPU as busy as possible
Throughput – # of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time it takes from when a request was submitted until the
first response is produced, not output (for time-sharing environment)
44. List out the various scheduling algorithms.
Scheduling Algorithms
1. First Come, First-Served Scheduling (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin (RR)
5. Multilevel Queue
6. Multilevel Feedback Queue
45. What is FCFS Scheduling?
In this algorithm, the process that requests the CPU first is allocated the CPU first. This
implementation of the FCFS policy is easily managed with a FIFO queue.
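With FCFS and all processes arriving at time 0, each process waits for the total burst time of the processes ahead of it in the FIFO queue. The burst times in the sketch below are hypothetical, textbook-style figures:

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each process under FCFS = sum of the bursts of
    all processes that arrived earlier (all arrive at time 0)."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # this process waits for everything before it
        elapsed += burst
    return waits

# Hypothetical bursts: P1=24, P2=3, P3=3
# fcfs_waiting_times([24, 3, 3]) -> [0, 24, 27], average wait 17
```

Reordering the same bursts as [3, 3, 24] gives waits [0, 3, 6] and an average of 3 - the convoy effect of the next question in one line.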
46. What is drawback of FCFS?
Convoy effect – short processes get stuck waiting behind a long process, e.g., one CPU-bound
process followed by many I/O-bound processes
47. What is aging?
The problem of starvation (low-priority processes may never execute) is solved using
aging, in which the priority of a process is gradually increased as time progresses
48. Define deadlock.
In a multiprogramming environment, several processes may compete for a finite number
of resources. A process requests resources; if the resources are not available at that time, the
process enters a waiting state. Sometimes, a waiting process is never again able to change state,
because the resources it has requested are held by other waiting processes. This situation is called
a deadlock.
49. What are conditions under which a deadlock situation may arise?
1. Mutual exclusion: only one process at a time can use a resource
2. Hold and wait: a process holding at least one resource is waiting to acquire additional
resources held by other processes
3. No preemption: a resource can be released only voluntarily by the process holding it,
after that process has completed its task
4. Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …,
Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is
held by P0.
50. What is a resource allocation graph (RAG)?
A set of vertices V and a set of edges E.
V is partitioned into two types:
P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
R = {R1, R2, …, Rm}, the set consisting of all resource types in the system
request edge – directed edge Pi → Rj
assignment edge – directed edge Rj → Pi
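A cycle in the RAG is necessary for deadlock (and, with single-instance resources, also sufficient), so the graph can be checked with an ordinary depth-first search for back edges. The `has_cycle` function and the example graph below are illustrative, not from the source:

```python
def has_cycle(edges):
    """Detect a cycle in a directed resource-allocation graph.
    edges: dict mapping each vertex to the vertices it points to
    (request edges Pi -> Rj and assignment edges Rj -> Pi together)."""
    WHITE, GRAY, BLACK = 0, 1, 2    # unvisited / on current path / done
    color = {}

    def dfs(v):
        color[v] = GRAY
        for w in edges.get(v, ()):
            c = color.get(w, WHITE)
            if c == GRAY:           # back edge into the current path: cycle
                return True
            if c == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color.get(v, WHITE) == WHITE and dfs(v) for v in edges)

# Hypothetical graph: P1 holds R1 and requests R2; P2 holds R2 and requests R1
# has_cycle({"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}) -> True
```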
51. What are the methods for handling deadlocks?
1. Deadlock prevention
2. Deadlock avoidance
3. Deadlock detection
4. Deadlock recovery
In the resource-allocation-graph algorithm for deadlock avoidance:
A claim edge converts to a request edge when the process requests the resource
A request edge converts to an assignment edge when the resource is allocated to the
process
When a resource is released by a process, the assignment edge reconverts to a claim edge
Resources must be claimed a priori in the system
PART-B
1. Write short notes on
a) Process Scheduling
b) Queues
Ans: a. Definition - Process Scheduling
Types of Schedulers- STS, LTS, MTS
Queuing diagram
b. Types of queues with diagrams
2. Write short notes on Operations on processes.
a. Process Creation
b. Process termination
3. Explain about IPC
a. Communication models
b. Direct Vs Indirect
c. Synchronization
d. Buffering
4. Write short notes on threading issues
a. Many to one
b. One to one
c. Many to many
d. Two level
5. Explain about Windows 7 thread and SMP Management
a. Diagram
b. Process and thread object, attributes, services
c. Thread states with diagram
d. SMP Management- hard and soft affinity
6. Explain about critical section problem
a. Definition
b. General structure
c. Requirements for the solutions
d. Two process solutions
e. Synchronization Hardware- test and set, compare and swap
7. Explain about semaphores
a. Definition
b. Two operations- wait, signal
c. Implementations
d. Problems
8. Explain about Monitors
a. Definition
b. Implementation
9. Explain about deadlock avoidance
Definition
a. Single instance of a resource type
b. Use a resource-allocation graph
c. Multiple instances of a resource type
d. Use the banker’s algorithm
e. Safety Algorithm
f. Resource-Request Algorithm
10. Explain about deadlock prevention
How the necessary 4 conditions denied
11. Explain about deadlock detection
a. Allow system to enter deadlock state
b. Detection algorithm
c. Recovery scheme
12. Explain about deadlock recovery
a. Recovery from Deadlock: Process Termination
b. Recovery from Deadlock: Resource Preemption