
UNIT II - PROCESS MANAGEMENT

Processes – Process Concept, Process Scheduling; Threads Overview – Multithreading Models; CPU Scheduling – Scheduling Criteria, Scheduling Algorithms; Process Synchronization – Critical-Section Problem, Mutex Locks, Semaphores; Deadlock – System Model, Deadlock Prevention, Avoidance and Recovery.
PROCESSES AND PROCESS CONCEPT
An operating system executes a variety of programs:
 Batch system – jobs
 Time-shared systems – user programs or tasks
The textbook uses the terms job and process almost interchangeably.

Process – a program in execution; process execution must progress in sequential fashion


A process has multiple parts:
1. The program code, also called the text section
2. Current activity, including the program counter and processor registers
3. Stack containing temporary data (function parameters, return addresses, local variables)
4. Data section containing global variables
5. Heap containing memory dynamically allocated during run time
 A program is a passive entity stored on disk (executable file); a process is active
 A program becomes a process when an executable file is loaded into memory
 Execution of a program is started via GUI mouse clicks, command-line entry of its name, etc.
 One program can be several processes
 Consider multiple users executing the same program

Figure Process in memory.

Process States
As a process executes, it changes state. The state of a process is defined in part by the
current activity of that process. A process may be in one of the following states:
1. new: The process is being created
2. running: Instructions are being executed
3. waiting: The process is waiting for some event to occur
4. ready: The process is waiting to be assigned to a processor
5. terminated: The process has finished execution

Figure Process states

Process Control Block (PCB)


Each process is represented in the operating system by a process control block (PCB)—
also called a task control block. A PCB is shown in Figure.
1. Process state – new, ready, running, waiting, terminated, etc.
2. Program counter – the location (address) of the next instruction to execute
3. CPU registers – contents of all process-centric registers
4. CPU scheduling information- priorities, scheduling queue pointers
5. Memory-management information – memory allocated to the process. This
information may include such items as the value of the base and limit registers and
the page tables, or the segment tables, depending on the memory system used by the
operating system
6. Accounting information – the amount of CPU and real time used, time limits,
account numbers, job or process numbers, and so on.
7. I/O status information – I/O devices allocated to process, list of open files and so
on.

Figure Process control block (PCB)
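A PCB can be pictured concretely as a C structure. The sketch below is illustrative only (the field names are invented for this example; a real kernel structure, such as Linux's task_struct, has many more fields):

#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int pid;                   /* process identifier */
    proc_state_t state;        /* new, ready, running, waiting, terminated */
    uint64_t program_counter;  /* saved address of the next instruction */
    uint64_t registers[32];    /* saved contents of the CPU registers */
    int priority;              /* CPU-scheduling information */
    void *page_table;          /* memory-management information */
    uint64_t cpu_time_used;    /* accounting information */
    int open_files[16];        /* I/O status information */
    struct pcb *next;          /* link used when the PCB sits on a scheduling queue */
} pcb_t;

The next pointer is what allows the ready queue to be stored as a linked list of PCBs, as described in the scheduling section below.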

Figure 2.4 Diagram showing CPU switch from process to process

PROCESS SCHEDULING
The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization.
The objective of time sharing is to switch the CPU among processes so frequently that
users can interact with each program while it is running.
To meet these objectives, the process scheduler selects an available process (possibly from a set
of several available processes) for program execution on the CPU.
 To maximize CPU use, quickly switch processes onto the CPU for time sharing
 For a single-processor system, there will never be more than one running process.
 If there are more processes, the rest have to wait until the CPU is free and can be
rescheduled.
Scheduling Queues
 The OS maintains scheduling queues of processes:
1. Job queue – set of all processes in the system
2. Ready queue – set of all processes residing in main memory, ready and waiting
to execute
3. Device queues – set of processes waiting for an I/O device
Processes migrate among the various queues

Ready Queue and Various I/O Device Queues

Figure The ready queue and various I/O device queues.


As processes enter the system, they are put into a job queue, which consists of all
processes in the system.
The processes that are residing in main memory and are ready and waiting to execute are
kept on a list called the ready queue. This queue is generally stored as a linked list.
A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB
includes a pointer field that points to the next PCB in the ready queue.
The system also includes other queues. When a process is allocated the CPU, it executes
for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event,
such as the completion of an I/O request.
Suppose the process makes an I/O request to a shared device, such as a disk. Since there
are many processes in the system, the disk may be busy with the I/O request of some other
process.
The process therefore may have to wait for the disk. The list of processes waiting for a
particular I/O device is called a device queue. Each device has its own device queue.

Representation of Process Scheduling


A common representation of process scheduling is a queueing diagram. Queueing diagram
represents queues, resources, flows.

Figure Queueing-diagram representation of process scheduling.

Two types of queues are present:


 Ready queue and a set of device queues.
 The circles represent the resources that serve the queues, and the arrows indicate the flow
of processes in the system.
 A new process is initially put in the ready queue. It waits there until it is selected for
execution, or dispatched.
 Once the process is allocated the CPU and is executing, one of several events could
occur:
 The process could issue an I/O request and then be placed in an I/O queue.
 The process could create a new child process and wait for the child’s termination.
 The process could be removed by force from the CPU, as a result of an interrupt, and be
put back in the ready queue.

The process switches from the waiting state to the ready state and is then put back in the ready queue. This cycle continues until the process terminates, at which time it is removed from all queues and has its PCB and resources deallocated.

Schedulers
The operating system selects processes from queues for scheduling. The selection process
is carried out by the appropriate scheduler.

Two types of schedulers:


1. Short Term Schedulers (STS) or CPU Scheduler
2. Long Term Schedulers (LTS) or Job Scheduler

1. Short-term scheduler (or) CPU scheduler – selects which process should be


executed next and allocates CPU
 Sometimes the only scheduler in a system
 Short-term scheduler is invoked frequently (milliseconds), so it must be fast

 Selects from among the processes that are ready to execute and
 allocates the CPU to one of them.
 A process may execute for only a few milliseconds before waiting for an I/O
request.
 Executes at least once every 100 milliseconds.
 Because of the short time between executions, it must be fast. For example, if it takes
10 milliseconds to decide to execute a process for 100 milliseconds, then 10/(100
+ 10) = 9 percent of the CPU is being used (wasted) simply for scheduling the
work.

2. Long-term scheduler (or) job scheduler – selects which processes should be


brought into the ready queue
 Long-term scheduler is invoked infrequently (seconds or minutes), so it may be slow
 The long-term scheduler controls the degree of multiprogramming
In general, most processes can be described as
 Either I/O bound or
 CPU bound.
 I/O-bound process – spends more time doing I/O than computations, many short
CPU bursts
 CPU-bound process – spends more time doing computations; few very long CPU
bursts
Long-term scheduler strives for good process mix

Addition of Medium Term Scheduling


The long-term scheduler selects a good process mix of I/O-bound and CPU-bound
processes.
 If all processes are I/O bound - the ready queue will almost always be empty, and the
short-term scheduler will have little to do.
 If all processes are CPU bound - the I/O waiting queue will almost always be empty,
devices will go unused, and again the system will be unbalanced.

Some operating systems, such as time-sharing systems, may introduce an additional,


intermediate level of scheduling called medium-term scheduler.
The key idea behind a medium-term scheduler is that it can be advantageous to remove a process from memory (and from active contention for the CPU) and thus reduce the degree of multiprogramming.
Later, the process can be reintroduced into memory, and its execution can be continued
where it left off. This scheme is called swapping.
The process is swapped out, and is later swapped in, by the medium-term scheduler.
Swapping may be necessary to improve the process mix.

A medium-term scheduler can be added if the degree of multiprogramming needs to decrease
 Remove process from memory, store on disk, bring back in from disk to continue
execution: swapping

Figure Addition of medium-term scheduling to the queueing diagram.

Multitasking in Mobile Systems


 Some mobile systems (e.g., early versions of iOS) allow only one process to run, with the others suspended
 Due to screen real estate and user-interface limits, iOS provides:
 A single foreground process – controlled via the user interface
 Multiple background processes – in memory, running, but not on the display, and with limits
 Limits include a single short task, receiving notification of events, and specific long-running tasks such as audio playback
 Android runs foreground and background, with fewer limits
 Background process uses a service to perform tasks
 Service can keep running even if background process is suspended
 Service has no user interface, small memory use
Context Switch
 Switching the CPU to another process requires performing a state save of the current
process and a state restore of a different process. This task is known as a context
switch i.e., When CPU switches to another process, the system must save the state of
the old process and load the saved state for the new process via a context switch
 Context of a process represented in the PCB

Context-switch time is pure overhead, because the system does no useful work while
switching.

Switching speed varies from machine to machine, depending on


 The memory speed,
 The number of registers that must be copied, and
 The existence of special instructions (such as a single instruction to load or store all
registers).
The more complex the OS and the PCB, the longer the context switch.

OPERATIONS ON PROCESSES
 The processes in most systems can execute concurrently, and they may be created and
deleted dynamically. Thus, system must provide mechanisms for:
 Process creation
 Process termination

Process Creation
 Generally, process identified and managed via a process identifier (pid)
 During the course of execution, a process may create several new processes.
 The creating process is called a parent process, and
 The new processes are called the children of that process.
 Each of these new processes may in turn create other processes, forming a tree of
processes.
 Most operating systems (including UNIX, Linux, and Windows) identify processes
according to a unique process identifier (or pid), which is typically an integer number.

Figure illustrates a typical process tree for the Linux operating system, showing the name
of each process and its pid. The init process (which always has a pid of 1) serves as the root
parent process for all user processes.
Once the system has booted, the init process can also create various user processes, such
as a web or print server, an ssh server, and the like. There are two children of init—kthreadd and sshd.

 The kthreadd process is responsible for creating additional processes that perform tasks
on behalf of the kernel (in this situation, khelper and pdflush).
 The sshd process is responsible for managing clients that connect to the system by using
ssh(which is short for secure shell).

The login process is responsible for managing clients that directly log onto the system.
In this example, a client has logged on and is using the bash shell, which has been
assigned pid 8416.

Figure A tree of processes on a typical Linux system.
 Resource sharing options
 Parent and children share all resources
 Children share subset of parent’s resources
 Parent and child share no resources

When a process creates a new process, two possibilities for execution exist:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.

There are also two address-space possibilities for the new process:
1. The child process is a duplicate of the parent process.
2. The child process has a new program loaded into it.

UNIX Examples
A new process is created by the fork() system call; the new process consists of a copy of the address space of the original process.

Figure Process creation using the fork () system call

After a fork () system call, one of the two processes typically uses the exec () system call
to replace the process’s memory space with a new program.

The exec () system call loads a binary file into memory and starts its execution. In this
manner, the two processes are able to communicate and then go their separate ways.
The parent can then create more children; or, if it has nothing else to do while the child
runs, it can issue a wait () system call to move itself off the ready queue until the termination of
the child.
Both the parent and the child continue execution at the instruction after the fork(). The only difference is that the value of pid (the process identifier) returned by fork() is zero for the child process, while for the parent it is an integer value greater than zero (the pid of the child).
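The classic UNIX example below puts these calls together; it is a minimal sketch assuming a POSIX system, with /bin/ls chosen arbitrarily as the new program:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();          /* create a child process */

    if (pid < 0) {               /* error occurred */
        fprintf(stderr, "Fork failed\n");
        return 1;
    }
    else if (pid == 0) {         /* child process: fork() returned 0 */
        execlp("/bin/ls", "ls", (char *)NULL);  /* load a new program */
    }
    else {                       /* parent process: fork() returned the child's pid */
        wait(NULL);              /* wait for the child to terminate */
        printf("Child complete\n");
    }
    return 0;
}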
Process Termination
A process terminates when it finishes executing its final statement and asks the operating
system to delete it by using the exit () system call.
 At that point, the process may return a status value (typically an integer) to its parent
process (via the wait () system call).
 All the resources of the process—including physical and virtual memory, open files, and
I/O buffers—are deallocated by the operating system.

A parent may terminate the execution of one of its children for a variety of reasons, such as
these:
 The child has exceeded its usage of some of the resources that it has been allocated.
 The task assigned to the child is no longer required.
 The parent is exiting, and the operating system does not allow a child to continue if its
parent terminates.

If a process terminates (either normally or abnormally), then all its children must also be
terminated. This phenomenon, referred to as cascading termination, is normally initiated by the
operating system.
When a process terminates, its resources are deallocated by the operating system.
However, its entry in the process table must remain there until the parent calls wait (), because
the process table contains the process’s exit status.
A process that has terminated, but whose parent has not yet called wait(), is known as a zombie process. All processes transition to this state when they terminate, but generally they exist as zombies only briefly. Once the parent calls wait(), the process identifier of the zombie process and its entry in the process table are released.
If a parent terminates without invoking wait(), its child processes are left as orphans.

THREADS- OVERVIEW
A thread is a basic unit of CPU utilization:
 It comprises a thread ID, a program counter, a register set, and a stack.
 It shares with other threads belonging to the same process its code section, data
section, and other operating-system resources, such as open files and signals.
Motivation
 Most software applications that run on modern computers are multithreaded.
 An application is typically implemented as a separate process with several threads of control.
 If a process has multiple threads of control, it can perform more than one task at a
time. Figure 2.10 illustrates the difference between a traditional single-threaded
process and a multithreaded process.

Figure Single-threaded and multithreaded processes.

For example, a web browser might have one thread display images or text while another thread retrieves data from the network. Similarly,
 A word processor may have a thread for displaying graphics,
 Another thread for responding to keystrokes from the user, and
 A third thread for performing spelling and grammar checking in the background.

Consider a busy web server. When the server receives a request, it creates a separate process to service that request. This process-creation method was in common use before threads became popular.
Process creation is
 Time consuming and
 Resource intensive.

It is generally more efficient to use one process that contains multiple threads.
If the web-server process is multithreaded, the server will create a separate thread that listens for client requests. When a request is made, rather than creating another process, the server creates a new thread to service the request and resumes listening for additional requests.

Threads also play a vital role in remote procedure call (RPC) systems.
RPC servers are multithreaded. When a server receives a message, it services the
message using a separate thread. This allows the server to service several concurrent requests.

Figure Multithreaded server architecture.
Finally, most operating-system kernels are multithreaded.
Several threads operate in the kernel, and each thread performs a specific task, such as
managing devices, managing memory, or interrupt handling.
Benefits
1. Responsiveness – Multithreading an interactive application may allow continued
execution if part of process is blocked, thereby increasing responsiveness to the user.
Especially important for user interfaces.
2. Resource Sharing – threads share resources of process, easier than shared memory
or message passing
3. Economy – cheaper than process creation, thread switching lower overhead than
context switching
4. Scalability – process can take advantage of multiprocessor architectures, where
threads may be running in parallel on different processing cores. A single-threaded
process can run on only one processor, regardless how many are available.

MULTITHREADING MODELS
Threads may be supported at the user level or at the kernel level
 User threads - management done by user-level threads library
 Three primary thread libraries:
a. POSIX Pthreads
b. Windows threads
c. Java threads
 Kernel threads - Supported by the Kernel
Examples – virtually all general purpose operating systems, including:
a. Windows
b. Solaris
c. Linux
d. Tru64 UNIX
e. Mac OS X
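As a small illustration of the first of these libraries, the following Pthreads program (compile with gcc -pthread; the names runner and sum are this example's own) creates one thread to sum the integers 1..n and waits for it:

#include <pthread.h>
#include <stdio.h>

static int sum;                      /* shared by the thread and main() */

static void *runner(void *param)     /* the thread's start routine */
{
    int n = *(int *)param;
    for (int i = 1; i <= n; i++)
        sum += i;
    pthread_exit(0);
}

int main(void)
{
    pthread_t tid;
    int n = 10;

    pthread_create(&tid, NULL, runner, &n);  /* create a new thread */
    pthread_join(tid, NULL);                 /* wait for the thread to exit */
    printf("sum = %d\n", sum);               /* prints sum = 55 */
    return 0;
}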
Multithreading Models
i. Many-to-One
ii. One-to-One
iii. Many-to-Many

i. Many-to-One
 Many user-level threads mapped to single kernel thread
 One thread blocking causes all to block
 Multiple threads may not run in parallel on a multicore system because only one may be in the kernel at a time
 Few systems currently use this model.

Figure Many-to-one model.


 Examples:
a. Solaris Green Threads
b. GNU Portable Threads

ii. One-to-One
 Each user-level thread maps to kernel thread
 Creating a user-level thread creates a kernel thread
 More concurrency than many-to-one
 Number of threads per process sometimes restricted due to overhead
 Examples:
a. Windows
b. Linux
c. Solaris 9 and later

Figure one-to-one model.


iii. Many-to-Many Model
 Allows many user level threads to be mapped to many kernel threads
 Allows the operating system to create a sufficient number of kernel threads
 Solaris prior to version 9
 Windows with the Thread Fiber package.

Advantages (the many-to-many model suffers from neither of the earlier models' shortcomings):
 Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor
 When a thread performs a blocking system call, the kernel can schedule another thread for execution

Figure Many-to-many model.


Two-level Model
 Similar to many-to-many, except that it allows a user thread to be bound to a kernel thread
 Examples
a. IRIX
b. HP-UX
c. Tru64 UNIX
d. Solaris 8 and earlier

Figure Two-level model

CPU SCHEDULING
1. Basic Concepts
 Maximum CPU utilization obtained with multiprogramming
 CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU
execution and I/O wait
 CPU burst followed by I/O burst
 CPU burst distribution is of main concern

Figure Alternating sequence of CPU and I/O bursts.

CPU Scheduler
 Short-term scheduler selects from among the processes in ready queue, and allocates
the CPU to one of them
 Queue may be ordered in various ways
 CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready state
4. Terminates
 Scheduling under 1 and 4 is nonpreemptive
 All other scheduling is preemptive
 Consider access to shared data
 Consider preemption while in kernel mode
 Consider interrupts occurring during crucial OS activities

Dispatcher
 Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
 switching context
 switching to user mode
 jumping to the proper location in the user program to restart that program
 Dispatch latency – time it takes for the dispatcher to stop one process and start another
running

SCHEDULING CRITERIA
 CPU utilization – keep the CPU as busy as possible
 Throughput – # of processes that complete their execution per time unit
 Turnaround time – amount of time to execute a particular process
 Waiting time – amount of time a process has been waiting in the ready queue
 Response time – amount of time it takes from when a request was submitted until the
first response is produced, not output (for time-sharing environment)

Scheduling Algorithm Optimization Criteria


1. Max CPU utilization
2. Max throughput
3. Min turnaround time
4. Min waiting time
5. Min response time

SCHEDULING ALGORITHMS
1. First Come, First-Served Scheduling (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin (RR)
5. Multilevel Queue
6. Multilevel Feedback Queue

1. First-Come, First-Served (FCFS) Scheduling


In this algorithm, the process that requests the CPU first is allocated the CPU first. This
implementation of the FCFS policy is easily managed with a FIFO queue.

Process Burst Time


P1 24
P2 3
P3 3
Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

P1 P2 P3
0 24 27 30

Waiting time for P1 = 0; P2 = 24; P3 = 27


Average waiting time: (0 + 24 + 27)/3 = 17

 Suppose that the processes arrive in the order:


P2 , P3 , P1
The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30

 Waiting time for P1 = 6; P2 = 0; P3 = 3


 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than previous case
 Convoy effect - short process behind long process
 Consider one CPU-bound and many I/O-bound processes
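The FCFS arithmetic above is mechanical, so it is easy to check with a short program. The following illustrative C sketch assumes, as in the example, that all processes arrive at time 0 in the order given:

#include <stdio.h>

int main(void)
{
    int burst[] = { 24, 3, 3 };   /* burst times of P1, P2, P3 */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total = 0;

    for (int i = 0; i < n; i++) {
        printf("Waiting time for P%d = %d\n", i + 1, wait);
        total += wait;
        wait += burst[i];   /* the next process waits for every earlier burst */
    }
    printf("Average waiting time = %.2f\n", (double)total / n);
    return 0;
}

Run as-is it reproduces the first schedule (waits 0, 24, 27; average 17); reordering the array to { 3, 3, 24 } reproduces the second (average 3).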

2. Shortest-Job-First (SJF) Scheduling


This algorithm associates with each process the length of the process’s next CPU burst.
When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If
the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.

Associate with each process the length of its next CPU burst
 Use these lengths to schedule the process with the shortest time
SJF is optimal – gives minimum average waiting time for a given set of processes
 The difficulty is knowing the length of the next CPU request
 Could ask the user

(i) Example of SJF (Non preemptive)

Process Burst Time


P1 6
P2 8
P3 7
P4 3

SJF scheduling chart

P4 P1 P3 P2
0 3 9 16 24

Average waiting time = (3 + 16 + 9 + 0) / 4 = 7

Determining Length of Next CPU Burst


 Can only estimate the length – should be similar to the previous ones
 Then pick the process with the shortest predicted next CPU burst
 Can be done by using the length of previous CPU bursts, using exponential averaging (see the formula below)
 Preemptive version called shortest-remaining-time-first (SRF)
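The standard exponential-averaging formula for the predicted burst length is:

τn+1 = α tn + (1 – α) τn, where 0 ≤ α ≤ 1

Here tn is the measured length of the nth CPU burst, τn is the past prediction, and τn+1 is the predicted value of the next CPU burst. With α = 0, recent history has no effect; with α = 1, only the most recent burst matters; α = 1/2 (a common choice) weights recent and past history equally.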

(ii) Example of Shortest-remaining-time-first (SRF) or preemptive SJF


Now we add the concepts of varying arrival times and preemption to the analysis

Process Arrival Time Burst Time


P1 0 8
P2 1 4
P3 2 9
P4 3 5

Preemptive SJF Gantt Chart

P1 P2 P4 P1 P3
0 1 5 10 17 26

Average waiting time = [(10 – 1) + (1 – 1) + (17 – 2) + (5 – 3)]/4 = 26/4 = 6.5 msec

EXAMPLES
i. Non-preemptive SJF
Process Burst Time
P1 7
P2 3
P3 4

The Gantt chart for SJF is:

P2 P3 P1
0 3 7 14

Average waiting time = (0 + 3 + 7)/3 = 3.33

Example 2:
Process Arrival Time Burst Time
P1 0.0 6
P2 0.0 4
P3 0.0 1
P4 0.0 5

i) SJF (non-preemptive, simultaneous arrival)

P3 P2 P4 P1
0 1 5 10 16

Average waiting time = (0 + 1 + 5 + 10)/4 = 4


Average turn-around time = (1 + 5 + 10 + 16)/4 = 8

ii) Non-Preemptive SJF (varied arrival times)

Process Arrival Time Burst Time

P1 0.0 7

P2 2.0 4

P3 4.0 1

P4 5.0 4

The Gantt chart is:

P1 P3 P2 P4
0 7 8 12 16
Average waiting time = ((0 – 0) + (8 – 2) + (7 – 4) + (12 – 5))/4


= (0 + 6 + 3 + 7)/4 = 4
Average turn-around time = ((7 – 0) + (12 – 2) + (8 - 4) + (16 – 5))/4
= (7 + 10 + 4 + 11) / 4 = 8

iii) Pre-emptive SJF (Shortest-remaining-time-first)

Process Arrival Time Burst Time


P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4

The Gantt chart (pre-emptive, varied arrival times) is:

P1 P2 P3 P2 P4 P1
0 2 4 5 7 11 16
Average waiting time = ([(0 – 0) + (11 – 2)] + [(2 – 2) + (5 – 4)] + (4 – 4) + (7 – 5))/4

= (9 + 1 + 0 + 2)/4
= 3
Average turn-around time = ((16 – 0) + (7 – 2) + (5 – 4) + (11 – 5))/4 = 28/4 = 7

3. Priority Scheduling
 A priority number (integer) is associated with each process
 The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
 Preemptive.
 Non preemptive.
 SJF is priority scheduling where priority is the inverse of predicted next CPU burst
time.

 Problem: Starvation – low-priority processes may never execute
 Solution: Aging – as time progresses, increase the priority of the process

Example 1
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Priority scheduling Gantt Chart

P2 P5 P1 P3 P4
0 1 6 16 18 19

Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 msec


4. Round Robin (RR)


 Each process gets a small unit of CPU time (time quantum q), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and added to the end
of the ready queue.

 If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n – 1)q time units (for example, with n = 5 and q = 20 ms, no process waits more than 80 ms).
 Timer interrupts every quantum to schedule next process
 Performance
 q large  FIFO
 q small q must be large with respect to context switch, otherwise overhead is too
high

Example 1: RR with Time Quantum = 4

Process Burst Time


P1 24
P2 3
P3 3

The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

 Typically, higher average turnaround than SJF, but better response


 q should be large compared to context switch time
 q usually 10ms to 100ms, context switch < 10 usec

Example 2: RR with Time Quantum = 20

Process Burst Time


P1 53
P2 17
P3 68
P4 24

The Gantt chart is:

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3
0 20 37 57 77 97 117 121 134 154 162

Typically, higher average turnaround than SJF, but better response time
Average waiting time = ([(0 – 0) + (77 – 20) + (121 – 97)] + (20 – 0) + [(37 – 0) + (97 – 57) + (134 – 117)] + [(57 – 0) + (117 – 77)])/4
= ((0 + 57 + 24) + 20 + (37 + 40 + 17) + (57 + 40))/4
= (81 + 20 + 94 + 97)/4
= 292/4 = 73
Average turn-around time = (134 + 37 + 162 + 121)/4 = 113.5

Time Quantum and Context Switch Time

Figure How a smaller time quantum increases context switches

5. Multilevel Queue Scheduling


 Ready queue is partitioned into separate queues, eg:
 foreground (interactive)
 background (batch)
 Process permanently in a given queue
 Each queue has its own scheduling algorithm:
 foreground – RR
 background – FCFS
 Scheduling must be done between the queues:
 Fixed priority scheduling; (i.e., serve all from foreground then from background).
Possibility of starvation.
 Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR, 20% to background in FCFS.

Figure Multilevel queue scheduling.

6. Multilevel Feedback Queue


 A process can move between the various queues; aging can be implemented this way
 Multilevel-feedback-queue scheduler defined by the following parameters:
 number of queues
 scheduling algorithms for each queue
 method used to determine when to upgrade a process
 method used to determine when to demote a process
 method used to determine which queue a process will enter when that process needs
service
 Three queues:
 Q0 – RR with time quantum 8 milliseconds
 Q1 – RR time quantum 16 milliseconds
 Q2 – FCFS
 Scheduling
 A new job enters queue Q0, which is served FCFS
 When it gains the CPU, the job receives 8 milliseconds
 If it does not finish in 8 milliseconds, the job is moved to queue Q1
 At Q1 the job is again served FCFS and receives 16 additional milliseconds
 If it still does not complete, it is preempted and moved to queue Q2

Figure Multilevel feedback queues


PROCESS SYNCHRONIZATION
Background
Concurrent access to shared data may result in data inconsistency.
There are various mechanisms to ensure the orderly execution of cooperating processes
that share a logical address space, so that data consistency is maintained.
 Processes can execute concurrently
 May be interrupted at any time, partially completing execution
 Concurrent access to shared data may result in data inconsistency
 Maintaining data consistency requires mechanisms to ensure the orderly execution of
cooperating processes
 A cooperating process is one that can affect or be affected by other processes executing
in the system.
 Ex: Producer Consumer Problem
 Cooperating processes can either directly share a logical address space (that is, both code
and data) or be allowed to share data only through files or messages
Race Condition
A situation in which several processes access and manipulate the same data concurrently
and the outcome of the execution depends on the particular order in which the access takes place
is called race condition.

CRITICAL SECTION PROBLEM


 Consider a system of n processes {P0, P1, …, Pn–1}
 Each process has critical section segment of code
a. Process may be changing common variables, updating table, writing file, etc
b. When one process in critical section, no other may be in its critical section
 The critical-section problem is to design a protocol to solve this
 Each process must ask permission to enter its critical section in an entry section, may follow the critical section with an exit section, then the remainder section

Critical Section
General structure of process Pi:

do {
entry section
critical section
exit section
remainder section
} while (true);
Requirements for the solution of Critical-Section Problem
1. Mutual Exclusion – If process Pi is executing in its critical section, then no other processes can be executing in their critical sections
2. Progress- If no process is executing in its critical section and there exist some processes
that wish to enter their critical section, then the selection of the processes that will enter
the critical section next cannot be postponed indefinitely
3. Bounded Waiting- A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted
 Assume that each process executes at a nonzero speed
 No assumption concerning the relative speed of the n processes.
Critical-Section Handling in OS
Two approaches, depending on whether the kernel is preemptive or non-preemptive:
1. Preemptive – allows preemption of a process when running in kernel mode
2. Non-preemptive – a process runs until it exits kernel mode, blocks, or voluntarily yields the CPU; essentially free of race conditions in kernel mode.

Two process Solutions:


Peterson’s Solution:
 Good algorithmic description of solving the problem
 Two process solution
 Assume that the load and store machine-language instructions are atomic; that is,
cannot be interrupted
 The two processes share two variables:
a. int turn;
b. boolean flag[2];
 The variable turn indicates whose turn it is to enter the critical section
 The flag array is used to indicate if a process is ready to enter the critical section: flag[i] = true implies that process Pi is ready

Algorithm for Process Pi


do {
flag[i] = true;
turn = j;
while (flag[j] && turn == j)
; /* busy wait */
critical section
flag[i] = false;
remainder section
} while (true);

It is provable that the three CS requirements are met:

1. Mutual exclusion is preserved:
Pi enters its CS only if either flag[j] == false or turn == i
2. The progress requirement is satisfied
3. The bounded-waiting requirement is met
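Peterson's solution assumes that loads and stores are atomic, which plain C variables on modern hardware do not guarantee. The sketch below is therefore an illustration, not the textbook's code: it uses C11 sequentially consistent atomics to make that assumption hold, and two Pthreads stand in for the two processes (compile with gcc -pthread):

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_bool flag[2];   /* flag[i]: Pi wants to enter its CS */
static atomic_int turn;       /* whose turn it is */
static int counter;           /* shared data protected by the CS */

static void *worker(void *arg)
{
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);    /* entry section */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                            /* busy wait */
        counter++;                       /* critical section */
        atomic_store(&flag[i], false);   /* exit section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}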

Synchronization Hardware
 Many systems provide hardware support for implementing the critical section code.
 All solutions below based on idea of locking
 Protecting critical regions via locks
 Uniprocessors – could disable interrupts
 Currently running code would execute without preemption
 Generally too inefficient on multiprocessor systems
 Operating systems using this not broadly scalable
 Modern machines provide special atomic hardware instructions
 Atomic = non-interruptible
 Either test memory word and set value
 Or swap contents of two memory words
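The first kind of instruction is usually called test_and_set. Its behavior can be described by the C-style sketch below; writing it as ordinary C is only for clarity, since the hardware executes the whole function as one uninterruptible instruction:

#include <stdbool.h>

bool test_and_set(bool *target)
{
    bool rv = *target;   /* remember the old value */
    *target = true;      /* set the word to true */
    return rv;           /* report what the old value was */
}

/* Mutual exclusion built on it, with a shared lock initialized to false: */
bool lock = false;

void enter_critical_section(void)
{
    while (test_and_set(&lock))
        ;   /* busy wait: loop until the old value was false */
}

void leave_critical_section(void)
{
    lock = false;   /* release the lock */
}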

Solution to Critical-section Problem Using Locks


do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);

Using the hardware solutions for the critical-section problem, the following requirements have been met:
1. Mutual exclusion is preserved
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met

MUTEX LOCKS
The hardware-based solutions to the critical-section problem are complicated and generally inaccessible to application programmers, so operating systems provide higher-level software tools. The simplest of these tools is the mutex lock. (In fact, the term mutex is short for mutual exclusion.)

The mutex locks are used to protect critical regions and thus prevent race conditions.

There are two functions:


 The acquire() function acquires the lock; a process must acquire the lock before entering a critical section
 The release() function releases the lock when the process exits the critical section.
A mutex lock has a boolean variable available whose value indicates if the lock is available or
not.
 If the lock is available, a call to acquire() succeeds, and the lock is then considered
unavailable. A process that attempts to acquire an unavailable lock is blocked until the
lock is released.
 But this solution requires busy waiting; a lock implemented this way is therefore called a spinlock.
The definition of acquire() is as follows:

acquire() {
while (!available)
; /* busy wait */
available = false;
}

The definition of release() is as follows:

release() {
available = true;
}

Solution to the critical-section problem using mutex locks

do
{
acquire lock
critical section
release lock
remainder section
}
while (true);

Calls to either acquire() or release() must be performed atomically. Thus, mutex locks are often
implemented using one of the hardware mechanisms.
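In application code one rarely writes acquire() and release() by hand; the Pthreads library's mutex provides the same pair of operations. A minimal sketch (compile with gcc -pthread):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_count;   /* the data the critical section protects */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* acquire: blocks if unavailable */
        shared_count++;                /* critical section */
        pthread_mutex_unlock(&lock);   /* release */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_count = %d (expected 200000)\n", shared_count);
    return 0;
}

Unlike a pure spinlock, a Pthreads mutex normally puts a waiting thread to sleep rather than busy-waiting.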

SEMAPHORE
A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations:
 wait() – originally termed P (from the Dutch proberen, "to test");
 signal() – originally called V (from verhogen, "to increment").

The definition of wait () is as follows:


wait(S) {
while (S <= 0)
; // busy wait
S--;
}

The definition of signal () is as follows:


signal(S){
S++;
}

All modifications to the integer value of the semaphore in the wait() and signal() operations must be executed indivisibly; i.e., when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value.

Semaphore Usage
Operating systems distinguish between
1. Counting semaphore
2. Binary Semaphore

Counting semaphores
1. The value of a counting semaphore can range over an unrestricted domain; it can control access to a resource consisting of a finite number of instances.
2. The semaphore is initialized to the number of resources available.
3. When a process uses a resource, it performs a wait() operation; when a process releases a resource, it performs a signal() operation.

Binary semaphores
1. The value of a binary semaphore can range only between 0 and 1.
2. Thus, binary semaphores behave similarly to mutex locks.

Semaphores can also be used to solve various synchronization problems.


For example, consider two concurrently running processes:
P1 with a statement S1 and P2 with a statement S2. Suppose that S2 is required to execute only after S1 has completed. We can achieve this by letting P1 and P2 share a common semaphore synch, initialized to 0.

In process P1, the statements are inserted as


S1;
signal(synch);
In process P2, the statements are inserted as
wait(synch);
S2;

Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked signal(synch),
which is after statement S1 has been executed.
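The same ordering can be demonstrated with POSIX unnamed semaphores (sem_init/sem_wait/sem_post, available on Linux), using two threads in place of the processes P1 and P2. A minimal sketch:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t synch;   /* initialized to 0 below */

static void *p1(void *arg)
{
    (void)arg;
    printf("S1\n");      /* statement S1 */
    sem_post(&synch);    /* signal(synch) */
    return NULL;
}

static void *p2(void *arg)
{
    (void)arg;
    sem_wait(&synch);    /* wait(synch): blocks until P1 posts */
    printf("S2\n");      /* statement S2 executes only after S1 */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);   /* shared between threads, initial value 0 */
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}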
Semaphore Implementation:
 It must guarantee that no two processes can execute the wait () and signal () on the same
semaphore at the same time.
 The implementation becomes the critical section problem where the wait and signal code
are placed in the critical section.
 While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code; i.e., this implementation has busy waiting.
 Such a lock is also called a spinlock because the process "spins" while waiting for the lock to become available. But the implementation code is short.
 There will be little busy waiting if the critical section is rarely occupied; however, this is not a good solution for applications that may spend a lot of time in critical sections.
Semaphore Implementation with no busy waiting:
With each semaphore there is an associated waiting queue. Each semaphore has two data items:
 value (of type integer)
 pointer to the list of waiting processes
Two operations:
 block – place the process invoking the operation on the appropriate waiting queue
 wakeup – remove one of the processes in the waiting queue and place it in the ready queue
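Putting these pieces together gives the textbook-style definition of a semaphore without busy waiting. In this sketch, block() and wakeup() stand for the kernel's suspend and resume primitives, and the queue manipulation is indicated only by comments:

typedef struct {
    int value;
    struct process *list;   /* queue of processes waiting on this semaphore */
} semaphore;

wait(semaphore *S)
{
    S->value--;
    if (S->value < 0) {
        /* add this process to S->list */
        block();   /* suspend the invoking process */
    }
}

signal(semaphore *S)
{
    S->value++;
    if (S->value <= 0) {
        /* remove a process P from S->list */
        wakeup(P);   /* move P to the ready queue */
    }
}

Note that value may become negative; its magnitude is then the number of waiting processes.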

Deadlocks
Deadlocks occur where two or more processes are waiting indefinitely for an event that
can be caused by one of the waiting processes. When such a state is reached, these processes are
said to be deadlocked.

The event is the execution of a signal () operation.


Example
Consider a system consisting of two processes, P0 and P1, each accessing two semaphores, S and
Q, set to the value 1:
P0 P1
wait(S); wait(Q);
wait(Q); wait(S);
.. ..
.. ..
.. ..
signal(S); signal(Q);
signal(Q); signal(S);

Suppose that P0 executes wait(S) and then P1 executes wait (Q). When P0 executes wait
(Q), it must wait until P1 executes signal (Q).
Similarly, when P1 executes wait (S), it must wait until P0 executes signal (S).
Since these signal() operations cannot be executed, P0 and P1 are deadlocked.

Indefinite blocking or starvation:


Indefinite blocking, or starvation, is a situation in which processes wait indefinitely within the semaphore. It may occur if processes are removed from the list associated with a semaphore in LIFO (last-in, first-out) order.

Classical Problems of Synchronization:


Classical problems used to test newly-proposed synchronization schemes
1. Bounded-Buffer Problem
2. Readers and Writers Problem
3. Dining-Philosophers Problem

DEADLOCKS
In a multiprogramming environment, several processes may compete for a finite number
of resources. A process requests resources; if the resources are not available at that time, the
process enters a waiting state. Sometimes, a waiting process is never again able to change state,
because the resources it has requested are held by other waiting processes. This situation is called
a deadlock.

System Model
 System consists of resources
 Resource types R1, R2, . . ., Rm
 CPU cycles, memory space, I/O devices
 Each resource type Ri has Wi instances.
 Each process utilizes a resource as follows:
 request
 use
 release
Deadlock Characterization:
Deadlock can arise if four conditions hold simultaneously.
1. Mutual exclusion: only one process at a time can use a resource
2. Hold and wait: a process holding at least one resource is waiting to acquire additional
resources held by other processes
3. No preemption: a resource can be released only voluntarily by the process holding it,
after that process has completed its task
4. Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
Deadlocks can occur via system calls, locking, etc.
Resource-Allocation Graph
A set of vertices V and a set of edges E.
 V is partitioned into two types:
 P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
 R = {R1, R2, …, Rm}, the set consisting of all resource types in the system
 request edge – directed edge Pi → Rj
 assignment edge – directed edge Rj → Pi
In the graph, a process Pi is drawn as a circle and a resource type Rj as a rectangle containing a dot for each instance; an edge Pi → Rj means Pi has requested an instance of Rj, and Rj → Pi means an instance of Rj is held by Pi.

Example of a Resource Allocation Graph

Figure Resource-allocation graphs.

Resource Allocation Graph With A Deadlock

Figure Resource-allocation graph with a deadlock.

Figure Resource-allocation graph with a cycle but no deadlock

Basic Facts
 If the graph contains no cycles ⇒ no deadlock
 If the graph contains a cycle ⇒
 if only one instance per resource type, then deadlock
 if several instances per resource type, possibility of deadlock

Methods for handling Deadlocks


 Ensure that the system will never enter a deadlock state:
1. Deadlock prevention
2. Deadlock avoidance
 Allow the system to enter a deadlock state and then recover:
3. Deadlock detection
4. Deadlock recovery
 Ignore the problem and pretend that deadlocks never occur in the system; this approach is used by most operating systems, including UNIX.

DEADLOCK PREVENTION
Restrain the ways request can be made
 Mutual Exclusion – not required for sharable resources (e.g., read-only files); must
hold for non-sharable resources
 Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources
 Require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when it has none allocated to it
 Low resource utilization; starvation possible
 No Preemption
 If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, then all resources currently being held are released
 Preempted resources are added to the list of resources for which the process is
waiting
 Process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting
 Circular Wait - impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration
DEADLOCK AVOIDANCE
Requires that the system has some additional a priori information available
 Simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need
 The deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that there can never be a circular-wait condition
 Resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes

Safe State
When a process requests an available resource, system must decide if immediate
allocation leaves the system in a safe state
 A system is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes in the system such that for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i
 That is:
 If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished

 When Pj is finished, Pi can obtain needed resources, execute, return allocated resources,
and terminate

 When Pi terminates, Pi +1 can obtain its needed resources, and so on

Basic Facts
 If a system is in a safe state ⇒ no deadlocks
 If a system is in an unsafe state ⇒ possibility of deadlock
 Avoidance ⇒ ensure that the system will never enter an unsafe state.
Safe, Unsafe, Deadlock State

Figure Safe, unsafe, and deadlocked state spaces.

Avoidance Algorithms
 Single instance of a resource type
 Use a resource-allocation graph
 Multiple instances of a resource type
 Use the banker’s algorithm
 Safety Algorithm
 Resource-Request Algorithm

Resource-Allocation Graph Scheme


 Claim edge PiRj indicated that process Pj may request resource Rj; represented by
a dashed line
 Claim edge converts to request edge when a process requests a resource
 Request edge converted to an assignment edge when the resource is allocated to the
process
 When a resource is released by a process, assignment edge reconverts to a claim
edge
 Resources must be claimed a priori in the system

Unsafe State in Resource-Allocation Graph

Figure An unsafe state in a resource-allocation graph

Resource-Allocation Graph Algorithm


 Suppose that process Pi requests a resource Rj
 The request can be granted only if converting the request edge to an assignment edge
does not result in the formation of a cycle in the resource allocation graph

Banker’s Algorithm
 Multiple instances
 Each process must a priori claim maximum use
 When a process requests a resource it may have to wait
 When a process gets all its resources it must return them in a finite amount of time

Data Structures for the Banker’s Algorithm


Let n = number of processes, and m = number of resource types
1. Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available
2. Max: n x m matrix. If Max[i][j] = k, then process Pi may request at most k instances of resource type Rj
3. Allocation: n x m matrix. If Allocation[i][j] = k, then Pi is currently allocated k instances of Rj
4. Need: n x m matrix. If Need[i][j] = k, then Pi may need k more instances of Rj to complete its task
Need[i][j] = Max[i][j] – Allocation[i][j]

Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively.
Initialize:
Work = Available
Finish[i] = false for i = 0, 1, …, n – 1
2. Find an i such that both:
(a) Finish[i] == false
(b) Needi ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi; Finish[i] = true; go to step 2
4. If Finish[i] == true for all i, then the system is in a safe state
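The safety algorithm translates almost line for line into C. The illustrative sketch below fixes n = 5 and m = 3 to match the example that follows (is_safe is a name invented here):

#include <stdbool.h>
#include <string.h>

#define N 5   /* number of processes */
#define M 3   /* number of resource types */

bool is_safe(const int available[M], const int allocation[N][M],
             const int need[N][M])
{
    int work[M];
    bool finish[N] = { false };
    bool progress = true;

    memcpy(work, available, sizeof work);   /* step 1: Work = Available */

    while (progress) {                      /* step 2: look for a runnable Pi */
        progress = false;
        for (int i = 0; i < N; i++) {
            if (finish[i])
                continue;
            bool fits = true;               /* Needi <= Work ? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {                     /* step 3: Pi runs, then releases its resources */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                progress = true;
            }
        }
    }
    for (int i = 0; i < N; i++)             /* step 4 */
        if (!finish[i])
            return false;
    return true;
}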

Resource-Request Algorithm for Process Pi


Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not available
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
 If safe ⇒ the resources are allocated to Pi
 If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored
Example of Banker’s Algorithm
 5 processes P0 through P4;
 3 resource types:
A (10 instances), B (5 instances), and C (7 instances)
 Snapshot at time T0:
Allocation Max Available
ABC ABC ABC
P0 010 753 332
P1 200 322
P2 302 902
P3 211 222
P4 002 433
 The content of the matrix Need is defined to be Max – Allocation
Need
ABC
P0 743
P1 122
P2 600
P3 011
P4 431

 The system is in a safe state since the sequence <P1, P3, P4, P2, P0> satisfies safety criteria

Example: P1 Request (1,0,2)


 Check that Requesti ≤ Available (that is, (1,0,2) ≤ (3,3,2)) ⇒ true
Allocation Need Available
ABC ABC ABC
P0 010 743 230
P1 302 020
P2 302 600
P3 211 011
P4 002 431
 Executing safety algorithm shows that sequence <P1, P3, P4, P0, P2> satisfies safety
requirement
 Can request for (3,3,0) by P4 be granted?
 Can request for (0,2,0) by P0 be granted?
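For checking: the request (3,3,0) by P4 fails step 2 of the resource-request algorithm, since (3,3,0) is not ≤ Available = (2,3,0); P4 must wait. The request (0,2,0) by P0 passes that test, but pretending to grant it leaves Available = (2,1,0), from which no process's Need can be satisfied, so the resulting state is unsafe and P0 must also wait.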

DEADLOCK DETECTION
 Allow system to enter deadlock state
 Detection algorithm
 Recovery scheme
Single Instance of Each Resource Type
 Maintain a wait-for graph
 Nodes are processes
 Pi → Pj if Pi is waiting for Pj
 Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle, there exists a deadlock
 An algorithm to detect a cycle in a graph requires on the order of n^2 operations, where n is the number of vertices in the graph

Resource-Allocation Graph and Wait-for Graph

Figure (a) Resource-allocation graph. (b) Corresponding wait-for graph


Several Instances of a Resource Type
 Available: a vector of length m indicates the number of available resources of each type
 Allocation: an n x m matrix defines the number of resources of each type currently allocated to each process
 Request: an n x m matrix indicates the current request of each process. If Request[i][j] = k, then process Pi is requesting k more instances of resource type Rj.

Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
a) Work = Available
b) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true
2. Find an index i such that both:
a) Finish[i] == false
b) Requesti ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi; Finish[i] = true; go to step 2
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state. Moreover, if Finish[i] == false, then Pi is deadlocked

Example of Detection Algorithm


 Five processes P0 through P4; three resource types A (7 instances), B (2 instances), and C (6 instances)
 Snapshot at time T0:
Allocation Request Available
ABC ABC ABC
P0 010 000 000
P1 200 202
P2 303 000
P3 211 100
P4 002 002
 Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i
 P2 requests an additional instance of type C
Request
ABC
P0 000
P1 202
P2 001
P3 100
P4 002
 State of system?

 Can reclaim the resources held by process P0, but there are insufficient resources to fulfill the other processes' requests
 A deadlock exists, consisting of processes P1, P2, P3, and P4

Detection-Algorithm Usage
 When, and how often, to invoke depends on:
 How often a deadlock is likely to occur?
 How many processes will need to be rolled back?
 one for each disjoint cycle
 If detection algorithm is invoked arbitrarily, there may be many cycles in the resource
graph and so we would not be able to tell which of the many deadlocked processes
“caused” the deadlock.

RECOVERY

i. Recovery using Process Termination


 Abort all deadlocked processes
 Abort one process at a time until the deadlock cycle is eliminated
 In which order should we choose to abort?
1. Priority of the process
2. How long process has computed, and how much longer to completion
3. Resources the process has used
4. Resources process needs to complete
5. How many processes will need to be terminated
6. Is process interactive or batch?

ii. Recovery using Resource Preemption


 Selecting a victim – minimize cost
 Rollback – return to some safe state, restart process for that state
 Starvation – the same process may always be picked as victim; include the number of rollbacks in the cost factor

QUESTION BANK
PART-A
1. What is the difference between a program and a process?
 Program is passive entity stored on disk (executable file), process is active
 Program becomes process when executable file loaded into memory.
2. With the help of a state transition diagram, List out the various states of a process.
As a process executes, it changes state
1. new: The process is being created
2. running: Instructions are being executed
3. waiting: The process is waiting for some event to occur
4. ready: The process is waiting to be assigned to a processor
5. terminated: The process has finished execution

3. What do you mean by PCB?


Information associated with each process is kept in the Process Control Block (PCB), also called a task control block, which includes:
1. Process state
2. Program counter
3. CPU registers
4. CPU scheduling information
5. Memory-management information
6. Accounting information
7. I/O status information

4. Give the scheduling queues of processes.


1. Job queue – set of all processes in the system
2. Ready queue – set of all processes residing in main memory, ready and waiting to execute
3. Device queues – set of processes waiting for an I/O device
Processes migrate among the various queues.
5. What do you mean by queuing diagram representation?
A common representation of process scheduling is a queueing diagram. A queueing diagram represents queues, resources, and flows.

6. What is meant by schedulers? Give its types.


The operating system selects processes from queues for scheduling. The selection process is
carried out by the appropriate scheduler.
Two types of schedulers
1. Short Term Schedulers (STS) or CPU Scheduler
2. Long Term Schedulers (LTS) or Job Scheduler.
7. Define STS.
 Short-term scheduler (or CPU scheduler) – selects which process should be executed
next and allocates CPU
 Sometimes the only scheduler in a system
 Short-term scheduler is invoked frequently (milliseconds), so it must be fast.
8. Define LTS
 Long-term scheduler (or job scheduler) – selects which processes should be brought into
the ready queue
 Long-term scheduler is invoked infrequently (seconds or minutes), so it may be slow
 The long-term scheduler controls the degree of multiprogramming.
9. What is the work of Medium Term Scheduler (MTS)?
The medium-term scheduler can be added if the degree of multiprogramming needs to decrease:
 Remove a process from memory, store it on disk, and bring it back in from disk to continue execution: swapping.
10. Define Context switch.
Switching the CPU to another process requires performing a state save of the current
process and a state restore of a different process. This task is known as a context switch i.e.,
When CPU switches to another process, the system must save the state of the old process and
load the saved state for the new process via a context switch
11. What is the motivation for
1. Multi-programming and
2. Time sharing?
The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running.
 To Maximize CPU use, quickly switch processes onto CPU for time sharing
 The process scheduler selects among available processes for the next execution on the CPU

12. With the help of block diagrams, explain the flow of control between two processes during process switching?
(Refer to the figure "Diagram showing CPU switch from process to process" above.)

13. What happens when process context is switched? Is it an overhead?


Context of a process represented in the PCB
 Context-switch time is pure overhead; the system does no useful work while switching
 The more complex the OS and the PCB, the longer the context switch
 Context-switch time is dependent on hardware support
 Some hardware provides multiple sets of registers per CPU, so multiple contexts can be loaded at once.
14. What are the operations can be done on processes?
1. Process creation
2. Process termination
15. What do you mean by cascading termination?
Some operating systems do not allow a child to exist if its parent has terminated. If a process terminates, then all its children must also be terminated; this is called cascading termination. The termination is initiated by the operating system.
16. What is meant by independent processes?
A process is independent if it cannot affect or be affected by the other processes
executing in the system. Any process that does not share data with any other process is
independent.
17. What is meant by cooperating processes?
 A process is cooperating if it can affect or be affected by the other processes
executing in the system.
 Cooperating process can affect or be affected by other processes, including sharing
data.
18. Why cooperating processes are needed in a system?
 Information sharing – an environment must be provided to allow concurrent access to shared information (shared files, etc.)
 Computation speedup – to improve speed, tasks are divided into subtasks that execute in parallel
 Modularity – dividing the system functions into separate processes or threads
 Convenience – even an individual user may work on many tasks at the same time.
19. Define thread.
A thread is a basic unit of CPU utilization;
 It comprises a thread ID, a program counter, a register set, and a stack.
 It shares with other threads belonging to the same process its code section, data section,
and other operating-system resources, such as open files and signals.
 A traditional (or heavyweight) process has a single thread of control.
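As a sketch, a minimal POSIX threads example (the function name worker and the variable
shared are illustrative), showing two threads of one process sharing its data section:

#include <pthread.h>
#include <stdio.h>

int shared = 0;                        /* data section: visible to every thread */

void *worker(void *arg)                /* each thread has its own stack and registers */
{
    shared++;                          /* but all threads update the same global */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);   /* create two threads */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);                    /* wait for both to finish */
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* unsynchronized updates like this motivate */
    return 0;                          /* the synchronization questions below */
}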
20. What are the benefits of threads?
1. Responsiveness
2. Resource Sharing
3. Economy
4. Scalability
21. What are the two types of threads?
1. User threads
2. Kernel threads
22. List out the threading models?
1. Many-to-One
2. One-to-One
3. Many-to-Many
23. Give the advantages of the many-to-many threading model.
 Developers can create as many user threads as necessary, and the corresponding kernel
threads can run in parallel on a multiprocessor
 When a thread performs a blocking system call, the kernel can schedule another thread
for execution
24. Why we need process synchronization?
 Processes can execute concurrently
 May be interrupted at any time, partially completing execution
 Concurrent access to shared data may result in data inconsistency
 Maintaining data consistency requires process synchronization mechanisms to ensure the
orderly execution of cooperating processes.
25. Define Race condition.
A situation in which several processes access and manipulate the same data concurrently
and the outcome of the execution depends on the particular order in which the access takes place
is called race condition.
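A standard illustration, using an illustrative shared variable counter: the single statement
counter++ is typically implemented as three machine steps, so two processes executing
counter++ and counter-- concurrently can interleave badly:

/* counter++ is implemented as:        counter-- is implemented as:
     register1 = counter                 register2 = counter
     register1 = register1 + 1           register2 = register2 - 1
     counter   = register1               counter   = register2

   Starting with counter = 5, one possible interleaving:
     T0: register1 = counter            (register1 = 5)
     T1: register1 = register1 + 1      (register1 = 6)
     T2: register2 = counter            (register2 = 5)
     T3: register2 = register2 - 1      (register2 = 4)
     T4: counter   = register1          (counter = 6)
     T5: counter   = register2          (counter = 4)

   The result is 4 (or 6, if T4 and T5 are swapped) instead of the correct
   value 5; the outcome depends on the order of access.                  */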
26. What do you mean by critical section problem?
 Consider a system of n processes {P0, P1, … Pn-1}
 Each process has a critical section segment of code
 A process may be changing common variables, updating a table, writing a file, etc.
 When one process is in its critical section, no other may be in its critical section
 The critical-section problem is to design a protocol to solve this. Each process must
ask permission to enter its critical section in the entry section, may follow the critical
section with an exit section, and then the remainder section.
General structure:
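A minimal pseudocode sketch of this general structure for a typical process Pi:

do {
    /* entry section: ask permission to enter */

        /* critical section */

    /* exit section: announce leaving */

        /* remainder section */
} while (true);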
27. What are the requirements that must be satisfied by critical section
algorithms? Or list out the requirements for the solution of the CS problem.
1. Mutual Exclusion
2. Progress
3. Bounded Waiting
28. What is meant by mutual exclusion?
If process Pi is executing in its critical section, then no other processes can be executing
in their critical sections.
29. Give two hardware instructions and their definitions which can be
used for implementing mutual exclusion.
 test_and_set – atomically sets the target to true (locked) and returns its old value
 compare_and_swap – atomically compares the operand with an expected value and,
only if they are equal, swaps in a new value; the original value is always returned
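As a sketch, the usual form of test_and_set() and a lock built on it; the C body only
documents the semantics, since the hardware executes the whole instruction atomically
(the names lock and critical_work are illustrative):

int lock = 0;                         /* 0 = free, 1 = held; shared by all processes */

int test_and_set(int *target)         /* hardware executes this atomically */
{
    int rv = *target;                 /* remember the old value */
    *target = 1;                      /* unconditionally mark "locked" */
    return rv;
}

void critical_work(void)              /* pattern each process follows */
{
    while (test_and_set(&lock))
        ;                             /* spin until the old value was 0 (free) */
    /* ... critical section ... */
    lock = 0;                         /* release the lock */
    /* ... remainder section ... */
}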
30. What is meant by semaphore? List out the operations used.
 Synchronization tool that provides more sophisticated ways (than Mutex locks) for
process to synchronize their activities.
 Semaphore S – integer variable
 Can only be accessed via two indivisible (atomic) operations
wait() and signal()
 Originally called P() and V()
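As a sketch, the classic busy-waiting definitions; each operation is assumed to execute as
one indivisible unit:

void wait(int *S) {        /* originally P() */
    while (*S <= 0)
        ;                  /* busy wait until the semaphore is positive */
    (*S)--;
}

void signal(int *S) {      /* originally V() */
    (*S)++;
}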
31. Differentiate preemptive and non-preemptive approaches.
Preemptive – allows preemption of a process while it is running in kernel mode.
Non-preemptive – a process runs until it exits kernel mode, blocks, or voluntarily yields
the CPU; this approach is essentially free of race conditions in kernel mode.
32. Give the types of semaphores.
1. Counting semaphore – its integer value can range over an unrestricted domain
2. Binary semaphore – its integer value can be only 0 or 1, so it behaves like a mutex lock
33. Define busy waiting.
While a process is in its critical section, any other process that tries to enter its critical
section must loop continuously in its entry code. This kind of lock is also called a spin lock
because the process “spins” while waiting for the lock to become available.
34. How can be busy waiting avoided?
 With each semaphore there is an associated waiting queue
 Two operations:
 block – place the process invoking the operation on the appropriate waiting queue
 wakeup – remove one of processes in the waiting queue and place it in the ready
queue
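A sketch of this no-busy-waiting implementation, following the usual textbook form;
block(), wakeup(), enqueue(), and dequeue() are assumed kernel primitives, and each
operation is assumed to execute atomically:

typedef struct process process;       /* PCB type; details omitted */

typedef struct {
    int value;
    process *list;                    /* queue of processes waiting on this semaphore */
} semaphore;

void wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        enqueue(&S->list);            /* add the calling process to S->list */
        block();                      /* suspend the caller instead of spinning */
    }
}

void signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        process *P = dequeue(&S->list);   /* remove one waiting process */
        wakeup(P);                    /* move P to the ready queue */
    }
}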
35. Define starvation or indefinite blocking.
A process may never be removed from the semaphore queue in which it is suspended.
36. What is meant by priority inversion?
Priority Inversion – Scheduling problem when lower-priority process holds a lock
needed by higher-priority process
 Solved via priority-inheritance protocol
37. What is a mutex? What are the locks used and list out its drawbacks?
 A software tool to solve the critical-section problem
 Protect a critical section by first calling acquire() on a lock and then release() on it
 A Boolean variable indicates whether the lock is available or not
Drawbacks: it is a spinlock and requires busy waiting (a sketch follows).
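A minimal sketch of acquire() and release() over a shared availability flag; both calls
must themselves execute atomically:

int available = 1;        /* 1 = lock free, 0 = lock held */

void acquire(void)
{
    while (!available)
        ;                 /* busy wait: this spinning is the spinlock drawback */
    available = 0;
}

void release(void)
{
    available = 1;
}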
38. List out the Classical Problems of Synchronization.
Classical problems used to test newly-proposed synchronization schemes
1. Bounded-Buffer Problem
2. Readers and Writers Problem
3. Dining-Philosophers Problem
39. Declare the structure for monitors.
monitor monitor_name
{
// shared variable declarations
Function F1 (…) { …. }
Function Fn (…) {……}
Initialization code (…) { … }
}
Two operations are allowed on a condition variable:
 x.wait() – a process that invokes the operation is suspended until x.signal()
 x.signal() – resumes one of processes (if any) that invoked x.wait()
 If no process has invoked x.wait() on the variable, then x.signal() has no effect on it
40. Define CPU scheduling.
1. The CPU scheduler or short-term scheduler selects from among the processes in the
ready queue, and allocates the CPU to one of them
a. The queue may be ordered in various ways
2. CPU scheduling decisions may take place when a process:
a. Switches from running to waiting state
b. Switches from running to ready state
c. Switches from waiting to ready state
d. Terminates
41. What is a Dispatcher?
 Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
 switching context
 switching to user mode
 jumping to the proper location in the user program to restart that program
42. What is dispatch latency?
The time it takes for the dispatcher to stop one process and start another running.
43. What are the various scheduling criteria for CPU scheduling?
 CPU utilization – keep the CPU as busy as possible
 Throughput – # of processes that complete their execution per time unit
 Turnaround time – amount of time to execute a particular process
 Waiting time – amount of time a process has been waiting in the ready queue
 Response time – amount of time it takes from when a request was submitted until the
first response is produced, not output (for time-sharing environment)
44. List out the various scheduling algorithms.
Scheduling Algorithms
1. First Come, First-Served Scheduling (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin (RR)
5. Multilevel Queue
6. Multilevel Feedback Queue
45. What is FCFS Scheduling?
In this algorithm, the process that requests the CPU first is allocated the CPU first. This
implementation of the FCFS policy is easily managed with a FIFO queue.
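For example (burst times chosen for illustration): let processes P1, P2, P3 arrive in that
order with CPU burst times 24, 3, and 3 ms. Under FCFS, P1 waits 0, P2 waits 24, and P3
waits 27 ms, so the average waiting time is (0 + 24 + 27)/3 = 17 ms. If instead the arrival
order is P2, P3, P1, the waiting times are 0, 3, and 6 ms, for an average of only
(0 + 3 + 6)/3 = 3 ms.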
46. What is the drawback of FCFS?
The convoy effect – short processes get stuck waiting behind a long process, as in the
example above; e.g., consider one CPU-bound process and many I/O-bound processes.
47. What is aging?
The problem of starvation (low-priority processes may never execute) is solved using
aging, in which the priority of a process is gradually increased as time progresses.
48. Define deadlock.
In a multiprogramming environment, several processes may compete for a finite number
of resources. A process requests resources; if the resources are not available at that time, the
process enters a waiting state. Sometimes, a waiting process is never again able to change state,
because the resources it has requested are held by other waiting processes. This situation is called
a deadlock.
49. What are conditions under which a deadlock situation may arise?
1. Mutual exclusion: only one process at a time can use a resource
2. Hold and wait: a process holding at least one resource is waiting to acquire additional
resources held by other processes
3. No preemption: a resource can be released only voluntarily by the process holding it,
after that process has completed its task
4. Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …,
Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is
held by P0.
50. What is a resource allocation graph (RAG)?
A set of vertices V and a set of edges E.
 V is partitioned into two types:
 P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
 R = {R1, R2, …, Rm}, the set consisting of all resource types in the system
 request edge – directed edge Pi → Rj
 assignment edge – directed edge Rj → Pi
51. What are the methods for handling deadlocks?
1. Deadlock prevention
2. Deadlock avoidance
3. Deadlock detection
4. Deadlock recovery
52. Define deadlock prevention.
Deadlock prevention denies at least one of the four necessary conditions required for a
deadlock to occur.
53. Define deadlock avoidance.
Deadlock avoidance requires that the system have some additional a priori information
available:
 The simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need
 The deadlock-avoidance algorithm dynamically examines the resource-allocation state to
ensure that there can never be a circular-wait condition
 The resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes
54. What are a safe state and an unsafe state?
A system is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes
in the system such that, for each Pi, the resources that Pi can still request can be satisfied by
the currently available resources plus the resources held by all the Pj, with j < i. If no such
sequence exists, the state is unsafe; an unsafe state may (but need not) lead to a deadlock.
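For example (an illustrative allocation): suppose a system has 12 tape drives, and processes
P0, P1, P2 have maximum needs 10, 4, 9 while currently holding 5, 2, 2 drives, leaving 3
free. The sequence <P1, P0, P2> is safe: P1 can get its remaining 2 drives and finish
(freeing 5), then P0 can get its remaining 5 and finish (freeing 10), then P2 can get its
remaining 7. If P2 were instead granted one more drive, leaving only 2 free, no such
sequence would exist and the state would be unsafe.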
55. What is a claim edge?
 Claim edge Pi → Rj indicates that process Pi may request resource Rj; it is represented
by a dashed line
 Claim edge converts to request edge when a process requests a resource
 Request edge converted to an assignment edge when the resource is allocated to the
process
 When a resource is released by a process, assignment edge reconverts to a claim edge
 Resources must be claimed a priori in the system
56. What is a WFG (Wait-For Graph)?
A WFG is used to discover the presence of deadlocks in a system.
 Nodes are processes
 Pi → Pj if Pi is waiting for Pj
(a) Resource-allocation graph. (b) Corresponding wait-for graph.
57. How is a system recovered from a deadlock by resource preemption?
 Selecting a victim – minimize cost
 Rollback – return to some safe state, and restart the process from that state
 Starvation – the same process may always be picked as the victim, so include the
number of rollbacks in the cost factor
PART-B
1. Write short notes on
a) Process Scheduling
b) Queues
Ans: a. Definition - Process Scheduling
Types of Schedulers- STS, LTS, MTS
Queuing diagram
b. Types of queues with diagrams
2. Write short notes on Operations on processes.
a. Process Creation
b. Process termination
3. Explain about IPC
a. Communication models
b. Direct Vs Indirect
c. Synchronization
d. Buffering
4. Write short notes on threading issues
a. Many to one
b. One to one
c. Many to many
d. Two level
5. Explain about Windows 7 thread and SMP Management
a. Diagram
b. Process and thread object, attributes, services
c. Thread states with diagram
d. SMP Management- hard and soft affinity
6. Explain about critical section problem
a. Definition
b. General structure
c. Requirements for the solutions
d. Two process solutions
e. Synchronization Hardware- test and set, compare and swap
7. Explain about semaphores
a. Definition
b. Two operations- wait, signal
c. Implementations
d. Problems
8. Explain about Monitors
a. Definition
b. Implementation
9. Explain about deadlock avoidance
Definition
a. Single instance of a resource type
b. Use a resource-allocation graph
c. Multiple instances of a resource type
d. Use the banker’s algorithm
e. Safety Algorithm
f. Resource-Request Algorithm
10. Explain about deadlock prevention
How the necessary 4 conditions denied
11. Explain about deadlock detection
a. Allow system to enter deadlock state
b. Detection algorithm
c. Recovery scheme
12. Explain about deadlock recovery
a. Recovery from Deadlock: Process Termination
b. Recovery from Deadlock: Resource Preemption