Unit 3
Lecture 7
Process Concept:
Process: A process is a program in execution. A program by itself is a passive entity (a file on disk), whereas a process is an active entity with a program counter and a set of associated resources.
Process States: As a process executes, it changes state. Typical states are new, ready, running, waiting, and terminated.
Lecture 8
In the OS, each process is represented by its PCB (Process Control Block). The PCB
generally contains the following information:
• Process State: The state may be new, ready, running, waiting, halted, and so on.
• Process ID
• Program Counter (PC) value: The counter indicates the address of the next
instruction to be executed for this process.
• Register values: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information. Along with the program counter,
this state information must be saved when an interrupt occurs, to allow the process to be
continued correctly afterward.
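For illustration, a minimal sketch of how a PCB might be represented as a C structure; the field names and sizes here are hypothetical, not taken from any particular operating system:

```c
/* Illustrative sketch of a Process Control Block (PCB).
 * Field names and sizes are hypothetical, not from any real kernel. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int              pid;        /* process ID                         */
    enum proc_state  state;      /* new, ready, running, waiting, ...  */
    unsigned long    pc;         /* saved program counter              */
    unsigned long    regs[16];   /* saved general-purpose registers    */
    unsigned long    sp;         /* saved stack pointer                */
    int              priority;   /* scheduling priority                */
    struct pcb      *next;       /* link to next PCB in a ready queue  */
};
```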
If we have a single processor in our system, there is only one running process at a time.
Other ready processes wait for the processor.
Lecture 9
Operations on process:
A. Process Creation
A parent process creates child processes, which, in turn, create other processes,
forming a tree of processes.
Resource sharing
Execution
B. Process Termination
Process executes last statement and asks the operating system to delete it
(exit)
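A minimal sketch of process creation and termination using the POSIX fork(), wait(), and exit() calls: the parent creates a child, the child runs to its last statement and calls exit(), and the parent waits for it to terminate:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();              /* create a child process           */

    if (pid < 0) {                   /* fork failed                      */
        perror("fork");
        exit(1);
    } else if (pid == 0) {           /* child process                    */
        printf("child  pid=%d\n", getpid());
        exit(0);                     /* child asks the OS to delete it   */
    } else {                         /* parent process                   */
        wait(NULL);                  /* wait for the child to terminate  */
        printf("parent pid=%d: child finished\n", getpid());
    }
    return 0;
}
```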
The process address space consists of the linear address range presented to each
process. Each process is given a flat 32- or 64-bit address space, with the size
depending on the architecture. The term "flat" describes the fact that the address
space exists in a single range. (As an example, a 32-bit address space extends
from address 0 to 4294967295.)
Some operating systems provide a segmented address space, with addresses
existing not in a single linear range, but instead in multiple segments. Modern
virtual memory operating systems generally have a flat memory model and not
a segmented one.
A memory address is a given value within the address space, such as 4021f000.
The process can access a memory address only in a valid memory area.
Memory areas have associated permissions, such as readable, writable, and
executable, that the associated process must respect. If a process accesses a
memory address not in a valid memory area, or if it accesses a valid area in an
invalid manner, the kernel kills the process with the dreaded "Segmentation
Fault" message.
Memory areas can contain all sorts of goodies, such as
A memory map of the executable file's code, called the text section.
A memory map of the executable file's initialized global variables,
called the data section.
A memory map of the zero page (a page consisting of all zeros, used for
purposes such as this) containing uninitialized global variables, called
the bss section
A memory map of the zero page used for the process's user-space stack
(do not confuse this with the process's kernel stack, which is separate
and maintained and used by the kernel)
An additional text, data, and bss section for each shared library, such as
the C library and dynamic linker, loaded into the process's address space.
Any memory mapped files
Any shared memory segments
Any anonymous memory mappings, such as those associated with
malloc().
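As an illustrative sketch (assuming Linux/POSIX), the mmap() call below creates an anonymous memory mapping of one page, the same kind of memory area that backs large malloc() allocations, writes to it, and removes it:

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;               /* one page                          */

    /* Readable + writable anonymous mapping, private to this process.   */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(p, "hello from an anonymous mapping");
    printf("%s at %p\n", p, (void *)p);

    munmap(p, len);                  /* tear the memory area down again   */
    return 0;
}
```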
Lecture 10
Threads:
Introduction to Thread: A thread is the basic unit of CPU utilization. It comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files.
Multithreading Models (Management of Threads):
E. Threads may be provided either at the user level, for user threads, or by the kernel,
for kernel threads. User threads are supported above the kernel and are managed
without kernel support, whereas kernel threads are supported and managed directly
by the operating system. There must exist a relationship between user threads and
kernel threads. There are three common ways of establishing this relationship.
A. Many-to-One Model: Many user-level threads are mapped to one kernel thread. The entire process blocks if one thread makes a blocking system call, and because only one thread can access the kernel at a time, threads cannot run in parallel on a multiprocessor.
B. One-to-One Model: Each user thread is mapped to a separate kernel thread. It provides more concurrency than the many-to-one model, but creating a user thread requires creating the corresponding kernel thread, which limits the number of threads.
C. Many-to-Many Model: Many user-level threads are multiplexed onto a smaller or equal number of kernel threads.
The many-to-many model suffers from neither of these shortcomings: developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor. Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution.
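For a concrete illustration, a minimal POSIX threads program; on Linux, pthreads follow the one-to-one model, so each pthread_create() call is backed by a kernel thread (compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[3];
    int ids[3] = {0, 1, 2};

    for (int i = 0; i < 3; i++)               /* create three user threads */
        pthread_create(&tid[i], NULL, worker, &ids[i]);

    for (int i = 0; i < 3; i++)               /* wait for them to finish   */
        pthread_join(tid[i], NULL);

    return 0;
}
```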
Lecture 11
Scheduling Queue: A scheduling queue is generally stored as a linked list. A ready-queue header
contains pointers to the first and final PCBs in the list. Each PCB includes a pointer field
that points to the next PCB in the ready queue.
Job queue – set of all processes in the system
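A minimal sketch of a ready queue kept as a linked list of PCBs with head and tail pointers, as described above; the simplified pcb structure is illustrative only:

```c
/* Illustrative ready queue: singly linked list of PCBs with head/tail. */
struct pcb { int pid; struct pcb *next; };   /* simplified PCB */

struct ready_queue {
    struct pcb *head;   /* first PCB: next to be dispatched   */
    struct pcb *tail;   /* last PCB: new arrivals linked here */
};

/* Link a PCB onto the tail of the ready queue. */
static void rq_enqueue(struct ready_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

/* Remove and return the PCB at the head (NULL if the queue is empty). */
static struct pcb *rq_dequeue(struct ready_queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}
```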
Schedulers: A process migrates among the various scheduling queues throughout its
lifetime. The operating system must select, for scheduling purposes, processes from
these queues in some fashion. The selection process is carried out by the
appropriate scheduler. Schedulers are of two types:
1. Long Term Scheduler:
A. Selects which processes should be brought into the ready queue.
B. The L.T.S. is invoked very infrequently (seconds, minutes).
C. The L.T.S. controls the degree of multiprogramming (the number of processes in memory).
2. Short Term Scheduler:
A. Selects which process should be executed next and allocates the CPU.
B. The S.T.S. is invoked very frequently (milliseconds).
Medium-Term Scheduler:
Some operating systems, such as time-sharing systems, may introduce an
additional, intermediate level of scheduling. The key idea behind the medium-term scheduler is that
sometimes it can be advantageous to remove processes from memory (and from
active contention for the CPU) and thus reduce the degree of multiprogramming.
Later, the process can be reintroduced into memory, and its execution can be
continued where it left off. This scheme is called swapping.
Dispatcher:
Dispatcher is the module that gives control of the CPU to the process selected by
the short-term scheduler. This function involves:
Switching Context
o When CPU switches to another process, the system must save the
state of the old process and load the saved state for the new process.
o Context-switch time is overhead; the system does no useful work while
switching.
o Time dependent on hardware support.
Switching to user mode
Jumping to the proper location in the user program to restart that program.
The dispatcher should be as fast as possible, given that it is invoked during every
process switch. The time it takes for the dispatcher to stop one process and start
another running is known as dispatch latency.
Q. Write a short note on Preemptive Scheduling & Non-Preemptive Scheduling:
Non-Preemptive Scheduling: Once a process enters the running state, the CPU is not taken
away from it until it finishes its CPU burst (or voluntarily switches to the waiting state).
Preemptive Scheduling: The CPU can be taken away from the running process before it
finishes, for example when a higher-priority process arrives or the time quantum expires.
Throughput: the number of processes that are completed per unit time.
Optimization Criteria:
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
Scheduling Algorithms:
First-Come, First-Served (FCFS) Scheduling:
With this scheme, the process that requests the CPU first is allocated the CPU first.
The implementation of the FCFS policy is easily managed with a FIFO queue. When
a process enters the ready queue, its PCB is linked onto the tail of the queue. When
the CPU is free, it is allocated to the process at the head of the queue. The running
process is then removed from the queue.
Example:
Processes P1, P2, P3, P4, P5 have arrival times of 0, 2, 3, 5, 8 microseconds and processing
times of 3, 3, 1, 4, 2 microseconds. Draw the Gantt chart and calculate the average turnaround
time, average waiting time, CPU utilization, and throughput using FCFS.
Process   Arrival Time   Processing Time   T.A.T. = Completion - Arrival   W.T. = T.A.T. - Processing
P1        0              3                 3-0 = 3                         3-3 = 0
P2        2              3                 6-2 = 4                         4-3 = 1
P3        3              1                 7-3 = 4                         4-1 = 3
P4        5              4                 11-5 = 6                        6-4 = 2
P5        8              2                 13-8 = 5                        5-2 = 3
GANTT CHART:
P1 P2 P3 P4 P5
0 3 6 7 11 13
Average T.A.T. = (3+4+4+6+5)/5 = 22/5 = 4.4 microseconds
Average W.T. = (0+1+3+2+3)/5 = 9/5 = 1.8 microseconds
CPU Utilization = (13/13)*100 = 100%
Throughput = 5/13 ≈ 0.38 processes per microsecond
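The same FCFS calculation can be reproduced with a short C sketch; the arrival and processing times are those of the example above:

```c
#include <stdio.h>

int main(void)
{
    /* FCFS example: arrival and processing times in microseconds. */
    int arrival[] = {0, 2, 3, 5, 8};
    int burst[]   = {3, 3, 1, 4, 2};
    int n = 5, time = 0;
    double tat_sum = 0, wt_sum = 0;

    for (int i = 0; i < n; i++) {            /* processes already in arrival order */
        if (time < arrival[i]) time = arrival[i];   /* CPU idles until arrival     */
        time += burst[i];                    /* completion time of process i       */
        int tat = time - arrival[i];         /* turnaround = completion - arrival  */
        int wt  = tat - burst[i];            /* waiting = turnaround - burst       */
        tat_sum += tat;  wt_sum += wt;
        printf("P%d: TAT=%d WT=%d\n", i + 1, tat, wt);
    }
    printf("Avg TAT=%.2f  Avg WT=%.2f  Throughput=%.2f proc/us\n",
           tat_sum / n, wt_sum / n, (double)n / time);
    return 0;
}
```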
Lecture 12
Shortest-Job-First (SJF) Scheduling:
Associate with each process the length of its next CPU burst. Use
these lengths to schedule the process with the shortest time.
Two schemes:
i. Non-preemptive – once the CPU is given to the process, it cannot be
preempted until it completes its CPU burst.
ii. Preemptive – if a new process arrives with a CPU burst length less
than the remaining time of the currently executing process, preempt. This
scheme is known as Shortest-Remaining-Time-First (SRTF).
SJF is optimal – it gives the minimum average waiting time for a given set of
processes.
Example:
Processes P1, P2, P3, P4 have burst times of 6, 8, 7, 3 microseconds. Draw the Gantt chart
and calculate the average turnaround time, average waiting time, CPU utilization, and
throughput using SJF.
Process   Burst Time   T.A.T. = Completion - Arrival   W.T. = T.A.T. - Burst
P4        3            3-0 = 3                         3-3 = 0
P1        6            9-0 = 9                         9-6 = 3
P3        7            16-0 = 16                       16-7 = 9
P2        8            24-0 = 24                       24-8 = 16
GANTT CHART
P4 P1 P3 P2
0 3 9 16 24
Average T.A.T. = (3+9+16+24)/4 = 52/4 = 13 microseconds
Average W.T. = (0+3+9+16)/4 = 28/4 = 7 microseconds
CPU Utilization = (24/24)*100 = 100%
Throughput = 4/24 ≈ 0.17 processes per microsecond
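A short C sketch of this non-preemptive SJF example; since all processes arrive at time 0, the scheduler simply sorts the bursts in ascending order:

```c
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;     /* ascending burst time */
}

int main(void)
{
    /* SJF example: all processes arrive at time 0. */
    int burst[] = {6, 8, 7, 3};                   /* P1..P4               */
    int n = 4, time = 0;
    double tat_sum = 0, wt_sum = 0;

    qsort(burst, n, sizeof burst[0], cmp);        /* shortest job first   */

    for (int i = 0; i < n; i++) {
        time += burst[i];                         /* completion time      */
        tat_sum += time;                          /* TAT = completion - 0 */
        wt_sum  += time - burst[i];               /* WT  = TAT - burst    */
    }
    printf("Avg TAT=%.2f  Avg WT=%.2f\n", tat_sum / n, wt_sum / n);
    return 0;
}
```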
Example: Non-Preemptive SJF
Lecture 13
Priority Scheduling:
GANTT CHART:
P2 P5 P1 P3 P4
0 1 6 16 18 19
Lecture 14
Round-Robin Scheduling:
Each process gets a small unit of CPU time (time quantum), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and added
to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then
each process gets 1/n of the CPU time in chunks of at most q time units at
once. No process waits more than (n-1)q time units.
Used for time sharing & multiuser O.S.
FCFS with preemptive scheduling.
Example:
Processes P1, P2, P3 have processing times of 24, 3, 3 milliseconds.
Draw the Gantt chart and calculate the average turnaround time, average
waiting time, CPU utilization, and throughput using Round Robin with a
time slice of 4 milliseconds.
Process   Processing Time   T.A.T. = Completion - Arrival   W.T. = T.A.T. - Processing
P1        24                30-0 = 30                       30-24 = 6
P2        3                 7-0 = 7                         7-3 = 4
P3        3                 10-0 = 10                       10-3 = 7
GANTT CHART
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
Average T.A.T. = (30+7+10)/3 = 47/3 = 15.67 milliseconds
Average W.T. = (6+4+7)/3 = 17/3 = 5.67 milliseconds
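A minimal C sketch of this Round-Robin example; instead of a real ready queue it scans the processes cyclically, which produces the same Gantt chart because all three processes arrive at time 0:

```c
#include <stdio.h>

int main(void)
{
    /* RR example: processing times in ms, all arrive at time 0. */
    int burst[]  = {24, 3, 3};
    int remain[] = {24, 3, 3};
    int n = 3, quantum = 4, time = 0, done = 0;
    int finish[3] = {0};

    while (done < n) {                       /* cycle through the processes    */
        for (int i = 0; i < n; i++) {
            if (remain[i] == 0) continue;
            int slice = remain[i] < quantum ? remain[i] : quantum;
            time      += slice;              /* run for one quantum (or less)  */
            remain[i] -= slice;
            if (remain[i] == 0) {            /* process finished this slice    */
                finish[i] = time;
                done++;
            }
        }
    }
    for (int i = 0; i < n; i++)
        printf("P%d: TAT=%d WT=%d\n", i + 1, finish[i], finish[i] - burst[i]);
    return 0;
}
```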
D. Multilevel Feedback Queue Scheduling
A process can move between the various queues; aging can be implemented this
way
Multilevel-feedback-queue scheduler defined by the following parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will enter
when that process needs service
Example of Multilevel Feedback Queue
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0 which is served FCFS. When it gains
CPU, job receives 8 milliseconds. If it does not finish in 8
milliseconds, job is moved to queue Q1.
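A compact sketch of this three-queue example for a single job in isolation: the job is demoted from Q0 to Q1 to Q2 whenever it does not finish within the current queue's quantum (the structures and values are illustrative only):

```c
#include <stdio.h>

/* Illustrative multilevel feedback queue parameters from the example:
 * Q0 = RR, quantum 8 ms; Q1 = RR, quantum 16 ms; Q2 = FCFS (no quantum). */
static const int quantum[3] = {8, 16, 0};

/* Run one job of 'burst' ms through the three levels, printing how long
 * it executes at each level before being demoted or finishing. */
static void run_job(int burst)
{
    for (int level = 0; level < 3 && burst > 0; level++) {
        int slice = (quantum[level] == 0 || burst < quantum[level])
                        ? burst : quantum[level];
        printf("Q%d: run %d ms\n", level, slice);
        burst -= slice;
        if (burst > 0 && level < 2)
            printf("Q%d: not finished, demote to Q%d\n", level, level + 1);
    }
}

int main(void)
{
    run_job(30);     /* needs 8 ms in Q0, 16 ms in Q1, last 6 ms in Q2 */
    return 0;
}
```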
E. Multiple Processor Scheduling
CPU scheduling more complex when multiple CPUs are available
Homogeneous processors within a multiprocessor
Load sharing
Asymmetric multiprocessing – only one processor accesses the system data
structures
Problem 2: Consider the set of processes A, B, C, D, E having arrival times 0, 2, 3, 3.5, 4
and execution times 4, 7, 3, 3, 5, and the following scheduling algorithms:
a. FCFS
b. Round Robin (quantum=2)
c. Round Robin (quantum=1)
If there is a tie between processes, the tie is broken in favour of the oldest process.
i) Draw the Gantt chart and find the average waiting time and response time for each
algorithm. Comment on your result: which one is better and why?
ii) If the scheduler takes 0.2 units of CPU time for a context switch for a completed
job and 0.1 units of additional CPU time for incomplete jobs for saving their context,
calculate the percentage of CPU time wasted in each case.
Problem 3: Processes A,B,C,D,E having arrival time 0,0,1,2,2 and execution time
10,2,3,1,4 and priority 3,1,3,5,2. Draw the Gantt Chart and find average waiting time
and response time of the process set.
Problem 4: Process p1,p2,p3 having burst time 7,3,9 and priority 1,2,3 and
arrival time 0,4,7.
Calculate turn around time and average waiting time using
i) SJF
ii) priority. (both preemptive)
Problem 5: Processes P1, P2, P3, P4 having arrival times 0, 1, 2, 3 and burst times 8, 4, 9, 5.
Calculate the turnaround time and waiting time using SJF and FCFS.
Lecture 15
Deadlock: A set of blocked processes each holding a resource and waiting to acquire a
resource held by another process in the set.
Deadlock Problem: Bridge Crossing Example
System Model:
A system consists of a finite number of resources to be distributed among a
number of competing processes. The resources are partitioned into several types,
each consisting of some number of identical instances. Resource types include
memory space, CPU cycles, files, and I/O devices (such as printers and DVD drives).
If a system has two CPUs, then the resource type CPU has two instances. Similarly,
the resource type printer may have five instances.
Resource types R1, R2, . . ., Rm ( CPU cycles, memory space, I/O devices)
Each resource type Ri has Wi instances.
Each process utilizes a resource as follows:
i. Request: If the request cannot be granted immediately (for
example, if the resource is being used by another process), then the
requesting process must wait until it can acquire the resource.
ii. Use: The process can operate on the resource (for example, if the
resource is a printer, the process can print on the printer).
iii. Release: The process releases the resource.
Resource-Allocation Graph:
Deadlocks can be described more precisely in terms of a directed graph called a
system resource-allocation graph. A set of vertices V and a set of edges E.
V is partitioned into two types:
i. P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
ii. R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
Request edge – a directed edge Pi → Rj.
Assignment edge – a directed edge Rj → Pi.
Deadlock
Lecture 16
Preempted resources are added to the list of resources for which the
process is waiting.
Process will be restarted only when it can regain its old resources, as
well as the new ones that it is requesting.
iv. Circular Wait – impose a total ordering of all resource types, and require that
each process requests resources in an increasing order of enumeration.
Lecture 17
Deadlock Avoidance: Requires that the system has some additional a priori information
available.
Simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need.
A. Safe State:
When a process requests an available resource, system must decide
if immediate allocation leaves the system in a safe state.
A system is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the
processes in the system such that, for each Pi, the resources that Pi can still
request can be satisfied by the currently available resources plus the resources
held by all the Pj, with j < i.
That is:
i. If Pi's resource needs are not immediately available, then Pi can wait until
all Pj have finished.
ii. When Pj is finished, Pi can obtain the needed resources, execute, return the
allocated resources, and terminate.
iii. When Pi terminates, Pi+1 can obtain its needed resources, and so on.
If a system is in a safe state → no deadlocks.
If a system is in an unsafe state → possibility of deadlock.
Avoidance: ensure that the system will never enter an unsafe state.
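The Banker's algorithm itself is covered separately; the sketch below shows only the safety check behind the safe-state definition above, looking for an order in which every process's remaining need can be met from the available resources plus what earlier processes release. The Avail, Alloc, and Need values are made up for illustration:

```c
#include <stdbool.h>
#include <stdio.h>

#define N 3   /* processes      */
#define M 2   /* resource types */

/* Return true if some safe sequence <P1, ..., Pn> exists. */
static bool is_safe(int avail[M], int alloc[N][M], int need[N][M])
{
    int work[M];
    bool finished[N] = {false};

    for (int j = 0; j < M; j++) work[j] = avail[j];

    for (int round = 0; round < N; round++) {
        bool progress = false;
        for (int i = 0; i < N; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)          /* Need[i] <= Work ?        */
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < M; j++)      /* Pi runs, then releases   */
                    work[j] += alloc[i][j];      /* its allocation           */
                finished[i] = true;
                progress = true;
            }
        }
        if (!progress) break;                    /* nobody can proceed       */
    }
    for (int i = 0; i < N; i++)
        if (!finished[i]) return false;
    return true;
}

int main(void)
{
    /* Hypothetical snapshot, not taken from the lecture examples. */
    int avail[M]    = {3, 3};
    int alloc[N][M] = {{0, 1}, {2, 0}, {3, 0}};
    int need[N][M]  = {{2, 2}, {1, 2}, {2, 3}};
    printf("safe = %s\n", is_safe(avail, alloc, need) ? "yes" : "no");
    return 0;
}
```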
B. Avoidance Algorithms:
i. Single instance of a resource type: use a resource-allocation graph.
ii. Multiple instances of a resource type: use the Banker's algorithm.
Resource-Allocation Graph Algorithm
Suppose that process Pi requests a resource Rj. The request can be granted only if
converting the request edge to an assignment edge does not result in the formation
of a cycle in the resource-allocation graph.
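A small sketch of the cycle check this algorithm relies on: processes and resources are numbered vertices, request and assignment edges are directed edges, and a DFS looks for a back edge; the example graph is hypothetical:

```c
#include <stdbool.h>
#include <stdio.h>

#define V 4   /* vertices: P0, P1 and resources R0, R1 (numbered 2 and 3) */

/* DFS helper: returns true if a cycle is reachable from vertex u. */
static bool dfs(int u, bool adj[V][V], int color[V])
{
    color[u] = 1;                               /* grey: on current path  */
    for (int v = 0; v < V; v++) {
        if (!adj[u][v]) continue;
        if (color[v] == 1) return true;         /* back edge: cycle found */
        if (color[v] == 0 && dfs(v, adj, color)) return true;
    }
    color[u] = 2;                               /* black: fully explored  */
    return false;
}

static bool has_cycle(bool adj[V][V])
{
    int color[V] = {0};
    for (int u = 0; u < V; u++)
        if (color[u] == 0 && dfs(u, adj, color)) return true;
    return false;
}

int main(void)
{
    /* Example graph: P0 -> R0 -> P1 -> R1 -> P0 (a cycle, so deny). */
    bool adj[V][V] = {false};
    adj[0][2] = true;   /* request edge    P0 -> R0 */
    adj[2][1] = true;   /* assignment edge R0 -> P1 */
    adj[1][3] = true;   /* request edge    P1 -> R1 */
    adj[3][0] = true;   /* assignment edge R1 -> P0 */
    printf("cycle = %s\n", has_cycle(adj) ? "yes (do not grant)" : "no");
    return 0;
}
```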
Banker’s Algorithm
Discussed in separate ppt file.
Deadlock Detection
In this environment, the system must provide:
• An algorithm that examines the state of the system to determine whether a
deadlock has occurred
• An algorithm to recover from the deadlock.
A. Process Termination:
B. Resource Preemption:
IMPORTANT QUESTIONS
1 Explain threads
2 What do you understand by Process? Explain various states of process with suitable diagram. Explain process
control block.
3 What is a deadlock? Discuss the necessary conditions for deadlock with examples
4 Describe Banker’s algorithm for safe allocation.
5 What are the various scheduling criteria for CPU scheduling?
6 What is the use of inter-process communication and context switching?
7 Discuss the usage of wait-for graph method
8 Consider the following snapshot of a system:

Process   R1 R2 R3   R1 R2 R3   R1 R2 R3
P1        2  2  3    3  6  8    7  7  10
P2        2  0  3    4  3  3
P3        1  2  4    3  4  4

Process   Arrival Time   Burst Time
P2        1              4
P3        2              9
P4        3              5

What is the average waiting and turnaround time for these processes with:
FCFS Scheduling
Preemptive SJF Scheduling
15 Consider the following processes:

Process   Arrival Time   Burst Time
P2        1              4
P3        2              9
P4        3              5

Draw the Gantt chart and find the average waiting time and average turnaround time.

Process   Arrival Time   Burst Time   Priority
P1        0              6            3
P2        1              4            1
P3        2              5            2
P4        3              8            4

Draw the Gantt chart and find the average waiting time and average turnaround time using:
(i) SRTF Scheduling
(ii) Round Robin (time quantum = 3)
16 What is the need for Process Control Block (PCB)?
17 Draw process state transition diagram
18 Define multilevel feedback queue scheduling.
19 Discuss the performance criteria for CPU Scheduling.