Unit 2 Process and Process Scheduling Students OK
Process
• A process is basically a program in execution. The execution of a process must progress
in a sequential fashion.
• For example, we write our computer programs in a text file and when we execute this
program, it becomes a process which performs all the tasks mentioned in the program.
• When a program is loaded into the memory and it becomes a process, it can be
divided into four sections ─ stack, heap, text and data.
• The following table describes the four sections of a process inside main memory:
1. Stack — The process stack contains temporary data such as function parameters, return addresses, and local variables.
2. Heap — This is memory dynamically allocated to the process during its run time.
3. Text — This section contains the compiled program code; the current activity is represented by the value of the Program Counter and the contents of the processor's registers.
4. Data — This section contains the global and static variables.
Program
• A program is a piece of code which may be a single line or millions of lines.
• A computer program is usually written by a computer programmer in a programming
language.
• For example, here is a simple program written in C programming language −
#include <stdio.h>

int main() {
    printf("Hello, World! \n");
    return 0;
}
Process Life Cycle
When a process executes, it passes through different states:
1. Start — This is the initial state when a process is first started/created.
2. Ready — The process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate the processor to them so that they can run. A process may come into this state after the Start state.
3. Running — Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
4. Waiting — The process moves into the waiting state if it needs to wait for a resource, such as user input, or for a file to become available.
5. Terminated or Exit — Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.
Process Control Block (PCB)
A Process Control Block is a data structure maintained by the operating system for every process, holding all the information needed to keep track of that process:
1. Process State — The current state of the process, i.e., whether it is ready, running, waiting, etc.
2. Process Privileges — Required to allow or disallow access to system resources.
3. Process ID — Unique identification for each process in the operating system.
4. Pointer — A pointer to the parent process.
5. Program Counter — A pointer to the address of the next instruction to be executed for this process.
6. CPU Registers — The various CPU registers whose contents must be saved for the process when it leaves the running state, so that it can resume execution later.
7. CPU Scheduling Information — Process priority and other scheduling information required to schedule the process.
8. Memory Management Information — Page tables, memory limits, and segment tables, depending on the memory system used by the operating system.
9. Accounting Information — The amount of CPU time used for process execution, time limits, etc.
10. I/O Status Information — A list of I/O devices allocated to the process.
The structure of a PCB depends entirely on the operating system and may contain different information in different operating systems.
The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
Introduction: CPU scheduling
CPU scheduling is the mechanism that allows one process to use the CPU while another process's execution is on hold (in the waiting state) due to the unavailability of a resource such as I/O, thereby making full use of the CPU.
Major Objective
• The aim of CPU scheduling is to make the system efficient, fast, and fair.
Scheduling Levels
Scheduling determines the priority of the various requests made to a computer system, including the order in which requests are serviced and which types of requests take priority for CPU time and bandwidth.
• Tasks or requests are prioritized and scheduled based on the maximum amount of work the system can handle at once.
• High level scheduling is also sometimes called "long-term scheduling."
• Low level scheduling determines which tasks will be addressed and in what order. These
tasks have already been approved to be worked on, so low level scheduling is more
detail oriented than other levels.
CPU Scheduling: Scheduling Criteria
There are several criteria to consider when judging which scheduling algorithm is "best":
CPU Utilization
• To make the best use of the CPU and not waste any CPU cycles, the CPU should be kept busy most of the time (ideally 100% of the time).
• In a real system, CPU usage should range from about 40% (lightly loaded) to 90% (heavily loaded).
Throughput
• The total number of processes completed per unit of time, or in other words, the total amount of work done in a unit of time.
Turnaround Time
• The amount of time taken from the submission of a process to its completion, including all time spent waiting, executing, and doing I/O.
Waiting Time
• The amount of time a process spends waiting in the ready queue before it gets control of the CPU.
Response Time
• Amount of time it takes from when a request was submitted until the first response
is produced.
• Remember, it is the time till the first response and not the completion of process
execution (final response).
Note:
In general, CPU utilization and throughput are maximized, while turnaround, waiting, and response times are minimized.
Types of Scheduling:
• Preemptive: Round Robin Scheduling (RR), Shortest Remaining Time First (SRTF), Priority (preemptive version) Scheduling, etc.
• Non-preemptive: First Come First Served (FCFS), Shortest Job First (SJF, basically non-preemptive) Scheduling, and Priority (non-preemptive version) Scheduling, etc.
Thread in OS:
• A process can be divided into several lightweight processes; each lightweight process is called a thread.
• Each thread has its own program counter that keeps track of which instruction to execute next.
• It also has its own stack, which contains the thread's execution history.
• In many respects, threads operate in the same manner as processes.
Example of Thread:
Word Processor:
A user opens a file in a word processor and types text. Handling the typed input is one thread; automatically formatting the text is another thread. Checking the text for spelling mistakes is another thread, and automatically saving the file to disk is yet another thread.
Life Cycle of Thread:
A thread passes through states much like a process: new, runnable (ready), running, blocked/waiting, and terminated.
Scheduling Techniques
• Priority Scheduling
• Deadline Scheduling
• First-In-First-Out Scheduling
• Round Robin Scheduling
• Shortest-Job-First (SJF) Scheduling
• Shortest-Remaining-Time (SRT) Scheduling
• Highest-Response-Ratio-Next (HRN) Scheduling
• Multilevel Feedback Queues
CPU Burst:
• The amount of time a process uses the CPU before it next blocks for I/O or terminates.
First Come First Served (FCFS) Scheduling
In this algorithm, the CPU is allocated to the process that requests it first, i.e., the algorithm is known as first come, first served.
When a process enters the ready queue, its Process Control Block (PCB) is added to the end of the queue.
When the CPU becomes free after a process finishes execution, the process at the head of the queue is removed from the queue and enters the running state.
Let us discuss an example: processes P1, P2, P3, and P4 all arrive at time 0, with the following CPU burst times (in ms):
Process Burst Time
P1 20
P2 4
P3 6
P4 4
Waiting time (order P1, P2, P3, P4): P1 = 0, P2 = 20, P3 = 24, P4 = 30 ms; average waiting time = (0 + 20 + 24 + 30) / 4 = 18.5 ms.
If we change the execution order of the processes to P4, P3, P2, P1, the Gantt chart gives the following result.
Waiting time (order P4, P3, P2, P1): P4 = 0, P3 = 4, P2 = 10, P1 = 14 ms; average waiting time = (0 + 4 + 10 + 14) / 4 = 7 ms.
The FCFS scheduling algorithm suffers from the convoy effect. This effect results in lower CPU utilization and longer waiting times.
• In the list of processes, if one long process gets the CPU first, all the short processes must wait behind it. As a result, the average waiting time is higher in Example 1.
• In Example 2, where the short processes run first, the average waiting time is lower.
• Once the CPU has been allocated to a process, that process keeps the CPU until it terminates or issues an I/O request; FCFS is non-preemptive.
What is the Convoy Effect?
In FCFS scheduling, when the burst time of the job at the front of the ready queue is much higher than those of the other jobs, the situation is referred to as the convoy effect.
• If the CPU is currently executing a process with a high burst time, the processes with lower burst times behind it in the ready queue are blocked by the currently running process; if the job in execution has a very high burst time, they may wait a very long time for the CPU. This is known as the convoy effect, and in the extreme the short processes are effectively starved of the CPU.
Hence, in the convoy effect, one slow process slows down the performance of the entire set of processes and wastes CPU time and other device capacity.
Shortest Job First (SJF) Scheduling
Consider the processes below, available in the ready queue for execution, with arrival time 0 for all and the given burst times.
• Process P4 is picked first as it has the shortest burst time, then P2, followed by P3, and P1 last.
• If we schedule the same set of processes using the First Come First Served algorithm, the average waiting time is 18.75 ms,
• whereas with SJF, the average waiting time comes out to 4.5 ms.
Non-preemptive SJF
Problem: processes P1 to P4 all arrive at time 0 with the following burst times (in ms):
Process Burst Time
P1 20
P2 4
P3 6
P4 4
Mode: Non-preemptive
Waiting time (execution order P2, P4, P3, P1, ties on burst time broken by process number): P2 = 0, P4 = 4, P3 = 8, P1 = 14 ms; average waiting time = (0 + 4 + 8 + 14) / 4 = 6.5 ms.
The SJF algorithm is used in long-term scheduling. We cannot determine the exact length of the next CPU burst in short-term scheduling, although it can be estimated from the lengths of previous bursts.
Pre-emptive Shortest Job First
• In Preemptive Shortest Job First Scheduling, jobs are put into the ready queue as they arrive, but when a process with a shorter burst time arrives, the currently executing process is preempted (removed from execution), and the shorter job is executed first.
• The average waiting time for preemptive shortest job first scheduling is less than that of both non-preemptive SJF scheduling and FCFS scheduling.
Process Burst Time Arrival Time
P1 10 0
P2 4 2
P3 6 4
P4 2 5
Mode: Preemptive
Formula
Turnaround Time = Completion Time − Arrival Time
Waiting Time = Turnaround Time − Burst Time
Advantages of SJF
1. Maximum throughput
2. Minimum average waiting and turnaround time
Disadvantages of SJF
1. Long processes may starve if shorter processes keep arriving.
2. The length of the next CPU burst cannot be known in advance and must be estimated.