
Unit 2 - Process and Process Scheduling

1.1 Introduce Process, Program and process life cycle.
1.2 Describe Process Control Block.
1.3 Explain Process state.
1.4 Introduce process scheduling.
1.5 Explain Process scheduling queues and types of process schedulers (short-term scheduler, medium-term scheduler and long-term scheduler).
1.6 Illustrate the concept of Preemptive and Non-Preemptive Scheduling.
1.7 Illustrate the concept of thread and its life cycle.
1.8 Describe Algorithms: FCFS / SJF / SRT (Shortest Remaining Time).

Introduce Process, Program and process life cycle.

Process
• A process is basically a program in execution. The execution of a process must progress
in a sequential fashion.
• For example, we write our computer programs in a text file and when we execute this
program, it becomes a process which performs all the tasks mentioned in the program.
• When a program is loaded into the memory and it becomes a process, it can be
divided into four sections ─ stack, heap, text and data.
• The following table describes a simplified layout of a process inside main memory; a short C sketch after the table shows which section each kind of variable typically occupies −
S.N. Component & Description

1
Stack
The process Stack contains the temporary data such as method/function
parameters, return address and local variables.

2
Heap
This is dynamically allocated memory to a process during its run time.

3
Text
This contains the compiled program code (the executable instructions). The current activity is represented by the value of the Program Counter and the contents of the processor's registers.

4
Data
This section contains the global and static variables.
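For example, here is a small C sketch (not part of the original notes; the variable names are only illustrative) showing which section each kind of variable typically occupies −

#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;       /* Data section: global variable               */
static int static_total = 10; /* Data section: static variable               */

int add(int a, int b) {       /* The compiled code lives in the Text section */
    int sum = a + b;          /* Stack: parameters and local variables       */
    return sum;
}

int main() {
    int local_value = 5;                   /* Stack: local variable           */
    int *buffer = malloc(4 * sizeof(int)); /* Heap: allocated at run time     */
    if (buffer == NULL)
        return 1;

    buffer[0] = add(local_value, global_counter + static_total);
    printf("%d\n", buffer[0]);

    free(buffer);                          /* Release the heap memory         */
    return 0;
}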
Program
• A program is a piece of code which may be a single line or millions of lines.
• A computer program is usually written by a computer programmer in a programming
language.
• For example, here is a simple program written in C programming language −
#include <stdio.h>

int main() {
    printf("Hello, World! \n");
    return 0;
}

• A computer program is a collection of instructions that performs a specific task when executed by a computer.
• When we compare a program with a process, we can conclude that a process is a dynamic instance of a computer program.

Process Life Cycle


When a process executes, it passes through different states. These stages may differ in different
operating systems, and the names of these states are also not standardized.
In general, a process can have one of the following five states at a time.

S.N. State & Description

1
Start
This is the initial state when a process is first started/created.

2
Ready
The process is waiting to be assigned to a processor. Ready processes are waiting
to have the processor allocated to them by the operating system so that they can
run. A process may come into this state after the Start state or after the Waiting state.

3
Running
Once the process has been assigned to a processor by the OS scheduler, the process
state is set to running and the processor executes its instructions.

4
Waiting
Process moves into the waiting state if it needs to wait for a resource, such as
waiting for user input, or waiting for a file to become available.

5
Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system,
it is moved to the terminated state where it waits to be removed from main
memory.
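As a quick illustration (a hypothetical sketch, not part of the original notes), the five states can be represented in C as a simple enumeration, and a typical path through the life cycle printed out −

#include <stdio.h>

enum proc_state { STATE_START, STATE_READY, STATE_RUNNING, STATE_WAITING, STATE_TERMINATED };

static const char *state_name(enum proc_state s) {
    switch (s) {
    case STATE_START:      return "Start";
    case STATE_READY:      return "Ready";
    case STATE_RUNNING:    return "Running";
    case STATE_WAITING:    return "Waiting";
    case STATE_TERMINATED: return "Terminated";
    }
    return "Unknown";
}

int main() {
    /* One typical path: Start -> Ready -> Running -> Waiting -> Ready -> Running -> Terminated */
    enum proc_state path[] = { STATE_START, STATE_READY, STATE_RUNNING, STATE_WAITING,
                               STATE_READY, STATE_RUNNING, STATE_TERMINATED };
    for (int i = 0; i < 7; i++)
        printf("%s\n", state_name(path[i]));
    return 0;
}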

Process Control Block (PCB)


A Process Control Block is a data structure maintained by the operating system for every
process. A PCB keeps all the information needed to keep track of a process, as listed below in
the table −

S.N. Information & Description

1
Process State
The current state of the process i.e., whether it is ready, running, waiting, or
whatever.

2
Process privileges
This is required to allow/disallow access to system resources.

3
Process ID
Unique identification for each process in the operating system.
4
Pointer
A pointer to the parent process.

5
Program Counter
Program Counter is a pointer to the address of the next instruction to be executed
for this process.

6
CPU registers
The contents of the various CPU registers, which must be saved when the process leaves
the running state so that execution can resume correctly.

7
CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the
process.

8
Memory management information
This includes information such as the page table, memory limits, and segment table,
depending on the memory management scheme used by the operating system.

9
Accounting information
This includes the amount of CPU time used for process execution, time limits, execution
ID, etc.

10
IO status information
This includes a list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and may contain
different information in different operating systems.
The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
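To make the table concrete, here is a hypothetical, simplified PCB written as a C structure (the field names and sizes are illustrative assumptions; real kernels such as Linux keep far more information in their equivalent structure) −

#include <stdio.h>

enum proc_state { STATE_START, STATE_READY, STATE_RUNNING, STATE_WAITING, STATE_TERMINATED };

struct pcb {
    int             pid;              /* Process ID                        */
    enum proc_state state;            /* Current process state             */
    int             priority;         /* CPU scheduling information        */
    struct pcb     *parent;           /* Pointer to the parent process     */
    unsigned long   program_counter;  /* Address of the next instruction   */
    unsigned long   registers[16];    /* Saved CPU register contents       */
    unsigned long   page_table_base;  /* Memory management information     */
    unsigned long   cpu_time_used;    /* Accounting information            */
    int             open_devices[8];  /* I/O status information            */
};

int main() {
    struct pcb p = { .pid = 42, .state = STATE_READY, .priority = 5 };
    printf("pid=%d state=%d priority=%d\n", p.pid, p.state, p.priority);
    return 0;
}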
Introduction: CPU scheduling
CPU scheduling is the activity of allowing one process to use the CPU while the execution of
another process is on hold (in the waiting state) due to the unavailability of a resource such as
I/O, thereby making full use of the CPU.

Major Objective

• The aim of CPU scheduling is to make the system efficient, fast, and fair.

Scheduling Levels

Scheduling determines the priority of the various requests made to a computer system,
including the order in which requests are addressed and which types of requests take priority
for CPU time and bandwidth.

High Level Scheduling

• Tasks or requests are admitted and scheduled based on the maximum amount of work
the system can handle at once.
• High level scheduling is also sometimes called "long-term scheduling."

Low Level Scheduling

• Low level scheduling determines which tasks will be addressed and in what order. These
tasks have already been approved to be worked on, so low level scheduling is more
detail oriented than other levels.
CPU Scheduling: Scheduling Criteria

There are many different criteria to consider when choosing the "best" scheduling
algorithm. They are:

CPU Utilization

• To make the best use of the CPU and not waste any CPU cycles, the CPU should be
kept busy most of the time (ideally 100% of the time).
• In a real system, CPU utilization should range from about 40% (lightly loaded) to
90% (heavily loaded).

Throughput

• It is the total number of processes completed per unit of time, or in other words, the total
amount of work done in a unit of time.

Turnaround Time

• It is the amount of time taken to execute a particular process, from its submission to its completion.

Waiting Time

• The amount of time a process has been waiting in the ready queue to get
control of the CPU.

Response Time

• Amount of time it takes from when a request was submitted until the first response
is produced.
• Remember, it is the time till the first response and not the completion of process
execution (final response).

Note:
In general, CPU utilization and throughput are maximized, while turnaround time, waiting
time, and response time are minimized.

Types of Scheduling:

Scheduling algorithms can be categorized in two ways: preemptive and non-preemptive. In preemptive scheduling, the CPU can be taken away from a running process; in non-preemptive scheduling, a process keeps the CPU until it terminates or requests I/O.


Some Algorithms that are based on preemptive scheduling are:

• Round Robin Scheduling (RR), Shortest Remaining Time First (SRTF), Priority
(preemptive version) Scheduling, etc

Some Algorithms based on non-preemptive scheduling are:

• Shortest Job First (SJF basically non-preemptive) Scheduling and Priority (non-
preemptive version) Scheduling, etc
Thread in OS:
• A process can be divided into several light-weight processes; each light-weight process
is called a thread.
• A thread has a program counter that keeps track of which instruction to
execute next.
• It has a stack which contains its execution history.
• Threads operate, in many respects, in the same manner as processes.

Example of Thread:
Word Processor:
A user types text in a word processor. The user opens a file and types the text (this is one
thread); the text is automatically formatted (another thread); spelling mistakes are
automatically highlighted (another thread); and the file is automatically saved to disk
(another thread).
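Here is a minimal POSIX threads (pthreads) sketch of this idea; the helper functions autosave and spell_check are hypothetical stand-ins for the word-processor tasks described above. Compile with the -lpthread option.

#include <pthread.h>
#include <stdio.h>

void *autosave(void *arg) {
    printf("autosave thread: saving the document...\n");
    return NULL;
}

void *spell_check(void *arg) {
    printf("spell-check thread: scanning the text...\n");
    return NULL;
}

int main() {
    pthread_t t1, t2;

    /* The main thread plays the role of the typing thread, while two
       helper threads run concurrently within the same process. */
    pthread_create(&t1, NULL, autosave, NULL);
    pthread_create(&t2, NULL, spell_check, NULL);

    printf("main thread: accepting keystrokes...\n");

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}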
Life Cycle of Thread:

1. Born State: A thread that has just been created.


2. Ready State: The thread is waiting for the processor (CPU).
3. Running: The system has assigned the processor to the thread, meaning the thread is
being executed.
4. Blocked State: The thread is waiting for an event to occur or waiting for an I/O
device.
5. Sleep: A sleeping thread becomes ready after the designated sleep time expires.
6. Dead: The execution of the thread is finished.

Scheduling Techniques
4.4.1. Priority Scheduling
4.4.2. Deadline Scheduling
4.4.3. First-In-First-Out Scheduling
4.4.4. Round Robin Scheduling
4.4.5. Shortest-Job-First (SJF) Scheduling
4.4.6. Shortest-Remaining-Time (SRT) Scheduling
4.4.7. Highest-Response-Ratio-Next (HRN) Scheduling
4.4.8. Multilevel Feedback Queues

To understand these scheduling algorithms, let us discuss some terms.
CPU Utilization:

• Using the CPU for the maximum possible time so that it does not remain idle.

Throughput:

• Throughput is the number of processes completed per unit time.

Turnaround Time:

• Time interval from submission of a process to completion of a process.

Turnaround time = Completion time – Arrival Time

Response Time:

• The time taken from the submission of a request until the CPU produces the first response.

CPU Burst:

• The time for which a process needs the CPU before its next I/O request or its completion.


Types of Scheduling Algorithms
1) First Come First Served Scheduling (FCFS) :

In this algorithm, the process that requests the CPU first is allocated the CPU first, i.e., the
algorithm is known as first come, first served.

It is based on a FIFO (First In, First Out) queue data structure.

When a process enters the ready queue, its Process Control Block (PCB) is added to the
end of the queue.

When the CPU becomes free after a process finishes, the process at the head of the
queue is removed from the queue and enters the running state.

Let us discuss an example, some process p1, p2, p3, and p4 arrived at time 0, with the CPU
burst.

Process CPU Burst

P1 20

P2 4

P3 6

P4 4

The Gantt chart below shows the result:

| P1 | P2 | P3 | P4 |
0    20   24   30   34

Waiting time:

Waiting time for process P1=0, P2=20, P3=24 and P4=30.

Average Waiting Time = (0+20+24+30)/4 = 18.5


Example 2:

If we change the execution order of the processes to P4, P3, P2 and P1, the Gantt chart shows the result:

| P4 | P3 | P2 | P1 |
0    4    10   14   34

Waiting time:

Waiting time for process P1=14, P2=10, P3=4, and P4=0.

Now Average Waiting Time = (0+4+10+14)/4 = 7

The FCFS scheduling algorithm suffers from the convoy effect. This effect results
in lower CPU utilization and longer waiting times.

• In a list of processes, if one long process gets the CPU first, all the smaller processes
must wait behind it. As a result, the average waiting time is higher in Example 1.
• In Example 2, by contrast, the average waiting time is lower (the C sketch below reproduces both averages).

FCFS is a non-preemptive algorithm.

• Once the CPU has been allocated to a process, the process keeps the CPU until it
terminates or makes an I/O request.
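The following small C sketch (not from the original notes) reproduces the FCFS waiting-time calculations above for any execution order −

#include <stdio.h>

double fcfs_average_waiting_time(const int burst[], int n) {
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* process i waits for everything scheduled before it */
        elapsed    += burst[i];  /* then runs for its full burst */
    }
    return (double)total_wait / n;
}

int main() {
    int order1[] = {20, 4, 6, 4};   /* P1, P2, P3, P4 -> average 18.5 */
    int order2[] = {4, 6, 4, 20};   /* P4, P3, P2, P1 -> average 7.0  */
    printf("%.1f\n", fcfs_average_waiting_time(order1, 4));
    printf("%.1f\n", fcfs_average_waiting_time(order2, 4));
    return 0;
}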
What is the Convoy Effect?
In FCFS scheduling, if the burst time of the first job is the highest among all the jobs, the
resulting situation is referred to as the convoy effect.

• If the CPU is currently executing a process with a high burst time from the front of the
ready queue, then processes with lower burst times get blocked behind the currently
running process and may wait a very long time for the CPU. This is known as the
convoy effect, and in the extreme it can lead to starvation.


Hence, in the convoy effect, one slow process slows down the performance of the entire set of
processes and leads to wasted CPU time and idle devices.

How to avoid the Convoy Effect?

• To avoid the convoy effect, a preemptive scheduling algorithm such as Round Robin
can be used, since smaller processes do not have to wait as long for CPU time; this
makes their execution faster and leaves fewer resources sitting idle.
Shortest–Job–First Scheduling:
Shortest Job First scheduling works on the process with the shortest burst time or duration first.

• This is the best approach to minimize waiting time.


• This is used in Batch Systems.
• It is of two types:
1. Non Pre-emptive
2. Pre-emptive

Non Pre-emptive Shortest Job First

Consider the processes below, available in the ready queue for execution, with arrival time 0
for all and the given burst times.

• The processes with the shortest burst times (P2 and P4) are picked first, followed
by P3, and last P1.
• If we schedule the same set of processes using the First Come First Served algorithm,
the average waiting time is 18.5 ms (Example 1 above),
• whereas with SJF, the average waiting time comes out to 6.5 ms, as computed below.
Non-preemptive:
Problem

Process CPU Burst

P1 20

P2 4

P3 6

P4 4

Criteria: CPU Burst

Mode: Non-preemptive

The Gantt chart below shows the result:

| P2 | P4 | P3 | P1 |
0    4    8    14   34

Waiting time:

• Waiting time for process P1=14, P2=0, P3=8 and P4=4.
• Average Waiting Time = (14+0+8+4)/4 = 6.5

Process  BT  CT (completion)  WT
P1       20        34         14
P2        4         4          0
P3        6        14          8
P4        4         8          4

Average Waiting Time = (14+0+8+4)/4 = 6.5

The SJF algorithm is often used in long-term (job) scheduling. We can't determine the length of the next CPU burst in advance, which makes pure SJF difficult to apply at the short-term (CPU) scheduling level.
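The following minimal C sketch (an illustration, not the textbook's code) implements non-preemptive SJF for jobs that all arrive at time 0: it sorts the jobs by burst time and then computes waiting times exactly as in FCFS, reproducing the 6.5 ms average above −

#include <stdio.h>
#include <stdlib.h>

struct job { const char *name; int burst; int idx; };

static int by_burst(const void *a, const void *b) {
    const struct job *x = a, *y = b;
    if (x->burst != y->burst)
        return x->burst - y->burst;  /* shorter burst first */
    return x->idx - y->idx;          /* break ties by arrival order */
}

int main() {
    struct job jobs[] = { {"P1", 20, 0}, {"P2", 4, 1}, {"P3", 6, 2}, {"P4", 4, 3} };
    int n = 4, elapsed = 0, total_wait = 0;

    qsort(jobs, n, sizeof jobs[0], by_burst);  /* shortest job first */

    for (int i = 0; i < n; i++) {
        printf("%s: waiting time %d\n", jobs[i].name, elapsed);
        total_wait += elapsed;
        elapsed    += jobs[i].burst;
    }
    printf("Average waiting time = %.1f\n", (double)total_wait / n);  /* 6.5 */
    return 0;
}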
Pre-emptive Shortest Job First

(Also called Shortest Remaining Time First)

• In Preemptive Shortest Job First scheduling, jobs are put into the ready queue as they
arrive; when a process with a shorter remaining burst time arrives, the currently executing
process is preempted and the shorter job is executed first.
• The average waiting time for preemptive SJF scheduling is less than that of both
non-preemptive SJF scheduling and FCFS scheduling.

Preemptive SJF scheduling is also known as Shortest Remaining Time First (SRTF) scheduling.

Preemptive Scheduling Algorithm example:

Process CPU Burst (millisecond) Arrival Time

P1 10 0

P2 4 2

P3 6 4

P4 2 5

Criteria: CPU Burst

Mode: Preemptive

Formula

• TAT (Turn Around Time) = CT (Completion Time) – AT (Arrival Time)


• WT (Waiting Time) = TAT - BT

The Gantt chart below shows the result:

| P1 | P2 | P4 | P3 | P1 |
0    2    6    8    14   22

➢ In the above example, process P1 starts running at t = 0 ms (AT = 0). After 2 ms, P2 enters the
ready queue with a CPU burst of 4 ms.
➢ P1 is preempted (its remaining time of 8 ms is longer than P2's 4 ms) and P2 gets the CPU.
While P2 is executing, P3 and P4 arrive with CPU bursts of 6 ms and 2 ms.
➢ P2 is not preempted, because its remaining burst (it has already partly executed) is shorter
than those of the newcomers.
➢ After P2 completes, P4, P3, and finally P1 are executed.

Process  AT  BT  CT (completion)  TAT = CT-AT  WT = TAT-BT
P1        0  10        22             22           12
P2        2   4         6              4            0
P3        4   6        14             10            4
P4        5   2         8              3            1

Average Waiting Time = (12+0+4+1)/4 = 17/4 = 4.25 ms
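The table can be reproduced with the following small C simulation of SRTF (a sketch written for these notes, not library code): at every millisecond it picks the arrived, unfinished process with the least remaining time −

#include <stdio.h>

#define N 4

int main() {
    const char *name[N] = {"P1", "P2", "P3", "P4"};
    int at[N] = {0, 2, 4, 5};    /* arrival times */
    int bt[N] = {10, 4, 6, 2};   /* burst times   */
    int rem[N], ct[N] = {0};
    int done = 0, t = 0;

    for (int i = 0; i < N; i++) rem[i] = bt[i];

    while (done < N) {
        /* pick the arrived, unfinished process with the least remaining time */
        int pick = -1;
        for (int i = 0; i < N; i++)
            if (at[i] <= t && rem[i] > 0 && (pick < 0 || rem[i] < rem[pick]))
                pick = i;
        if (pick < 0) { t++; continue; }  /* CPU idle until the next arrival */

        rem[pick]--;                      /* run the chosen process for 1 ms */
        t++;
        if (rem[pick] == 0) { ct[pick] = t; done++; }
    }

    double total_wt = 0;
    for (int i = 0; i < N; i++) {
        int tat = ct[i] - at[i];          /* TAT = CT - AT  */
        int wt  = tat - bt[i];            /* WT  = TAT - BT */
        total_wt += wt;
        printf("%s: CT=%d TAT=%d WT=%d\n", name[i], ct[i], tat, wt);
    }
    printf("Average waiting time = %.2f ms\n", total_wt / N);  /* 4.25 ms */
    return 0;
}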

Advantages of SJF

1. Maximum throughput
2. Minimum average waiting and turnaround time

Disadvantages of SJF

1. It may suffer from the problem of starvation.
2. It is difficult to implement, because the exact burst time of a process can't be known in
advance.
