OS Unit-2 For BCA

The document covers the syllabus and key concepts of Process Management in Operating Systems, including process and thread concepts, process states, and inter-process communication. It discusses the responsibilities of the operating system, types of threads, and the differences between processes and threads, as well as various CPU scheduling algorithms. Additionally, it explains process control blocks, operations on processes, and timing metrics related to process execution.

Operating System (CA 23.102)
Unit-2
BCA 2nd Sem
By: Dr. Vinod Kumar

1
Syllabus(Unit-2)
• Process Management:
• Process and Thread Concept,
• Process States,
• Process Control Block,
• Operations with examples from UNIX (fork, exec) and/or Windows,
• Inter-process communication (shared memory and message passing),
• Scheduling Algorithms, Performance Evaluation.

2
The process concept
• A program does nothing unless its instructions are executed by a CPU.
• Process: A program in execution is called a process.
• In order to accomplish its task, a process needs computer resources.
• More than one process may exist in the system, and several processes may require the same resource at the same time. Therefore, the operating system has to manage all the processes and resources in a convenient and efficient way.
• Some resources may be usable by only one process at a time to maintain consistency; otherwise the system can become inconsistent and deadlock may occur.
3
Responsibilities of OS
The operating system is responsible for the following activities in
connection with Process Management
• Scheduling processes and threads on the CPUs.
• Creating and deleting both user and system processes.
• Suspending and resuming processes.
• Providing mechanisms for process synchronization.
• Providing mechanisms for process communication.

4
Thread in Operating System
• A thread is a single sequence stream within a process. Threads are also called lightweight processes because they possess some of the properties of processes. Each thread belongs to exactly one process.
• In an operating system that supports multithreading, a process can consist of many threads. Threads can run truly in parallel only when there is more than one CPU; on a single CPU, threads must context switch to share it.
• All threads belonging to the same process share the code section, data section, and OS resources (e.g. open files and signals).
• But each thread has its own thread control block, containing a thread ID, program counter, register set, and stack.
• Any process can be made up of one or more threads; in other words, a single process can have multiple threads.
5
Why Do We Need Thread?
• Threads run in parallel, improving application performance. Each thread has its own CPU state and stack, but threads share the address space of the process and its environment.
• Threads can share common data, so they do not need to use inter-process communication. Like processes, threads have states such as ready, executing, and blocked.
• A priority can be assigned to a thread just as to a process, and the highest-priority thread is scheduled first.
• Each thread has its own Thread Control Block (TCB). As with a process, a context switch occurs for a thread, and the register contents are saved in the TCB. Because threads share the same address space and resources, synchronization is also required among the activities of the threads.

6
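The sharing described above can be seen in a few lines. This minimal sketch (not from the slides; the names `work` and `counter` are illustrative) uses Python's standard `threading` module: four threads update one variable in the process's shared address space, and a lock provides the synchronization the slide says is required.

```python
import threading

counter = 0
lock = threading.Lock()   # needed because all threads share this memory

def work(n):
    global counter
    for _ in range(n):
        with lock:        # protect the shared counter from lost updates
            counter += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()              # wait for all threads of the process to finish

print(counter)            # 40000: every thread updated the same variable
```

Removing the lock would make the final count unpredictable, which is exactly the synchronization problem the slide alludes to.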
Components of Threads
These are the basic components of the Operating System.
• Stack Space: Stores local variables, function calls, and return
addresses specific to the thread.
• Register Set: Hold temporary data and intermediate results for the
thread’s execution.
• Program Counter: Tracks the current instruction being executed by
the thread.

7
Types of Thread in Operating System
Threads are of two types: user-level threads and kernel-level threads.
1. User-Level Thread
• A user-level thread is created and managed in user space without
system calls; the kernel takes no part in managing it and is
generally unaware that it exists. User-level threads can be easily
implemented by the user, and switching between them is fast
because no kernel involvement is needed.
2. Kernel-Level Thread
• A kernel-level thread is created and managed directly by the
operating system kernel, which keeps track of all threads in its
own thread table. Because the kernel mediates every switch,
kernel-level threads have somewhat longer context-switching
times.
8
Difference Between Process and Thread

• The primary difference is that threads within the same process run in
a shared memory space, while processes run in separate memory
spaces.
• Threads are not independent of one another like processes are, and
as a result, threads share with other threads their code section, data
section, and OS resources (like open files and signals). But, like a
process, a thread has its own program counter (PC), register set, and
stack space.

9
Single threaded and multithreaded process

10
What is Multi-Threading?

• A thread is also known as a lightweight process. The idea is to achieve parallelism
by dividing a process into multiple threads.
• For example, in a browser, multiple tabs can be different threads. MS Word uses
multiple threads: one thread to format the text, another thread to process inputs,
etc. More advantages of multithreading are discussed below.
• Multithreading is a technique used in operating systems to improve the
performance and responsiveness of computer systems.
• Multithreading allows multiple threads (i.e., lightweight processes) to share the
same resources of a single process, such as the CPU, memory, and I/O devices.

11
Process State
As a process executes, it changes state. The
state of a process is defined in part by the
current activity of that process.
Each process may be in one of the following
states:
• new: The process is being created
• running: Instructions are being executed
• waiting: The process is waiting for some
event to occur
• ready: The process is waiting to be
assigned to a processor
• terminated: The process has finished
execution
12
Process Control Block (PCB)
Each process is represented in the operating system by a process
control block (PCB), also called a task control block. It contains
many pieces of information associated with a specific process,
including:
• Process state – The state may be new, running, waiting, etc.
• Program counter – The counter indicates the address of the
next instruction to be executed for this process.
• CPU registers – The contents of all process-centric registers, such as
index registers, general-purpose registers, etc.
• CPU scheduling information – This includes the process
priority, scheduling queue pointers, and any other
scheduling parameters.

13
Process Control Block (PCB) (cont…)
• Memory-management information – This may include the values
of the base and limit registers and the page tables or segment
tables, depending on the memory system used for the process.
• Accounting information – This includes the amount of CPU time
used, clock time elapsed since start, time limits, account
numbers, job or process numbers, and so on.
• I/O status information – This includes the list of I/O
devices allocated to the process, a list of open files, and
so on.

14
Operations on the Process
1. Creation
Once a process is created, it enters the ready queue (in main memory) and is ready for
execution.
2. Scheduling
Out of the many processes present in the ready queue, the operating system chooses one process
and starts executing it. Selecting the process to be executed next is known as scheduling.
3. Execution
Once the process is scheduled, the processor starts executing it. The process may
move to the blocked or wait state during execution; in that case the processor starts
executing other processes.
4. Deletion/Killing
Once the process has served its purpose, the OS kills it. The context of the
process (its PCB) is deleted and the process is terminated by the operating system.
15
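The creation and deletion operations above map directly onto the UNIX `fork`, `exec`, and `wait` calls named in the syllabus. This is a minimal POSIX-only sketch using Python's `os` module (the helper name `spawn` is ours; on Windows the equivalent would go through `CreateProcess`, e.g. via `subprocess`):

```python
import os

def spawn(args):
    """Create a child with fork() and replace its image with exec();
    return the child's exit code."""
    pid = os.fork()                  # Creation: clone the calling process
    if pid == 0:                     # child sees pid == 0
        os.execvp(args[0], args)     # replace the child's program image
        os._exit(127)                # reached only if exec fails
    _, status = os.waitpid(pid, 0)   # parent blocks until the child terminates
    return os.waitstatus_to_exitcode(status)  # Deletion: child's PCB is reclaimed

print(spawn(["true"]))               # → 0 ("true" always exits successfully)
```

Note how creation (`fork`), execution (`execvp`), and deletion (`waitpid` reaping the terminated child) correspond to the operations listed on the slide.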
Various Times related to the Process
1. Arrival Time
The time at which the process enters the
ready queue is called the arrival time.
2. Burst Time
The total amount of CPU time required to
execute the whole process is called the burst time.
This does not include waiting time. It is
difficult to know the execution time of a
process before actually executing it; hence
scheduling algorithms based purely on burst time
cannot be implemented exactly in practice.

16
Various Times related to the Process
3. Completion Time
The time at which the process enters the completion
state, i.e. the time at which the process completes its
execution, is called the completion time.
4. Turnaround Time
The total amount of time spent by the process from its
arrival to its completion is called the turnaround time.
5. Waiting Time
The total amount of time for which the process waits for
the CPU to be assigned is called the waiting time.
6. Response Time
The difference between the arrival time and the time at
which the process first gets the CPU is called the response
time.
17
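The metrics above reduce to simple arithmetic once the arrival, burst, completion, and first-run times are known. A small illustrative helper (the function name and example values are ours, not from the slides):

```python
def timings(arrival, burst, completion, first_run):
    """Derive the standard metrics from the times defined above."""
    turnaround = completion - arrival   # total time in the system
    waiting = turnaround - burst        # time spent ready but not running
    response = first_run - arrival      # delay until first CPU allocation
    return turnaround, waiting, response

# Example: a process arriving at t=2, first scheduled at t=5,
# needing 4 units of CPU, finishing at t=12.
print(timings(2, 4, 12, 5))   # → (10, 6, 3)
```

The identity waiting = turnaround − burst is what the scheduling examples later in the unit use to compute waiting times from Gantt charts.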
CPU Scheduling
• In multiprogramming systems, the operating system schedules
processes on the CPU so as to maximize its utilization; this
procedure is called CPU scheduling. The operating system uses
various scheduling algorithms to schedule the processes.

18
CPU Scheduler
 Short-term scheduler selects from among the processes in ready queue, and allocates
the CPU to one of them
 Queue may be ordered in various ways
 CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready state
4. Terminates
 Scheduling under 1 and 4 is nonpreemptive
 All other scheduling is preemptive
 Consider access to shared data
 Consider preemption while in kernel mode
 Consider interrupts occurring during crucial OS activities
19
Why do we need Scheduling?
• In multiprogramming, if the long-term scheduler picks mostly I/O-bound processes,
then most of the time the CPU remains idle.
• The task of the operating system is to optimize the utilization of resources.
• If most of the running processes change their state from running to waiting,
the CPU may sit idle much of the time and overall throughput suffers.
• Hence, to reduce this overhead, the OS needs to schedule the jobs to get
optimal utilization of the CPU and to keep it busy.

20
Job or CPU Scheduling Algorithms in OS
There are various algorithms which are used by the Operating System
to schedule the processes on the processor in an efficient way.
The Purpose of a Scheduling algorithm
• Maximum CPU utilization
• Fair allocation of CPU
• Maximum throughput
• Minimum turnaround time
• Minimum waiting time
• Minimum response time

21
Job or CPU Scheduling Algorithms in OS
1. First Come First Serve
It is the simplest algorithm to implement. The process with the earliest arrival time gets
the CPU first. The earlier the arrival time, the sooner the process gets the CPU. It is a
non-preemptive type of scheduling.
2. Round Robin
In the Round Robin scheduling algorithm, the OS defines a time quantum (slice). All the
processes get executed in a cyclic way. Each process gets the CPU for a
small amount of time (the time quantum) and then goes back to the ready queue to wait
for its next turn. It is a preemptive type of scheduling.
3. Shortest Job First
The job with the shortest burst time gets the CPU first. The smaller the burst time, the
sooner the process gets the CPU. It is a non-preemptive type of scheduling.

22
Job or CPU Scheduling Algorithms in OS
4. Shortest Remaining Time First
It is the preemptive form of SJF. In this algorithm, the OS schedules jobs
according to their remaining execution time.
5. Priority-Based Scheduling
In this algorithm, a priority is assigned to each process. The
higher the priority, the sooner the process gets the CPU. If two processes
have the same priority, they are scheduled according to their
arrival time.
6. Highest Response Ratio Next
In this scheduling algorithm, the process with the highest response ratio is
scheduled next. This reduces starvation in the system.
23
First Come First Serve (FCFS) CPU Process Scheduling
• First Come First Serve CPU Scheduling Algorithm, shortly known as
FCFS, is the first algorithm of CPU process scheduling.
• In the First Come First Serve algorithm, processes are allowed
to execute in a linear manner.
• This means that whichever process enters the ready
queue first is executed first. This shows that the First Come First Serve
algorithm follows the First In First Out (FIFO) principle.
• FCFS is inherently non-preemptive: once a process gets the CPU,
it runs until it completes or blocks.
24
Preemptive and Non-Preemptive Approach
Preemptive Approach
In preemptive process scheduling, the OS allots the CPU to a
process for a limited period of time. The process may transition from running
state to ready state, or from waiting state to ready state, while it still needs the CPU.
This switching happens because the CPU may give other processes precedence
and replace the currently active process with a higher-priority one.
Non-Preemptive Approach
In non-preemptive process scheduling, the CPU cannot be
withdrawn from a process before the process has finished running or has
voluntarily moved to the waiting state.

25
First- Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

P1 P2 P3
0 24 27 30

• Waiting time for P1 = 0; P2 = 24; P3 = 27


• Average waiting time: (0 + 24 + 27)/3 = 17
26
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order:
P2 , P3 , P1
• The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30

• Waiting time for P1 = 6; P2 = 0; P3 = 3


• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case
• Convoy effect - short process behind long process
• Consider one CPU-bound and many I/O-bound processes

27
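Both FCFS examples above can be checked with a short simulation (an illustrative helper, assuming all processes arrive at t=0, as in the examples):

```python
def fcfs_waiting(bursts):
    """Waiting time of each process when all arrive at t=0
    and run in list order (FCFS)."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # each process waits until all before it finish
        clock += b
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3) vs. order P2, P3, P1:
w1 = fcfs_waiting([24, 3, 3])
w2 = fcfs_waiting([3, 3, 24])
print(sum(w1) / 3, sum(w2) / 3)   # → 17.0 3.0
```

The gap between 17 and 3 for the same workload is the convoy effect in numbers: the 24-unit process at the head of the queue makes everything behind it wait.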
Advantages /Disadvantages of FCFS
Advantages of FCFS CPU Process Scheduling
• In order to allocate processes, it uses a First In First Out queue.
• The FCFS scheduling process is straightforward and easy to implement.
• Since every process eventually reaches the head of the queue, there is no starvation.
• As there is no consideration of process priority, it is an equitable algorithm.
Disadvantages of FCFS CPU Process Scheduling
• FCFS can produce long waiting times.
• FCFS favors CPU-bound processes over I/O-bound processes.
• In FCFS there is a chance of the convoy effect occurring.
• Because FCFS is so straightforward, it often isn't very effective. Extended waiting periods go hand
in hand with it: all other processes are left waiting while the CPU is busy with one time-consuming
process.
28
Shortest-Job-First (SJF) Scheduling
• In SJF scheduling, the process with the lowest burst time among the processes
in the ready queue is scheduled next.
• However, it is very difficult to predict the burst time a process needs, so this
algorithm is very difficult to implement exactly in a real system.
Advantages of SJF
• Maximum throughput
• Minimum average waiting and turnaround time
Disadvantages of SJF
• May suffer from starvation
• It is not directly implementable because the exact burst time of a process can't be known in
advance.
29
Example of SJF
Process  Arrival Time  Burst Time
P1       0.0           6
P2       2.0           8
P3       4.0           7
P4       5.0           3

• SJF scheduling chart

P4 P1 P3 P2
0 3 9 16 24

• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7

30
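Like the chart above, this sketch orders purely by burst time, treating all four processes as available at t=0 (the listed arrival times are ignored, matching the slide's schedule). The helper name is ours:

```python
def sjf_waiting(bursts):
    """Non-preemptive SJF with all processes available at t=0.
    Returns waiting time per process, in original order."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock          # waits until all shorter jobs finish
        clock += bursts[i]
    return waits

w = sjf_waiting([6, 8, 7, 3])     # bursts of P1..P4 from the example
print(w, sum(w) / 4)              # → [3, 16, 9, 0] 7.0
```

The per-process waits (P1=3, P2=16, P3=9, P4=0) match the Gantt chart, and the average comes out to the slide's 7.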
Shortest Remaining Time First (SRTF)
Scheduling Algorithm
• This algorithm is the preemptive version of SJF scheduling.
• In SRTF, the execution of the running process can be stopped after a certain
amount of time.
• At the arrival of every process, the short-term scheduler schedules the process
with the least remaining burst time among the available processes and the
running process.
• Once all the processes have arrived in the ready queue, no further preemption
occurs and the algorithm behaves like SJF scheduling.
• The context of a process is saved in its Process Control Block when the
process is removed from execution and the next process is scheduled. This
PCB is accessed on the next execution of the process.
31
Shortest Remaining Time First (SRTF)
Example:
• Now we add the concepts of varying arrival times and preemption to the analysis
Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5
• Preemptive SJF Gantt Chart
P1 P2 P4 P1 P3
0 1 5 10 17 26

• Average waiting time = [(10-1)+(1-1)+(17-2)+(5-3)]/4 = 26/4 = 6.5 msec


32
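The schedule above can be reproduced by a unit-time simulation (an illustrative sketch, not a production scheduler — real schedulers are event-driven rather than ticking one unit at a time):

```python
def srtf_waiting(arrival, burst):
    """Preemptive SJF (SRTF), simulated one time unit at a time.
    Waiting time = completion - arrival - burst."""
    n, remaining = len(burst), list(burst)
    clock, done, completion = 0, 0, [0] * n
    while done < n:
        ready = [i for i in range(n) if arrival[i] <= clock and remaining[i] > 0]
        if not ready:                 # CPU idle until the next arrival
            clock += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # least remaining time runs
        remaining[i] -= 1
        clock += 1
        if remaining[i] == 0:
            completion[i] = clock
            done += 1
    return [completion[i] - arrival[i] - burst[i] for i in range(n)]

w = srtf_waiting([0, 1, 2, 3], [8, 4, 9, 5])   # P1..P4 from the example
print(w, sum(w) / 4)                            # → [9, 0, 15, 2] 6.5
```

The resulting execution order (P1, P2, P4, P1, P3) and the 6.5 msec average match the Gantt chart above.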
Round Robin Scheduling Algorithm
• This algorithm is special because it addresses many of the flaws we have
identified in the previous CPU scheduling algorithms.
• Round Robin CPU scheduling is popular largely because it is always
preemptive, which makes it fair and responsive.
Round Robin CPU Scheduling
• Round Robin is one of the most widely used CPU scheduling algorithms.
• In Round Robin CPU scheduling, each process gets a small unit of CPU time known as the Time
Quantum (TQ).
• Time sharing is the main emphasis of the algorithm. Each step of this algorithm is carried
out cyclically. The system defines a specific time slice, known as a time quantum.

33
Round Robin Scheduling Algorithm
(CONT….)
• First, the processes that are eligible enter the ready queue. The process at the
front of the ready queue is then executed for one time quantum. If its execution
completes within that quantum, the process is removed from the ready queue for
good. If the process still requires more time, it is added back to the tail of the
ready queue.
• The ready queue holds each process at most once; a process already present in
the queue is not added again, since holding duplicate processes would only
introduce redundancy.
• After a process's execution is complete, the ready queue does not take the
completed process back.

34
Round Robin Scheduling Algorithm
(CONT….)

35
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
• The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

• Typically, higher average turnaround than SJF, but better response


• q should be large compared to context switch time
• q usually 10ms to 100ms, context switch < 10 usec

36
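The Gantt chart above can be reproduced with a queue-based sketch (all arrivals at t=0, as in the example; the helper name is ours):

```python
from collections import deque

def rr_completion(bursts, q):
    """Round Robin with all processes arriving at t=0 and time quantum q.
    Returns the completion time of each process."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))   # FIFO ready queue of process indices
    clock, completion = 0, [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(q, remaining[i])      # run one quantum, or less if nearly done
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)             # unfinished: back to the tail
        else:
            completion[i] = clock       # finished: never re-enters the queue
    return completion

print(rr_completion([24, 3, 3], 4))    # → [30, 7, 10]
```

P2 and P3 finish at 7 and 10 (inside their first turn, since their bursts are under the quantum), while P1 cycles back repeatedly until t=30, exactly as in the chart.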
Advantages/ disadvantages of Round
Robin Scheduling Algorithm
Advantages
• A fair amount of CPU is allocated to each job.
• Because it doesn't depend on the burst time, it can truly be implemented in the system.
• It is not affected by the convoy effect or the starvation problem as occurred in First Come
First Serve CPU Scheduling Algorithm.
Disadvantages
• Low Operating System slicing times will result in decreased CPU output.
• Round Robin CPU Scheduling approach takes longer to swap contexts.
• Time quantum has a significant impact on its performance.
• The procedures cannot have priorities established.

37
Priority Scheduling Algorithm
• In priority scheduling, a priority number is assigned to each process.
• In some systems, the lower the number, the higher the priority; in
others, the higher the number, the higher the priority.
• The process with the highest priority among the available processes is given the
CPU.
• There are two types of priority scheduling algorithms: preemptive priority
scheduling and non-preemptive priority scheduling.

38
Priority Scheduling Algorithm (cont…)
• The priority number assigned to each process may or may not
change.
• If the priority doesn't change throughout the life of the process, it
is called static priority,
• while if it keeps changing at regular intervals, it is
called dynamic priority.

39
Example of Priority Scheduling
Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2

• Priority scheduling Gantt chart (lower number = higher priority):

P2 P5 P1 P3 P4
0 1 6 16 18 19

• Average waiting time = 8.2 msec


40
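The 8.2 msec figure follows from running the highest-priority job first, with a lower priority number meaning higher priority here. A sketch (non-preemptive, all processes available at t=0; the helper name is ours):

```python
def priority_waiting(bursts, priorities):
    """Non-preemptive priority scheduling, all processes available at t=0.
    A lower priority number means higher priority."""
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock          # waits until all higher-priority jobs finish
        clock += bursts[i]
    return waits

# P1..P5 from the example: bursts and priorities as in the table.
w = priority_waiting([10, 1, 2, 1, 5], [3, 1, 4, 5, 2])
print(w, sum(w) / 5)   # → [6, 0, 16, 18, 1] 8.2
```

The execution order comes out P2, P5, P1, P3, P4, and the average waiting time matches the slide's 8.2 msec.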
Multiple Processors Scheduling in OS
• Multiple-processor scheduling (multiprocessor scheduling) focuses
on designing the scheduling function for a system that has more
than one processor.
• Multiple CPUs share the load (load sharing) in multiprocessor
scheduling so that various processes run simultaneously.
• In general, multiprocessor scheduling is complex compared to
single-processor scheduling.
• When the processors are identical, any process can run on any
processor at any time.
41
Multiple Processors Scheduling in OS
• The multiple CPUs in the system are in close communication and
share a common bus, memory, and other peripheral devices, so we
can say that the system is tightly coupled.
• These systems are used when we want to process a bulk amount of
data; they are mainly used in satellite systems, weather
forecasting, etc.
• Multiprocessor systems may be heterogeneous (different kinds of
CPUs) or homogeneous (the same kind of CPU). There may be special
scheduling constraints, such as devices connected via a private bus to
only one CPU.

42
