FUNDAMENTALS OF OPERATING SYSTEMS

UNIT II: Processes and CPU Scheduling:


Process Concepts- Process Scheduling- Operations on Processes- Cooperating Processes- Threads-
Basic Concepts of CPU Scheduling- Scheduling Criteria - Scheduling Algorithms.
Process Concepts
Definition of Process
A process is a program in active execution on the operating system. Processes change state as they
execute and can be new, ready, running, waiting, or terminated. In short, a process is a program in
execution.
Process in an Operating System
A process is actively running software, i.e., program code in execution, and its steps must be
carried out in a specific order.
When a program is loaded into memory, it becomes a process and is divided into four sections:
stack, heap, text, and data. These four sections make up the layout of a process in main memory,
as described below.

Stack
Stack stores temporary information such as method or function arguments, the return address,
and local variables.
Heap
Heap is the memory area where memory is dynamically allocated to the process while it is running.
Text
The text section contains the compiled program code, along with the current activity represented by
the value of the program counter and the contents of the processor's registers.
Data
Data contains both global and static variables.
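
The mapping of these four sections onto a real program can be made concrete with a small C sketch; the variable and function names below are illustrative only:

```c
#include <stdlib.h>

int counter = 0;             /* data section: global and static variables */

int add(int a, int b) {      /* compiled code of add() lives in the text section */
    int sum = a + b;         /* stack: arguments and local variables */
    return sum;
}

int main(void) {
    int *buf = malloc(16 * sizeof(int));  /* heap: dynamic allocation at run time */
    counter = add(2, 3);
    free(buf);
    return 0;
}
```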
Process Life Cycle:
When a process executes, it passes through different states.
In general, a process can have one of the following five states at a time.

State – Description

Start – This is the initial state when a process is first started/created.

Ready – The process is waiting to be assigned to a processor.

Running – Once the process has been assigned to a processor by the OS scheduler, the process
state is set to running and the processor executes its instructions.

Waiting – The process moves into the waiting state if it needs to wait for a resource, such as
user input or a file to become available.

Terminated or Exit – This state occurs when the process has finished execution.
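
One way a kernel might represent these five states is a simple C enumeration; this is a minimal sketch, and the constant names are illustrative, not taken from any real kernel:

```c
/* The five process states from the table above. */
enum proc_state {
    STATE_NEW,        /* start: the process is being created      */
    STATE_READY,      /* waiting to be assigned to a processor    */
    STATE_RUNNING,    /* instructions are being executed          */
    STATE_WAITING,    /* blocked on I/O, user input, or an event  */
    STATE_TERMINATED  /* finished execution                       */
};
```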

Definition of a Program
A program is a piece of code which may be a single line or millions of lines. A computer program is
usually written by a computer programmer in a programming language.
Process Control Block
A Process Control Block is a data structure maintained by the operating system for every process.
The PCB is identified by an integer process ID (PID) and keeps all the information needed to keep
track of a process.
A Process Control Block consists of:
No. – Information & Description

1. Process State – The current state of the process, i.e., whether it is ready, running, waiting,
or otherwise.

2. Process Privileges – Required to allow or disallow access to system resources.

3. Process ID – Unique identification for each process in the operating system.

4. Pointer – A pointer to the parent process.

5. Program Counter – A pointer to the address of the next instruction to be executed for this
process.

6. CPU Registers – The various CPU registers whose contents must be saved when the process leaves
the running state, so that execution can later resume.

7. CPU Scheduling Information – Process priority and other scheduling information required to
schedule the process.

8. Memory Management Information – Includes page-table, memory-limit, and segment-table
information, depending on the memory system used by the operating system.

9. Accounting Information – Includes the amount of CPU time used for process execution, time
limits, execution ID, etc.

10. I/O Status Information – Includes the list of I/O devices allocated to the process.
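
A simplified C sketch of a PCB collecting the ten fields listed above might look as follows; real kernel structures (for example, Linux's task_struct) are far larger, and every field name here is illustrative:

```c
struct pcb {
    int            pid;              /* 3: unique process ID                  */
    int            state;            /* 1: e.g., the enum proc_state above    */
    int            privileges;       /* 2: access rights to system resources  */
    struct pcb    *parent;           /* 4: pointer to the parent process      */
    unsigned long  program_counter;  /* 5: address of the next instruction    */
    unsigned long  registers[16];    /* 6: saved CPU register contents        */
    int            priority;         /* 7: CPU scheduling information         */
    void          *page_table;       /* 8: memory-management information      */
    unsigned long  cpu_time_used;    /* 9: accounting information             */
    int            io_devices[8];    /* 10: I/O devices allocated to process  */
};
```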

Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Process scheduling is an essential part of a Multiprogramming operating system.
Categories in Scheduling
• Non-preemptive:
In this scheduling, once resources (CPU cycles) are allocated to a process, the process holds
the CPU until it terminates or reaches a waiting state. The scheduler does not interrupt a
running process in the middle of its execution; instead, it waits until the process completes
its CPU burst, and only then allocates the CPU to another process.
• Preemptive:
Preemptive scheduling is used when a process switches from the running state to the ready
state or from the waiting state to the ready state. The resources (mainly CPU cycles) are
allocated to the process for a limited amount of time and then taken away, and the process
is again placed back in the ready queue.
Process Scheduling Queues
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. OS maintains
the following important process scheduling queues:
• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue.
• Device queues − The processes that are blocked due to the unavailability of an I/O device
constitute this queue.

Schedulers
Schedulers are special system software that handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types:
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long Term Scheduler
• It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the job queue and loads them
into memory for execution, making them available for CPU scheduling.
• The main purpose of a long-term scheduler is to balance the number of CPU- bound jobs
and the I/O bound jobs in the main memory.
Short Term Scheduler
• It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It carries out the transition of a process from the
ready state to the running state: the CPU scheduler selects one process from among those that
are ready to execute and allocates the CPU to it.
• Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next. Short-term schedulers are faster than long-term schedulers.
Medium term scheduler
• Medium-term scheduling is a part of swapping. It removes the processes from the memory.
It reduces the degree of multiprogramming.
• A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress toward completion. In this situation, to remove the process from
memory and make space for other processes, the suspended process is moved to secondary
storage. This procedure is called swapping, and the process is said to be swapped out or
rolled out.
Context Switching
❖ Each time the CPU makes a switch, the system temporarily suspends the currently executing task,
saves its state (context) to a process control block (PCB), and then executes the next task in the
queue.
❖ If that task had been started earlier, the CPU retrieves its saved state so execution can resume
where it left off.
❖ Context switching is a core capability of a multitasking operating system.
❖ Context switching enables multiple processes to share a single CPU.
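
The save-and-restore sequence can be sketched in C using the struct pcb from the earlier sketch; save_registers and load_registers stand in for architecture-specific assembly, and none of these names is a real kernel API:

```c
void save_registers(struct pcb *p);  /* arch-specific, assumed to exist */
void load_registers(struct pcb *p);  /* arch-specific, assumed to exist */

void context_switch(struct pcb *curr, struct pcb *next) {
    save_registers(curr);   /* save the CPU state (context) into curr's PCB   */
    curr->state = 1;        /* mark it ready again (state codes illustrative) */
    load_registers(next);   /* restore next's previously saved context       */
    next->state = 2;        /* mark it running (state codes illustrative)    */
    /* execution now resumes where 'next' previously left off */
}
```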

Operations on a Process
The execution of a process is a complex activity involving several operations. The following
operations are performed during the execution of a process.
• The process is in new state when it is being created.
• Then the process is moved to the ready state, where it waits till it is taken for execution. There
may be many such processes in the ready state.
• One of these processes will be selected and will be given the processor, and the selected process
moves to the running state.
• A process, while running, may have to wait for I/O or wait for any other event to take place. That
process is now moved to the waiting state.
• Once the process completes execution, it moves to the terminated state.
Process Synchronization
• Process Synchronization is the coordination of execution of multiple processes in a multi-
process system to ensure that they access shared resources in a controlled and predictable
manner.
• In a multi-process system, synchronization is necessary to ensure data consistency and
integrity, and to avoid the risk of deadlocks and other synchronization problems.
• On the basis of synchronization, processes are categorized as one of the following two
types:
• Independent Process
A process that neither affects nor is affected by other processes during its execution is
called an independent process. Example: a process that does not share any variables,
databases, files, etc.
• Cooperating Process
A process that affects or is affected by other processes during its execution is called
a cooperating process. Example: processes that share files, variables, databases, etc. are
cooperating processes.
The process synchronization problem arises with cooperating processes because they share
resources.
Concept of cooperating processes and how they are used in operating systems
• Inter-Process Communication (IPC): Cooperating processes interact with each other via
Inter-Process Communication (IPC). Because they interact and share resources, their execution
must be synchronized, and proper coordination reduces the possibility of deadlock. IPC can be
implemented through mechanisms such as pipes, message queues, semaphores, and shared memory;
a minimal pipe sketch follows this list.
• Concurrent execution: Cooperating processes can execute simultaneously; the operating
system scheduler selects which process moves from the ready queue to the running state.
Concurrent execution of several processes reduces the overall completion time.
• Resource sharing: Cooperating processes cooperate by sharing resources such as the CPU,
memory, and I/O hardware. When several processes take turns using shared resources,
synchronization overhead increases, and the response time of each process can increase as well.
• Deadlocks: Because cooperating processes share resources, a deadlock condition can arise.
For example, if process P1 holds resource A while waiting for B, and process P2 holds B while
waiting for A, neither can proceed, and the cooperating processes deadlock.
• Process scheduling: The scheduler decides which process should execute next using scheduling
algorithms such as Round Robin, FCFS, SJF, and Priority.
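
As mentioned in the IPC item above, pipes are one option for cooperating processes to communicate. Here is a minimal POSIX sketch with a parent and child process cooperating through a pipe (the message text is illustrative):

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char buf[32];

    if (pipe(fd) == -1) return 1;  /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {             /* child process: the producer */
        close(fd[0]);
        write(fd[1], "hello", 6);  /* 6 bytes: "hello" plus '\0' */
        close(fd[1]);
        return 0;
    }
    close(fd[1]);                  /* parent process: the consumer */
    read(fd[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);                    /* reap the child */
    return 0;
}
```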
Threads
A thread is a separate path of execution within a process. It is a lightweight process that the operating system can schedule
and run concurrently with other threads. The operating system creates and manages threads, and they
share the same memory and resources as the program that created them. This enables multiple threads to
collaborate and work efficiently within a single program.
Each thread belongs to exactly one process. In an operating system that supports multithreading, the
process can consist of many threads.
Why Do We Need Thread?
❖ Threads run in parallel, improving application performance. Each thread has its own CPU
state and stack, but they share the address space of the process and the environment.
❖ Threads can share common data so they do not need to use interprocess communication.
❖ Like the processes, threads also have states like ready, executing, blocked, etc.
❖ Priority can be assigned to the threads just like the process, and the highest priority thread is
scheduled first.
❖ Each thread has its own Thread Control Block (TCB). As with a process, a context switch
occurs for a thread, and register contents are saved in the TCB.
❖ As threads share the same address space and resources, synchronization is also required for
the various activities of the thread.
Multi threading
Multithreading is a technique used in operating systems to improve the performance and
responsiveness of computer systems. Multithreading allows multiple threads (i.e.,
lightweight processes) to share the same resources of a single process, such as the CPU,
memory, and I/O devices.
For example, in a browser, multiple tabs can be different threads.
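
A minimal POSIX threads sketch of two threads in one process sharing a global counter; because the address space is shared, a mutex is needed for synchronization (compile with gcc -pthread; all names are illustrative):

```c
#include <pthread.h>
#include <stdio.h>

int shared = 0;                        /* shared: visible to every thread */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* synchronization is still required */
        shared++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* prints 200000 */
    return 0;
}
```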
Advantages of Thread
Responsiveness: If a process is divided into multiple threads, then when one thread completes
its work, its output can be returned immediately.
Faster context switch: Context switch time between threads is lower compared to the
process context switch.
Effective utilization of multiprocessor system: If we have multiple threads in a single
process, then we can schedule multiple threads on multiple processors. This will make
process execution faster.
Resource sharing: Resources like code, data, and files can be shared among all threads
within a process. Note: Stacks and registers can’t be shared among the threads. Each thread
has its own stack and registers.
Communication: Communication between multiple threads is easier, as the threads share a
common address space, while between processes we have to use specific inter-process
communication techniques.
Enhanced throughput of the system: If a process is divided into multiple threads, and
each thread function is considered as one job, then the number of jobs completed per unit of
time is increased, thus increasing the throughput of the system.
Types of Threads
Threads are of two types, described below:
• User Level Thread
• Kernel Level Thread
User Level Thread:
A user-level thread is a thread that is not created using system calls; the kernel plays no
part in managing user-level threads, which are handled entirely by a thread library in user
space. User-level threads can be easily implemented by the user. Because the kernel is unaware
of them, it treats the whole process as a single thread of execution.
Advantages of User-Level Threads
• Implementation of the User-Level Thread is easier than Kernel Level Thread.
• Context Switch Time is less in User Level Thread.
• User-Level Thread is more efficient than Kernel-Level Thread.
• Because of the presence of only Program Counter, Register Set, and Stack Space, it has
a simple representation.
Disadvantages of User-Level Threads
• There is a lack of coordination between Thread and Kernel.
• In case of a page fault, the whole process can be blocked.
Kernel Level Threads
A kernel-level thread is a thread that the operating system recognizes and manages directly.
The kernel maintains its own thread table to keep track of all threads in the system, and the
operating system kernel handles thread management. Kernel-level threads have somewhat longer
context-switch times.
Advantages of Kernel-Level Threads
• The kernel has up-to-date information on all threads.
• Applications that block frequently are better handled by kernel-level threads.
• Whenever a process requires more processing time, kernel-level threading allows the kernel
to give more time to it.
Disadvantages of Kernel-Level threads
• Kernel-Level Thread is slower than User-Level Thread.
• Implementation of this type of thread is a little more complex than a user-level thread.
Components of Threads
These are the basic components of a thread:
• Stack Space
• Register Set
• Program Counter
Difference between Process and Thread

1. Process: A process means a program in execution. Thread: A thread is a segment of a process.

2. Process: A process takes more time to terminate. Thread: A thread takes less time to terminate.

3. Process: It takes more time for creation. Thread: It takes less time for creation.

4. Process: It takes more time for context switching. Thread: It takes less time for context
switching.

5. Process: A process is less efficient in terms of communication. Thread: A thread is more
efficient in terms of communication.

6. Process: Multiprogramming holds the concept of multiple processes. Thread: Multiple threads
do not need multiprogramming, because a single process consists of multiple threads.

7. Process: Processes are isolated from each other. Thread: Threads share memory.

8. Process: A process is called a heavyweight process. Thread: A thread is lightweight, as each
thread in a process shares code, data, and resources.

9. Process: Process switching uses an interface to the operating system. Thread: Thread switching
does not require calling the operating system or raising an interrupt to the kernel.

10. Process: If one process is blocked, it does not affect the execution of other processes.
Thread: If a user-level thread is blocked, all other user-level threads of that process are
blocked.

11. Process: A process has its own Process Control Block, stack, and address space. Thread: A
thread has its parent's PCB, its own Thread Control Block and stack, and a common address space.

12. Process: Changes to the parent process do not affect child processes. Thread: Since all
threads of the same process share the address space and other resources, changes to the main
thread may affect the behavior of the other threads of the process.

13. Process: A system call is involved in its creation. Thread: No system call is involved; a
thread is created using APIs.

14. Process: Processes do not share data with each other. Thread: Threads share data with each
other.
Basic Concepts of CPU Scheduling
Process scheduling is the activity of handling the removal of a running process from the CPU
and selecting another process based on a specific strategy. Process scheduling is an integral
part of multiprogramming operating systems. Such operating systems allow more than one process
to be loaded into executable memory at a time, and the loaded processes share the CPU using
time multiplexing.
There are three types of process schedulers:
• Long term or Job Scheduler
• Short term or CPU Scheduler
• Medium-term Scheduler
Why do we need to schedule processes?
Scheduling is important in many different computer environments. One of the most important is
scheduling which programs will run on the CPU. This task is handled by the operating system
(OS), and there are many different strategies for choosing the order in which programs run.
Process scheduling allows the OS to allocate CPU time to each process. Another important
reason to use a process scheduling system is that it keeps the CPU busy at all times, which
results in shorter response times for programs.
What is the need for CPU scheduling algorithm?
CPU scheduling is the process of deciding which process will own the CPU while another
process is suspended. The main function of CPU scheduling is to ensure that whenever the CPU
would otherwise be idle, the OS selects one of the processes available in the ready queue.
In multiprogramming, if the long-term scheduler selects mostly I/O-bound processes, then most
of the time the CPU remains idle. The function of an effective scheduler is to improve
resource utilization.
Objectives of Process Scheduling Algorithm:
• Utilization of CPU at maximum level.
• Keep CPU as busy as possible.
• Allocation of CPU should be fair.
• Throughput should be maximum, i.e., the number of processes that complete their execution
per unit time should be maximized.
• Turnaround time, i.e., the time taken by a process to finish execution, should be minimum.
• Waiting time should be minimum, and no process should starve in the ready queue.
• Response time, i.e., the time until a process produces its first response, should be as
short as possible.
CPU Scheduling Criteria
CPU scheduling is an essential part of modern operating systems as it enables multiple
processes to run at the same time on the same processor. In short, the CPU scheduler decides the
order and priority of the processes to run and allocates the CPU time based on various parameters
such as CPU usage, throughput, turnaround, waiting time, and response time.
CPU scheduling is essential for the system’s performance and ensures that processes are
executed correctly and on time.
Criteria of CPU Scheduling
CPU Scheduling has several criteria. Some of them are mentioned below.
CPU Utilization
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
CPU utilization can range from 0 to 100 percent, but in a real system it varies from 40 to 90
percent depending on the load upon the system.
Throughput
A measure of the work done by the CPU is the number of processes being executed and
completed per unit of time. This is called throughput. The throughput may vary depending on
the length or duration of the processes.
Turnaround Time
For a particular process, an important criterion is how long it takes to execute that process.
The time elapsed from the time of submission of a process to the time of completion is known
as the turnaround time. Turnaround time is the sum of the time spent waiting to get into memory,
waiting in the ready queue, executing on the CPU, and waiting for I/O.

Turnaround Time = Completion Time - Arrival Time

Waiting Time
A scheduling algorithm does not affect the time required to complete the process once it
starts execution. It only affects the waiting time of a process i.e. time spent by a process waiting
in the ready queue.
Waiting Time = Turnaround Time - Burst Time

Response Time
In an interactive system, turn-around time is not the best criterion. A process may produce
some output fairly early and continue computing new results while previous results are being
output to the user. Thus, another criterion is the time from the submission of a request until
the first response is produced. This measure is called response time.

Response Time = CPU Allocation Time (when the CPU was allocated for the first time) - Arrival Time
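
A small worked example of these formulas for a single process, with illustrative numbers:

```c
#include <stdio.h>

int main(void) {
    int arrival = 0, burst = 5, completion = 12, first_run = 4;

    int turnaround = completion - arrival;  /* 12 - 0 = 12 */
    int waiting    = turnaround - burst;    /* 12 - 5 = 7  */
    int response   = first_run - arrival;   /*  4 - 0 = 4  */

    printf("TAT=%d WT=%d RT=%d\n", turnaround, waiting, response);
    return 0;
}
```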
Completion Time
The completion time is the time when the process stops executing, which means that the
process has completed its burst time and is completely executed.
Priority
If the operating system assigns priorities to processes, the scheduling mechanism should
favor the higher-priority processes.
Predictability
A given process always should run in about the same amount of time under a similar system
load.
CPU Scheduling Algorithms
There are mainly two types of scheduling methods:
Preemptive Scheduling: Preemptive scheduling is used when a process switches from
running state to ready state or from the waiting state to the ready state.
Non-Preemptive Scheduling: Non-Preemptive scheduling is used when a process
terminates, or when a process switches from running state to waiting state.
The following are the CPU scheduling algorithms
First Come First Serve:
FCFS is considered the simplest of all operating system scheduling algorithms. The first
come, first serve scheduling algorithm states that the process that requests the CPU first is
allocated the CPU first; it is implemented using a FIFO queue.
Characteristics of FCFS:
• FCFS is a non-preemptive CPU scheduling algorithm.
• Tasks are always executed on a first-come, first-serve basis.
• FCFS is easy to implement and use.
• This algorithm is not very efficient in performance, and the waiting time is quite high.
Advantages of FCFS:
• Easy to implement
• First come, first serve method
Disadvantages of FCFS:
• FCFS suffers from the convoy effect.
• The average waiting time is much higher than in the other algorithms.
• FCFS is very simple and easy to implement, and consequently not very efficient.
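
A minimal sketch of FCFS average waiting time, assuming all processes arrive at time 0 (the burst values are the classic convoy-effect example, where one long job makes the short jobs wait):

```c
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};    /* illustrative bursts; long job first */
    int n = 3, clock = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += clock;     /* each process waits for all earlier bursts */
        clock += burst[i];
    }
    /* waits are 0, 24, 27 -> average 17.00 */
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}
```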

Shortest Job First (SJF):


Shortest Job First (SJF) is a scheduling policy that selects the waiting process with the
smallest execution time to execute next. This scheduling method may or may not be preemptive,
and it significantly reduces the average waiting time of the processes waiting to be executed.
Characteristics of SJF:
• Shortest Job first has the advantage of having a minimum average waiting time
among all operating system scheduling algorithms.
• Each task is associated with an estimate of the time it needs to complete.
• It may cause starvation if shorter processes keep coming. This problem can be solved
using the concept of ageing.
Advantages of Shortest Job first:
• Because SJF reduces the average waiting time, it performs better than the first come, first
serve scheduling algorithm.
• SJF is generally used for long-term scheduling.
Disadvantages of SJF:
• One demerit of SJF is starvation.
• It is often complicated to predict the length of the upcoming CPU request.
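
A minimal sketch of non-preemptive SJF, again assuming all processes arrive at time 0: sort the bursts in ascending order, then accumulate waiting times exactly as in FCFS (burst values illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = {6, 8, 7, 3};
    int n = 4, clock = 0, total_wait = 0;

    qsort(burst, n, sizeof(int), cmp);   /* shortest job runs first */
    for (int i = 0; i < n; i++) {
        total_wait += clock;
        clock += burst[i];
    }
    /* sorted order 3,6,7,8 -> waits 0,3,9,16 -> average 7.00 */
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}
```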

Longest Job First (LJF):


The Longest Job First (LJF) scheduling policy is the opposite of Shortest Job First (SJF): as
the name suggests, the process with the largest burst time is processed first. In its basic
form, Longest Job First is non-preemptive.
Characteristics of LJF:
• Among all the processes waiting in the waiting queue, the CPU is always assigned to the
process having the largest burst time.
• If two processes have the same burst time then the tie is broken using
FCFS i.e. the process that arrived first is processed first.
• LJF also has a preemptive variant, known as Longest Remaining Time First (LRTF).
Advantages of LJF:
• No other process can be scheduled until the longest job completes its execution.
• All jobs or processes finish at approximately the same time.
Disadvantages of LJF:
• The LJF algorithm generally gives very high average waiting time and average turnaround
time for a given set of processes.
• It may also lead to the convoy effect.

Priority Scheduling:
The preemptive priority CPU scheduling algorithm works based on the priority of each process:
the scheduler always runs the highest-priority process first. In the case of a conflict, that
is, when more than one process has the same priority, the tie is broken using the FCFS (First
Come First Serve) algorithm.
Characteristics of Priority Scheduling:
• Schedules tasks based on priority.
• When higher-priority work arrives while a lower-priority task is executing, the
higher-priority work takes its place, and the lower-priority task is suspended until that
execution is complete.
• By convention, the lower the number assigned, the higher the priority level of the process.
Advantages of Priority Scheduling:
• The average waiting time is less than FCFS
• Less complex
Disadvantages of Priority Scheduling:
• One of the most common demerits of the preemptive priority CPU scheduling algorithm is the
starvation problem: a low-priority process may have to wait a very long time to be scheduled
onto the CPU.
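
A minimal sketch of the non-preemptive form of priority scheduling, assuming all processes arrive at time 0 and that a lower number means higher priority (all values illustrative):

```c
#include <stdio.h>

int main(void) {
    int burst[]    = {10, 1, 2, 1, 5};
    int priority[] = { 3, 1, 4, 5, 2};   /* lower number = higher priority */
    int n = 5, done[5] = {0}, clock = 0;

    for (int k = 0; k < n; k++) {
        int pick = -1;
        for (int i = 0; i < n; i++)      /* choose highest-priority remaining job */
            if (!done[i] && (pick < 0 || priority[i] < priority[pick]))
                pick = i;
        printf("t=%2d: run P%d (waited %d)\n", clock, pick + 1, clock);
        clock += burst[pick];
        done[pick] = 1;
    }
    return 0;
}
```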

Round robin:
Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a
fixed time slot. It is the preemptive version of First come First Serve CPU Scheduling algorithm.
Round Robin CPU Algorithm generally focuses on Time Sharing technique.
Characteristics of Round robin:
• It is simple, easy to use, and starvation-free, as all processes get a balanced CPU
allocation.
• It is one of the most widely used methods in CPU scheduling.
• It is considered preemptive, as each process is given the CPU for only a limited time.
Advantages of Round robin:
• Round robin seems to be fair as every process gets an equal share of CPU.
• The newly created process is added to the end of the ready queue.
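
A minimal Round Robin sketch with a fixed time quantum, assuming all processes arrive at time 0; cycling over the array approximates the circular ready queue (quantum and bursts illustrative):

```c
#include <stdio.h>

int main(void) {
    int remaining[] = {24, 3, 3};   /* remaining burst time per process */
    int n = 3, quantum = 4, clock = 0, unfinished = n;

    while (unfinished > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;   /* already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            printf("t=%2d: P%d runs for %d\n", clock, i + 1, slice);
            clock += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) unfinished--;
        }
    }
    return 0;
}
```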
