OS Unit 2
Stack
Stack stores temporary information such as method or function arguments, the return address,
and local variables.
Heap
Heap is the memory area that is dynamically allocated to a process while it is running.
Text
This includes the compiled program code, along with the current activity represented by the value of the Program Counter and the contents of the processor's registers.
Data
Data contains both global and static variables.
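As a rough illustration, the hypothetical C snippet below (variable names are my own) shows which section each kind of variable typically lives in:

#include <stdlib.h>

int counter = 0;              /* global variable: data section */
static int cache_size = 64;   /* static variable: data section */

int add(int a, int b) {       /* compiled function code: text section */
    int sum = a + b;          /* local variable: stack */
    return sum;
}

int main(void) {
    int *buf = malloc(16 * sizeof(int)); /* dynamically allocated: heap */
    int result = add(3, 4);              /* arguments and return address go on the stack */
    free(buf);
    return result;
}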
Process Life Cycle:
When a process executes, it passes through different states.
In general, a process can have one of the following five states at a time.
State Description
Start / New: This is the initial state when a process is first created.
Ready: The process is waiting to be assigned to a processor; it is ready to run as soon as the CPU becomes free.
Running: The process's instructions are being executed by the CPU.
Waiting: The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available.
Terminated or Exit: This state occurs when the process has finished execution.
Definition of a Program
A program is a piece of code which may be a single line or millions of lines. A computer program is
usually written by a computer programmer in a programming language.
Process Control Block
A Process Control Block is a data structure maintained by the Operating System for every process. The
PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track
of a process
A Process Control Block consists of:
No. Information & Description
1. Process State: The current state of the process, i.e., whether it is new, ready, running, waiting, or terminated.
2. Process Privileges: Required to allow or disallow access to system resources.
3. Process ID: Unique identification for each process in the operating system.
4. Pointer: A pointer to the parent process.
5. Program Counter: A pointer to the address of the next instruction to be executed for this process.
6. CPU Registers: The various CPU registers whose contents must be saved when the process leaves the running state and restored when it resumes.
7. CPU Scheduling Information: Process priority and other scheduling information required to schedule the process.
8. Memory Management Information: Information such as page tables, memory limits, and segment tables, depending on the memory system used by the operating system.
9. Accounting Information: The amount of CPU time used for process execution, time limits, execution ID, etc.
10. IO Status Information: The list of I/O devices allocated to the process.
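As a minimal sketch, a PCB could be declared in C roughly as follows (all field names are illustrative, not taken from any real kernel):

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } process_state;

typedef struct pcb {
    int pid;                 /* unique process ID */
    process_state state;     /* current state of the process */
    int privileges;          /* flags controlling access to system resources */
    struct pcb *parent;      /* pointer to the parent process's PCB */
    unsigned long pc;        /* program counter: address of the next instruction */
    unsigned long regs[16];  /* saved CPU register contents */
    int open_devices[8];     /* I/O status: devices allocated to the process */
    struct pcb *next;        /* link field for the scheduling queues discussed below */
} pcb_t;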
Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Process scheduling is an essential part of a Multiprogramming operating system.
Categories in Scheduling
• Non-preemptive:
In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU until it terminates or reaches a waiting state. The scheduler does not interrupt a running process in the middle of its execution; instead, it waits until the process completes its CPU burst and then allocates the CPU to another process.
• Preemptive:
Preemptive scheduling is used when a process switches from the running state to the ready
state or from the waiting state to the ready state. The resources (mainly CPU cycles) are
allocated to the process for a limited amount of time and then taken away, and the process
is again placed back in the ready queue.
Process Scheduling Queues
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. OS maintains
the following important process scheduling queues:
• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue.
• Device queues − The processes that are blocked due to the unavailability of an I/O device constitute this queue.
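These queues are commonly implemented as linked lists of PCBs. A minimal sketch in C, building on the hypothetical pcb_t above:

typedef struct {
    pcb_t *head, *tail;      /* front and back of the queue */
} queue_t;

/* add a PCB to the tail of a scheduling queue */
void enqueue(queue_t *q, pcb_t *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p;
    else q->head = p;
    q->tail = p;
}

/* remove and return the PCB at the head, or NULL if the queue is empty */
pcb_t *dequeue(queue_t *q) {
    pcb_t *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}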
Schedulers
Schedulers are special system software that handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types:
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long Term Scheduler
• It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the job queue and loads them into memory, where they become available for CPU scheduling.
• The main purpose of a long-term scheduler is to balance the number of CPU- bound jobs
and the I/O bound jobs in the main memory.
Short Term Scheduler
• It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the transition of a process from the ready state to the running state: the CPU scheduler selects one process from among those that are ready to execute and allocates the CPU to it.
• Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next. Short-term schedulers are faster than long-term schedulers.
Medium term scheduler
• Medium-term scheduling is a part of swapping. It removes the processes from the memory.
It reduces the degree of multiprogramming.
• A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This procedure is called swapping, and the process is said to be swapped out or rolled out.
Context Switching
❖ Each time the CPU makes a switch, the system temporarily suspends the currently executing task,
saves its state (context) to a process control block (PCB), and then executes the next task in the
queue.
❖ If that task had already been started earlier, the CPU retrieves its saved state so that execution can resume where it left off.
❖ Context switching is a core capability of a multitasking operating system.
❖ Context switching enables multiple processes to share a single CPU.
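Conceptually, the core of a context switch might look like the pseudo-C below; this is only a sketch (save_registers, get_program_counter, load_registers, and jump_to are hypothetical helpers, since real context switches are written in architecture-specific assembly):

void context_switch(pcb_t *current, pcb_t *next) {
    /* save the outgoing process's context into its PCB */
    save_registers(current->regs);
    current->pc = get_program_counter();
    current->state = READY;

    /* restore the incoming process's context from its PCB */
    next->state = RUNNING;
    load_registers(next->regs);
    jump_to(next->pc);               /* resume where it left off */
}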
Operation on a Process
The execution of a process is a complex activity involving various operations. The following operations are performed during the execution of a process.
• The process is in new state when it is being created.
• Then the process is moved to the ready state, where it waits till it is taken for execution. There
may be many such processes in the ready state.
• One of these processes will be selected and will be given the processor, and the selected process
moves to the running state.
• A process, while running, may have to wait for I/O or wait for any other event to take place. That
process is now moved to the waiting state.
• Once the process completes execution, it moves to the terminated state.
Process Synchronization
• Process Synchronization is the coordination of execution of multiple processes in a multi-
process system to ensure that they access shared resources in a controlled and predictable
manner.
• In a multi-process system, synchronization is necessary to ensure data consistency and
integrity, and to avoid the risk of deadlocks and other synchronization problems.
• On the basis of synchronization, processes are categorized as one of the following two
types:
• Independent Process
A process that does not affect, and is not affected by, any other process during its execution is called an independent process. Example: a process that does not share any variables, databases, files, etc. with other processes.
• Cooperating Process
A process that affects, or is affected by, another process during its execution is called a cooperating process. Example: processes that share files, variables, databases, etc. are cooperating processes.
The process synchronization problem arises with cooperating processes precisely because resources are shared between them.
Concept of cooperating processes and how they are used in operating systems
• Inter-Process Communication (IPC): Cooperating processes interact with each other via Inter-Process Communication (IPC). Because they interact and share resources with one another, their tasks can be synchronized and the possibility of deadlock decreases. IPC can be implemented with several mechanisms, such as pipes, message queues, semaphores, and shared memory (a pipe sketch follows this list).
• Concurrent execution: Cooperating processes execute concurrently; the operating system scheduler selects which process moves from the ready queue to the running state. Because several processes execute concurrently, overall completion time decreases.
• Resource sharing: Cooperating processes cooperate by sharing resources including the CPU, memory, and I/O hardware. When several processes share resources by taking turns, the need for synchronization increases, and the response time of each process may also increase.
• Deadlocks: Because cooperating processes share resources, a deadlock condition can arise. For example, if process P1 holds resource A while waiting for resource B, and process P2 holds resource B while waiting for resource A, the two cooperating processes are deadlocked.
• Process scheduling: The scheduler decides which process should execute next using a scheduling algorithm such as Round-Robin, FCFS, SJF, or Priority scheduling.
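As a small illustration of IPC between cooperating processes, the POSIX pipe sketch below has a parent process send a message to its child (a minimal example, with the message text made up):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[32];

    pipe(fd);                        /* fd[0] is the read end, fd[1] the write end */
    if (fork() == 0) {               /* child: the cooperating process */
        read(fd[0], buf, sizeof(buf));
        printf("child received: %s\n", buf);
    } else {                         /* parent: writes a message into the pipe */
        write(fd[1], "hello", strlen("hello") + 1);
    }
    return 0;
}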
Threads
Thread is a separate execution path. It is a lightweight process that the operating system can schedule
and run concurrently with other threads. The operating system creates and manages threads, and they
share the same memory and resources as the program that created them. This enables multiple threads to
collaborate and work efficiently within a single program.
Each thread belongs to exactly one process. In an operating system that supports multithreading, the
process can consist of many threads.
Why Do We Need Thread?
❖ Threads run in parallel improving the application performance. Each thread has its own CPU
state and stack, but they share the address space of the process and the environment.
❖ Threads can share common data so they do not need to use interprocess communication.
❖ Like the processes, threads also have states like ready, executing, blocked, etc.
❖ Priority can be assigned to the threads just like the process, and the highest priority thread is
scheduled first.
❖ Each thread has its own Thread Control Block (TCB). As with processes, a context switch occurs for threads, and the register contents are saved in the TCB.
❖ As threads share the same address space and resources, synchronization is also required for
the various activities of the thread.
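A minimal POSIX-threads sketch showing two threads of one process sharing (and synchronizing access to) a global variable:

#include <pthread.h>
#include <stdio.h>

int shared = 0;                      /* in the data section, visible to all threads */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    pthread_mutex_lock(&lock);       /* synchronization is needed: same address space */
    shared += 1;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared); /* prints 2 */
    return 0;
}

(Compile with the -pthread flag, e.g. gcc demo.c -pthread.)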
Multithreading
Multithreading is a technique used in operating systems to improve the performance and
responsiveness of computer systems. Multithreading allows multiple threads (i.e.,
lightweight processes) to share the same resources of a single process, such as the CPU,
memory, and I/O devices.
For example, in a browser, multiple tabs can run as different threads.
Advantages of Thread
Responsiveness: If a process is divided into multiple threads and one thread completes its execution, its output can be returned immediately, without waiting for the other threads to finish.
Faster context switch: Context switch time between threads is lower compared to the
process context switch.
Effective utilization of multiprocessor system: If we have multiple threads in a single
process, then we can schedule multiple threads on multiple processors. This will make
process execution faster.
Resource sharing: Resources like code, data, and files can be shared among all threads
within a process. Note: Stacks and registers can’t be shared among the threads. Each thread
has its own stack and registers.
Communication: Communication between multiple threads is easier, as the threads share a common address space, whereas processes must follow specific inter-process communication techniques to communicate with one another.
Enhanced throughput of the system: If a process is divided into multiple threads, and
each thread function is considered as one job, then the number of jobs completed per unit of
time is increased, thus increasing the throughput of the system.
Types of Threads
Threads are of two types. These are described below.
• User Level Thread
• Kernel Level Thread
User Level Thread:
A user-level thread is a type of thread that is not created using system calls; the kernel plays no part in the management of user-level threads, so they can be easily implemented in user space. To the kernel, a process consisting only of user-level threads appears as a single-threaded process and is managed as one kernel-level entity.
Advantages of User-Level Threads
• Implementation of the User-Level Thread is easier than Kernel Level Thread.
• Context Switch Time is less in User Level Thread.
• User-Level Thread is more efficient than Kernel-Level Thread.
• Because of the presence of only Program Counter, Register Set, and Stack Space, it has
a simple representation.
Disadvantages of User-Level Threads
• There is a lack of coordination between Thread and Kernel.
• In case of a page fault, the whole process can be blocked.
Kernel Level Threads
A kernel-level thread is a thread that the operating system itself recognizes and manages directly. The kernel maintains its own thread table to keep track of all the threads in the system, and the operating system kernel handles thread management. Because of this, kernel-level threads have a somewhat longer context-switching time.
Advantages of Kernel-Level Threads
• It has up-to-date information on all threads.
• Applications that block frequently are handled better by kernel-level threads.
• Whenever a process requires more processing time, the kernel can allocate more time to it.
Disadvantages of Kernel-Level threads
• Kernel-Level Thread is slower than User-Level Thread.
• Implementation of this type of thread is a little more complex than a user-level thread.
Components of Threads
These are the basic components of a thread:
• Stack Space
• Register Set
• Program Counter
Difference between Process and Thread
• A process is a heavyweight unit of execution with its own address space; a thread is a lightweight unit of execution that shares the address space of its parent process.
• Process creation and context switching are more expensive than thread creation and thread switching.
• Processes communicate through inter-process communication mechanisms, while threads of the same process communicate directly through shared memory.
• If one process is blocked, other processes can continue executing; with user-level threads, if one thread blocks, the entire process may block.
Scheduling Criteria
Response Time
Response Time = Time at which the CPU is first allocated to the process - Arrival Time
Completion Time
The completion time is the time when the process stops executing, which means that the
process has completed its burst time and is completely executed.
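A quick worked example with made-up numbers: if a process arrives at time 0, is first allocated the CPU at time 4, and finishes executing at time 9, then its response time is 4 - 0 = 4 units and its completion time is 9 units.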
Priority
If the operating system assigns priorities to processes, the scheduling mechanism should
favor the higher-priority processes.
Predictability
A given process always should run in about the same amount of time under a similar system
load.
CPU Scheduling Algorithms
There are mainly two types of scheduling methods:
Preemptive Scheduling: Preemptive scheduling is used when a process switches from
running state to ready state or from the waiting state to the ready state.
Non-Preemptive Scheduling: Non-Preemptive scheduling is used when a process
terminates, or when a process switches from running state to waiting state.
The following are common CPU scheduling algorithms:
First Come First Serve:
FCFS is considered the simplest of all operating system scheduling algorithms. The first-come, first-served scheduling algorithm states that the process that requests the CPU first is allocated the CPU first; it is implemented using a FIFO queue.
Characteristics of FCFS:
• FCFS is a non-preemptive CPU scheduling algorithm.
• Tasks are always executed on a first-come, first-served basis.
• FCFS is easy to implement and use.
• This algorithm is not much efficient in performance, and the wait time is quite high.
Advantages of FCFS:
• Easy to implement
• First come, first serve method
Disadvantages of FCFS:
• FCFS suffers from Convoy effect.
• The average waiting time is much higher than the other algorithms.
• Because it is so simple, FCFS is not very efficient in practice.
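The sketch below computes FCFS average waiting and turnaround times in C, assuming all processes arrive at time 0 (the burst times are made up for illustration):

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};        /* CPU burst times, in FIFO arrival order */
    int n = 3, wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;          /* process i waits for all earlier bursts */
        total_tat  += wait + burst[i];
        wait       += burst[i];
    }
    printf("average waiting time = %.2f\n", (float)total_wait / n);    /* 17.00 */
    printf("average turnaround time = %.2f\n", (float)total_tat / n);  /* 27.00 */
    return 0;
}

Note how the long first burst (24) forces the short jobs to wait behind it; this is the convoy effect mentioned above.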
Priority Scheduling:
Preemptive priority CPU scheduling is a preemptive method of CPU scheduling that works on the basis of process priority. In this algorithm, each process is assigned a priority, and the scheduler always runs the highest-priority process first. In the case of a conflict, that is, when more than one process has the same priority, the algorithm falls back on FCFS (First Come First Serve) ordering.
Characteristics of Priority Scheduling:
• Schedules tasks based on priority.
• When higher-priority work arrives while a lower-priority task is executing, the higher-priority work takes the place of the lower-priority one, and the latter is suspended until the higher-priority execution is complete.
• The lower the number assigned, the higher the priority level of the process.
Advantages of Priority Scheduling:
• The average waiting time is less than FCFS
• Less complex
Disadvantages of Priority Scheduling:
• One of the most common demerits of the preemptive priority CPU scheduling algorithm is the starvation problem: a low-priority process may have to wait a very long time before it is scheduled onto the CPU.
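A quick illustration with made-up numbers (lower number = higher priority): suppose P1 (burst 10, priority 3), P2 (burst 1, priority 1), and P3 (burst 2, priority 2) all arrive at time 0. The scheduler runs P2 first (time 0-1), then P3 (1-3), then P1 (3-13), giving waiting times of 3, 0, and 1 and an average waiting time of (3 + 0 + 1) / 3 ≈ 1.33 units.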
Round robin:
Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a
fixed time slot. It is the preemptive version of First come First Serve CPU Scheduling algorithm.
Round Robin CPU Algorithm generally focuses on Time Sharing technique.
Characteristics of Round robin:
• It is simple, easy to use, and starvation-free, as all processes get a balanced CPU allocation.
• It is one of the most widely used CPU scheduling methods.
• It is considered preemptive, as each process is given the CPU for only a limited time.
Advantages of Round robin:
• Round robin seems to be fair as every process gets an equal share of CPU.
• The newly created process is added to the end of the ready queue.
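A short worked example with a hypothetical workload: with a time quantum of 4 and processes P1 (burst 24), P2 (burst 3), and P3 (burst 3) all arriving at time 0, the schedule is P1 (0-4), P2 (4-7), P3 (7-10), and then P1 runs in successive quanta from 10 to 30. The waiting times are P1 = 6, P2 = 4, and P3 = 7, so the average waiting time is 17 / 3 ≈ 5.67 units.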