
Operating System

This Operating System tutorial provides the basic and advanced concepts of
operating systems. It is designed for beginners, professionals, and GATE
aspirants, and it was prepared after deep research into every concept.

The content is described in a detailed manner and should be able to answer
most of your queries. The tutorial also contains numerical examples based on
previous years' GATE questions, which will help you approach the problems in
a practical manner.

An operating system can be defined as an interface between the user and the
hardware. It provides an environment in which the user can perform tasks in
a convenient and efficient way.

The tutorial is divided into various parts based on operating system
functions, such as Process Management, Process Synchronization, Deadlocks,
and File Management.


Operating System Definition and Function


A computer system comprises hardware and software. Hardware can only
understand machine code (in the form of 0s and 1s), which doesn't make any
sense to a naive user.

We need a system that can act as an intermediary and manage all the
processes and resources present in the system. An operating system can be
defined as an interface between the user and the hardware. It is responsible
for the execution of all processes, resource allocation, CPU management,
file management, and many other tasks.

The purpose of an operating system is to provide an environment in which a
user can execute programs in a convenient and efficient manner.

Structure of a Computer System


A Computer System consists of:


o Users (people who are using the computer)
o Application Programs (compilers, databases, games, video players, browsers, etc.)
o System Programs (shells, editors, compilers, etc.)
o Operating System (a special program which acts as an interface between user and hardware)
o Hardware (CPU, disks, memory, etc.)
What does an Operating system do?
1. Process Management
2. Process Synchronization
3. Memory Management
4. CPU Scheduling
5. File Management
6. Security

Functions of Operating System




An Operating System acts as a communication bridge (interface)


between the user and computer hardware. The purpose of an
operating system is to provide a platform on which a user can
execute programs conveniently and efficiently.
An operating system is a piece of software that manages the
allocation of computer hardware. The coordination of the hardware
must be appropriate to ensure the correct working of the computer
system and to prevent user programs from interfering with its
proper working.
The main goal of the operating system is to make the computer
environment more convenient to use, and the secondary goal is to
use the resources as efficiently as possible.
Why are Operating Systems Used?
The operating system is used as a communication channel between the
computer hardware and the user. It works as an intermediary between the
system hardware and the end user. The operating system handles the following
responsibilities:
 It controls all the computer resources.
 It provides valuable services to user programs.
 It coordinates the execution of user programs.
 It provides resources for user programs.
 It provides an interface (virtual machine) to the user.
 It hides the complexity of software.
 It supports multiple execution modes.
 It monitors the execution of user programs to prevent errors.
Functions of an Operating System
Memory Management
The operating system manages the primary memory or main memory. Main
memory is made up of a large array of bytes or words, where each byte or
word is assigned a certain address. Main memory is fast storage, and it can
be accessed directly by the CPU. For a program to be executed, it must
first be loaded into main memory. The operating system manages the
allocation and deallocation of memory to various processes and ensures that
one process does not consume the memory allocated to another. An operating
system performs the following activities for memory management:
 It keeps track of primary memory, i.e., which bytes of memory are used by
which user program, which memory addresses have already been allocated,
and which have not yet been used.
 In multiprogramming, the OS decides the order in which processes are
granted memory access, and for how long.
 It allocates memory to a process when the process requests it and
deallocates the memory when the process has terminated or is
performing an I/O operation. A toy sketch of this bookkeeping is shown below.
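As a toy illustration of this bookkeeping (not a real OS interface; the
frame count and function names are invented for this sketch), an allocator
can mark fixed-size memory frames as used or free:

    #include <stdbool.h>

    #define FRAMES 1024                /* illustrative number of frames */
    static bool used[FRAMES];          /* one "allocated?" flag per frame */

    /* Allocate one free frame; return its index, or -1 if memory is full. */
    int alloc_frame(void) {
        for (int i = 0; i < FRAMES; i++)
            if (!used[i]) { used[i] = true; return i; }
        return -1;
    }

    /* Deallocate a frame when its owning process terminates. */
    void free_frame(int i) {
        if (i >= 0 && i < FRAMES) used[i] = false;
    }

A real OS also records which process owns each region; this sketch only
tracks used versus free.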

Processor Management
In a multi-programming environment, the OS decides the order in which
processes have access to the processor, and how much processing time
each process has. This function of OS is called Process Scheduling. An
Operating System performs the following activities for Processor
Management.
An operating system manages the processor's work by allocating various jobs
to it and ensuring that each process receives enough processor time to
function properly. In particular, the OS:
 Keeps track of the status of processes. The program that performs this
task is known as the traffic controller.
 Allocates the CPU (the processor) to a process.
 Deallocates the processor when a process is no longer required.
A toy round-robin sketch of time slicing follows below.
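As a toy illustration of time slicing (the process IDs, burst times, and
quantum below are invented for this sketch and do not reflect any particular
scheduler), a round-robin loop gives each ready process a bounded slice in
turn:

    #include <stdio.h>

    int main(void) {
        int pid[]       = {1, 2, 3};    /* ready-queue process IDs */
        int remaining[] = {3, 5, 2};    /* time units each still needs */
        int n = 3, quantum = 2, done = 0;

        while (done < n) {              /* cycle until every process finishes */
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                remaining[i] -= slice;  /* "run" the process for one slice */
                printf("P%d runs for %d unit(s)\n", pid[i], slice);
                if (remaining[i] == 0) done++;
            }
        }
        return 0;
    }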


Device Management
An OS manages device communication via its respective drivers. It performs
the following activities for device management.
 Keeps track of all devices connected to the system. Designates a
program responsible for every device known as the Input/Output
controller.
 Decides which process gets access to a certain device and for how long.
 Allocates devices effectively and efficiently. Deallocates devices when
they are no longer required.
 There are various input and output devices. An OS controls the working
of these input-output devices.
 It receives the requests from these devices, performs a specific task, and
communicates back to the requesting process.
File Management
A file system is organized into directories for efficient and easy navigation and
usage. These directories may contain other directories and other files. An
Operating System carries out the following file management activities. It
keeps track of where information is stored, user access settings, the status
of every file, and more. These facilities are collectively known as the file
system. An OS keeps track of information regarding the creation, deletion,
transfer, copy, and storage of files in an organized way. It also maintains the
integrity of the data stored in these files, including the file directory structure,
by protecting against unauthorized access.


User Interface or Command Interpreter


The user interacts with the computer system through the operating system;
hence, the OS acts as an interface between the user and the computer
hardware. This user interface is offered through a set of commands or a
graphical user interface (GUI). Through this interface, the user interacts
with the applications and the machine hardware.

Booting the Computer


The process of starting or restarting the computer is known as booting.
Turning the computer on from a completely powered-off state is called cold
booting. Warm booting is the process of using the operating system to
restart the computer.
Security
The operating system uses password protection and other similar techniques
to protect user data. It also prevents unauthorized access to programs and
user data. The operating system provides various techniques that assure the
integrity and confidentiality of user data. The following security measures
are used to protect user data:
 Protection against unauthorized access through login.
 Protection against intrusion by keeping the firewall active.
 Protecting the system memory against malicious access.
 Displaying messages related to system vulnerabilities.
Control Over System Performance
Operating systems play a pivotal role in controlling and optimizing system
performance. They act as intermediaries between hardware and software,
ensuring that computing resources are efficiently utilized. One fundamental
aspect is resource allocation, where the OS allocates CPU time, memory,
and I/O devices to different processes, striving to provide fair and optimal
resource utilization. Process scheduling, a critical function, helps decide
which processes or threads should run and when, preventing any single task
from monopolizing the CPU and enabling effective multitasking.
Major Achievements

The operating system manages computer hardware and software resources. The
major achievements of operating systems are as follows:

1. Process

2. Memory Management

3. Information Protection & Security

4. System Structure

Process:

A process is a program at the time of execution. The term
'process' was first used by Daley and Dennis in the 1960s. While
developing multiprogramming, time-sharing, and real-time systems,
several problems arose due to timing and synchronization.

Memory Management:

Here, memory means main memory (RAM), and the term memory management
specifies how to utilize memory efficiently. So, the main task of memory
management is efficient memory utilization together with efficient
processor utilization.

(i) Process isolation: It means controlling how one process interacts with
the data and memory of another process, so that processes cannot interfere
with each other.

(ii) Automatic allocation and management: Memory should be allocated
dynamically based on the priorities of processes. Otherwise, process
waiting time will increase, which decreases CPU utilization and memory
utilization.

(iii) Protection and access control: Rather than applying protection
techniques and access control to all processes, it is better to apply them
only to important applications; this saves execution time.
(iv) Long-term storage: Long-term storage of processes reduces memory
utilization.

Information protection and security:

Here, the term protection means securing the resources and information from
unauthorized persons. The operating system follows a variety of methods for
protection and security.

(i) Access control: The operating system grants users access permissions
for important files and applications.

(ii) Information flow control: The operating system regulates


the flow of data within the system.

(iii) Certification: The operating system assigns priorities and
hierarchies to resources; using these, unauthorized processes can be
controlled.

System Structure:

In the early days, an operating system consisted of very little code.
Later, more and more features were added to operating systems, and the
amount of operating system code steadily increased.
History and Evolution of the Operating System
The operating system is a system program that serves as an interface
between the computing system and the end-user. Operating systems create
an environment where the user can run any programs or communicate with
software or applications in a comfortable and well-organized way.

Furthermore, an operating system is a software program that manages and
controls the execution of application programs, software resources, and
computer hardware. It also helps manage software and hardware resources,
covering file management, memory management, input/output, and many
peripheral devices such as disk drives and printers. Popular operating
systems include Linux, Windows, macOS, VMS, OS/400, etc.
Functions of Operating System

o Processor management
o Act as a Resource Manager
o Memory Management
o File Management
o Security
o Device Management
o Input devices / Output devices
o Deadlock Prevention
o Time Management
o Coordinate with system software or hardware

Types of Operating System


1. Batch Operating System
2. Time-Sharing Operating System
3. Embedded Operating System
4. Multiprogramming Operating System
5. Network Operating System
6. Distributed Operating System
7. Multiprocessing Operating System
8. Real-Time Operating System

Batch Operating System


In a batch operating system, there is no direct interaction between the
user and the computer. The user prepares jobs offline on punch cards, paper
tape, or magnetic tape and hands them over to the computer operator. The
operator then sorts the jobs into batches of similar types (such as B2, B3,
and B4) and submits the batches to the CPU, which executes the jobs one by
one. When all the jobs are finished, the computer operator returns the
output to the users.

Time-Sharing Operating System


It is the type of operating system that allows many people, located at
different places, to share and use a specific system at the same time. The
time-sharing operating system is a logical extension of multiprogramming
through which users can run multiple tasks concurrently. Furthermore, it
provides each user with a terminal for input and output. The CPU's time is
shared among many user processes; this sharing of processor time among
multiple users simultaneously is termed time-sharing.
Embedded Operating System
An embedded operating system is a special-purpose operating system used in
the embedded hardware of a computer system. These operating systems are
designed to work on dedicated devices like automated teller machines (ATMs),
airplane systems, digital home assistants, and Internet of Things (IoT)
devices.
Multiprogramming Operating System
When a process waits for an I/O resource, the CPU remains idle; this
underutilizes the CPU and wastes system resources. Hence, operating systems
introduced a new concept known as multiprogramming. A multiprogramming
operating system is one in which two or more processes or programs are
active simultaneously and execute one after another on the same computer
system. While one program runs and uses the CPU, another program or file can
use the I/O resources or wait for other system resources to become
available. This improves the use of system resources, thereby increasing
system throughput. Such a system is known as a multiprogramming operating
system.
Network Operating System
A network operating system is an important category of operating system
that operates on a server and uses network devices like switches, routers,
or firewalls to handle data, applications, and other network resources. It
provides connectivity among autonomous computers. A network operating
system also makes it possible to share data, files, hardware devices, and
printers among multiple computers that communicate with each other.

Types of network operating system

o Peer-to-peer network operating system: This type of network operating
system allows users to share files and resources between two or more
computers over a LAN.

o Client-server network operating system: This type of network operating
system allows users to access resources, functions, and applications
through a common server or central hub. Client workstations can access all
resources that exist in the central hub of the network, and multiple
clients can access and share different types of resources over the network
from different locations.

Distributed Operating system


A distributed operating system provides an environment in which multiple
independent CPUs or processors communicate with each other through
physically separate computational nodes. Each node contains specific
software that communicates with the global aggregate operating system. With
a distributed system, a programmer or developer can easily access any node
and its resources to execute computational tasks and achieve a common goal.
It is an extension of the network operating system that facilitates a high
degree of connectivity for communicating with other users over the network.
Multiprocessing Operating System
This type of operating system uses two or more central processing units
(CPUs) in a single computer system. These multiprocessor, or parallel,
operating systems are used to increase the computer system's efficiency. In
a multiprocessor system, the processors share the computer bus, clock,
memory, and input/output devices for the concurrent execution of processes
and resource management.

Real-Time Operating System


A real-time operating system is an important type of operating system that
provides services and data processing resources for applications in which
the time interval required to process and respond to input/output must be
very small and free of delay. For example, real-life situations such as
controlling an automatic car, a traffic signal, a nuclear reactor, or an
aircraft require an immediate response to complete tasks within a specified
time. Hence, a real-time operating system must be fast and responsive; it
is used in embedded systems, weapon systems, robots, scientific research
and experiments, and various other real-time applications.

Types of the real-time operating system:


Hard Real-Time System
This type of OS is used where critical tasks must be completed within a
defined time limit. If the response time is too high, the result is not accepted by
the system, and serious issues such as a system failure may occur. In a hard
real-time system, secondary storage is either limited or missing, so these
systems store data in ROM.
Soft Real-Time System
A soft real-time system is a less restrictive system that can tolerate software and
hardware resource delays by the operating system. In a soft real-time system, a
critical task takes priority over less important tasks and retains that priority until
it completes. Also, a time limit is set for each job, and short delays beyond it are
acceptable for further tasks. Examples include computer audio and video, virtual
reality, reservation systems, and projects such as undersea exploration.
Generations of Operating System
The First Generation (1940 to early 1950s)

When the first electronic computers were developed in the 1940s, they were
created without any operating system. In those early times, users had full
access to the computer and wrote a program for each task in absolute machine
language. Programmers could perform and solve only simple mathematical
calculations during this generation, and such calculations did not require
an operating system.

The Second Generation (1955 - 1965)

The first operating system (OS) was created in the early 1950s and was
known as GMOS; General Motors developed this OS for IBM computers. The
second-generation operating system was based on single-stream batch
processing: it collected similar jobs in groups or batches and then
submitted them to the operating system on punch cards, completing all the
jobs on one machine. On completion of each job (whether normal or
abnormal), control transferred back to the operating system, which cleaned
up after the finished job and then read and initiated the next job from the
punch cards. The new machines of this era were called mainframes; they were
very big and were used by professional operators.

The Third Generation (1965 - 1980)

During the late 1960s, operating system designers were able to develop
operating systems that could perform multiple tasks in a computer
simultaneously, a capability called multiprogramming. The introduction of
multiprogramming played a very important role in the development of
operating systems, as it allows the CPU to be kept busy at all times by
performing different tasks on a computer at the same time. The third
generation also saw the phenomenal growth of minicomputers, starting in
1961 with the DEC PDP-1. These PDPs led to the creation of personal
computers in the fourth generation.

The Fourth Generation (1980 - Present Day)

The fourth generation of operating systems is related to the development of
the personal computer. Personal computers were very similar to the
minicomputers developed in the third generation, but cost only a small
fraction as much. A major factor in the rise of the personal computer was
the birth of Microsoft. Bill Gates and Paul Allen founded Microsoft in 1975
with the vision of taking personal computers to the next level, and they
introduced MS-DOS in 1981; however, its cryptic commands were difficult for
ordinary users to understand. Microsoft later created the Windows operating
system, which has become the most popular and most commonly used operating
system technology. Microsoft released various versions such as Windows 95,
Windows 98, Windows XP, and Windows 7; currently, most Windows users run
Windows 10. Besides Windows, another popular operating system was built by
Apple in the 1980s under co-founder Steve Jobs. They named it Macintosh OS,
or Mac OS.

Advantages of Operating System


o It is helpful for monitoring and regulating resources.
o It is easy to operate, since it has a basic graphical user interface to
communicate with the device.
o It is used to create interaction between the users and the computer
applications or hardware.
o The performance of the computer system depends on the CPU.
o The response time and throughput of any process or program are fast.
o It can share different resources, like fax machines and printers.
o It also offers a platform for various types of applications, such as
system and web applications.

Disadvantages of the Operating System


o It allows only a limited number of tasks to run at the same time.
o If any error occurs in the operating system, the stored data can be
destroyed.
o It is very difficult for the OS to provide complete protection from
viruses, because a threat or virus can occur at any time in a system.
o An unknown user can easily use a system without the permission of the
original user.
o The cost of an operating system is very high.

Multiprocessor and Multicore System in


Operating System
Multicore and multiprocessor systems both serve to accelerate computing. A
multicore system contains multiple cores, or processing units, in a single
CPU. A multiprocessor system is made up of several CPUs. A multicore
processor does not need the complex configuration that a multiprocessor
does; in contrast, a multiprocessor is more reliable and capable of running
many programs. In this article, you will learn about multiprocessor and
multicore systems in the operating system, with their advantages and
disadvantages.

What is a Multiprocessor System?


A multiprocessor has multiple CPUs or processors in the system. Multiple
instructions are executed simultaneously by these systems. As a result,
throughput is increased. If one CPU fails, the other processors will continue
to work normally. So, multiprocessors are more reliable.

Shared memory or distributed memory can be used in multiprocessor


systems. Each processor in a shared memory multiprocessor shares main
memory and peripherals to execute instructions concurrently. In these
systems, all CPUs access the main memory over the same bus. Most CPUs
will be idle as the bus traffic increases. This type of multiprocessor is also
known as the symmetric multiprocessor. It provides a single memory space
for all processors.

Each CPU in a distributed memory multiprocessor has its own private


memory. Each processor can use local data to accomplish the computational
tasks. The processor may use the bus to communicate with other processors
or access the main memory if remote data is required.

Advantages and disadvantages of Multiprocessor System


There are various advantages and disadvantages of the multiprocessor
system. Some advantages and disadvantages of the multiprocessor system
are as follows:

Advantages

There are various advantages of the multiprocessor system. Some


advantages of the multiprocessor system are as follows:

1. It is a very reliable system, because multiple processors may share
their work between the systems, and the work is completed with
collaboration.
2. It requires complex configuration.
3. Parallel processing is achieved via multiprocessing.
4. If multiple processors work at the same time, the throughput may
increase.
5. Multiple processors can execute multiple processes simultaneously.

Disadvantages

There are various disadvantages of the multiprocessor system. Some


disadvantages of the multiprocessor system are as follows:

1. Multiprocessors run multiple processes across different processors, so
each processor requires its own memory space.
2. If one of the processors fails, its work must be shared among the
remaining processors, increasing their load.
3. These types of systems are very expensive.
4. If one processor is already utilizing an I/O device, other processors
cannot use the same I/O device, which may create a deadlock.
5. The operating system implementation is complicated, because multiple
processors must communicate with each other.

What is a Multicore System?


A single computing component with multiple cores (independent processing
units) is known as a multicore processor. It denotes the presence of a single
CPU with several cores in the system. Individually, these cores may read and
run computer instructions. They work in such a way that the computer
system appears to have several processors, although they are cores, not
processors. These cores can execute normal processor instructions, such as
add, move data, and branch.

A single processor in a multicore system may run many instructions


simultaneously, increasing the overall speed of the system's program
execution. It decreases the amount of heat generated by the CPU while
enhancing the speed with which instructions are executed. Multicore
processors are used in various applications, including general-purpose,
embedded, network, and graphics processing (GPU).

The software techniques used to drive the cores in a multicore system are
responsible for the system's performance. Extra focus has been put on
developing software that can execute in parallel, because parallel
execution with the help of the many cores is the goal.

Advantages and disadvantages of Multicore System


There are various advantages and disadvantages of the multicore system.
Some advantages and disadvantages of the multicore system are as follows:

Advantages

There are various advantages of the multicore system. Some advantages of


the multicore system are as follows:

1. Multicore processors can process more data than single-core processors.
2. When you use a multicore processor, the PCB (printed circuit board)
requires less space.
3. It generates less bus traffic.
4. The cores are often integrated onto a single integrated-circuit die, or
onto multiple dies packaged as a single chip. As a result, cache coherency
is improved.
5. These systems are energy efficient, because they provide increased
performance while using less energy.

Disadvantages
There are various disadvantages of the multicore system. Some
disadvantages of the multicore system are as follows:

1. Some operating systems are still designed for single-core processors.
2. Multicore systems are more difficult to manage than single-core
processors.
3. These systems consume more electricity.
4. Multicore systems become hot while doing their work.
5. They are much more expensive than single-core processors.
6. Operating systems designed for multicore processors will run slightly
slower on single-core processors.

Main Differences between the Multiprocessor and


Multicore System

Here, you will learn the main differences between the Multiprocessor and
Multicore systems. Various differences between the Multiprocessor and
Multicore system are as follows:

1. A multiprocessor system with multiple CPUs allows programs to be processed


simultaneously. On the other hand, the multicore system is a single processor
with multiple independent processing units called cores that may read and
execute program instructions.
2. Multiprocessor systems outperform multicore systems in terms of
reliability. A multiprocessor is a computer with many processors; if one
processor fails, the other processors are not affected.
3. Multiprocessors run multiple programs faster than the multicore system. On
the other hand, a multicore system quickly executes a single program.
4. Multicore systems have less traffic than multiprocessors system because the
cores are integrated into a single chip.
5. Multiprocessors require complex configuration. On the other hand, a
multicore system doesn't need to be configured.
6. Multiprocessors are expensive as compared to multicore systems. On the
other hand, multicore systems are cheaper than multiprocessors systems.

Head-to-head Comparison between the


Multiprocessors and Multicore Systems
Here, you will learn the head-to-head comparison between the
Multiprocessors and Multicore systems. The main differences between the
Multiprocessors and Multicore systems are as follows:

Features      | Multiprocessors                                      | Multicore
Definition    | It is a system with multiple CPUs that allows        | A multicore processor is a single processor that contains
              | processing programs simultaneously.                  | multiple independent processing units, known as cores,
              |                                                      | which may read and execute program instructions.
Execution     | Multiprocessors run multiple programs faster         | The multicore executes a single program faster.
              | than a multicore system.                             |
Reliability   | It is more reliable than the multicore system.       | It is not as reliable as a multiprocessor system.
              | If one processor fails, the other processors         |
              | are not affected.                                    |
Traffic       | It has higher traffic than the multicore system.     | It has less traffic than multiprocessors.
Cost          | It is more expensive than a multicore system.        | It is cheaper than a multiprocessor system.
Configuration | It requires complex configuration.                   | It doesn't need to be configured.

Types of Operating Systems (OS)


An operating system is a well-organized collection of programs that manages
the computer hardware. It is a type of system software that is responsible for
the smooth functioning of the computer system.

Batch Operating System


In the 1970s, batch processing was very popular. In this technique, similar
types of jobs were batched together and executed one after another. People
used to have a single computer, which was called a mainframe.

In a batch operating system, access is given to more than one person; users
submit their respective jobs to the system for execution.
The system puts all of the jobs in a queue on a first-come, first-served
basis and then executes the jobs one by one. Users collect their respective
outputs when all the jobs have been executed.

The purpose of this operating system was mainly to transfer control from one
job to another as soon as the job was completed. It contained a small set of
programs called the resident monitor that always resided in one part of the
main memory. The remaining part was used for servicing jobs.
Advantages of Batch OS
o The use of a resident monitor improves computer efficiency, as it
eliminates CPU idle time between two jobs.

Disadvantages of Batch OS
1. Starvation

Batch processing suffers from starvation.

For Example:
There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution
time of J1 is very high, then the other four jobs will never be executed, or
they will have to wait for a very long time. Hence the other processes get
starved.

2. Not Interactive

Batch Processing is not suitable for jobs that are dependent on the user's
input. If a job requires the input of two numbers from the console, then it will
never get it in the batch processing scenario since the user is not present at
the time of execution.

Multiprogramming Operating System


Multiprogramming is an extension to batch processing where the CPU is
always kept busy. Each process needs two types of system time: CPU time
and IO time.

In a multiprogramming environment, when a process performs its I/O, the CPU
can start the execution of other processes. Therefore, multiprogramming
improves the efficiency of the system.
Advantages of Multiprogramming OS
o Throughput is increased, as the CPU always has one program to execute.
o Response time can also be reduced.

Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various system
resources are used efficiently, but they do not provide any user
interaction with the computer system.

Multiprocessing Operating System


In multiprocessing, parallel computing is achieved. More than one processor
is present in the system, and they can execute more than one process
simultaneously, which increases the throughput of the system.

Advantages of Multiprocessing operating system:

o Increased reliability: In a multiprocessing system, processing tasks can
be distributed among several processors. This increases reliability: if one
processor fails, the task can be given to another processor for completion.
o Increased throughput: As the number of processors increases, more work
can be done in less time.

Disadvantages of Multiprocessing operating System

o A multiprocessing operating system is more complex and sophisticated, as
it takes care of multiple CPUs simultaneously.

Multitasking Operating System

The multitasking operating system is a logical extension of a
multiprogramming system that enables multiple programs to run
simultaneously. It allows a user to perform more than one computer task at
the same time.
Advantages of Multitasking operating system
o This operating system is more suited to supporting multiple users
simultaneously.
o The multitasking operating systems have well-defined memory management.

Disadvantages of Multitasking operating system


o The processor is kept busier completing multiple tasks at the same time
in a multitasking environment, so the CPU generates more heat.

Network Operating System


An operating system that includes software and associated protocols to
communicate with other computers via a network conveniently and
cost-effectively is called a network operating system.

Advantages of Network Operating System


o In this type of operating system, network traffic reduces due to the division
between clients and the server.
o This type of system is less expensive to set up and maintain.

Disadvantages of Network Operating System


o In this type of operating system, the failure of any node in a system affects
the whole system.
o Security and performance are important issues. So trained network
administrators are required for network administration.


Real Time Operating System


In real-time systems, each job carries a certain deadline within which it
is supposed to be completed; otherwise, a huge loss may occur, or even if
the result is produced, it will be completely useless.

Applications of real-time systems exist in the case of military
applications: if a missile is to be dropped, it must be dropped with a
certain precision.
Advantages of Real-time operating system:
o It is easy to lay out, develop, and execute real-time applications under
a real-time operating system.
o A real-time operating system achieves maximum utilization of devices and
systems.

Disadvantages of Real-time operating system:


o Real-time operating systems are very costly to develop.
o Real-time operating systems are very complex and can consume critical CPU
cycles.

Time-Sharing Operating System


In a time-sharing operating system, computer resources are allocated in a
time-dependent fashion to several programs simultaneously. Thus, it helps
provide a large number of users with direct access to the main computer. It
is a logical extension of multiprogramming. In time-sharing, the CPU is
switched among multiple programs given by different users on a scheduled
basis.
A time-sharing operating system allows many users to be served
simultaneously, so sophisticated CPU scheduling schemes and Input/output
management are required.

Time-sharing operating systems are very difficult and expensive to build.

Advantages of Time Sharing Operating System


o The time-sharing operating system provides effective utilization and sharing
of resources.
o This system reduces CPU idle and response time.

Disadvantages of Time Sharing Operating System


o Data transmission rates are very high in comparison to other methods.
o Security and integrity of user programs loaded in memory and data need to
be maintained as many users access the system at the same time.

Distributed Operating System


The Distributed Operating system is not installed on a single machine, it is
divided into parts, and these parts are loaded on different machines. A part
of the distributed Operating system is installed on each machine to make
their communication possible. Distributed Operating systems are much more
complex, large, and sophisticated than Network operating systems because
they also have to take care of varying networking protocols.

Advantages of Distributed Operating System


o The distributed operating system provides sharing of resources.
o This type of system is fault-tolerant.

What is the Process in Operating Systems


In this tutorial, we are going to learn about processes in operating
systems. The process is the most important concept in operating systems:
you will encounter the word 'process' everywhere in this subject.

The main duty of the operating system is to complete each given process
within the stipulated time. So, the term process is very important in the
study of operating systems. Now, let us learn everything about processes in
depth.

Definition of Process
Basically, a process is a program in execution.

An active program currently running on the operating system is known as a
process. The process is the basis of all computation. Although a process is
closely related to program code, it is not the same as the code: a process
is an "active" entity, in contrast to the program, which is thought of as a
"passive" entity. The attributes held by a process include the hardware
state, the RAM, the CPU state, and other attributes.

Process in an Operating System


A process is actively running software or computer code. Any process must
be carried out in a precise order. An entity that describes the fundamental
unit of work to be implemented in the system is referred to as a process.

In other words, we create computer programs as text files that, when


executed, create processes that carry out all of the tasks listed in the
program.

When a program is loaded into memory, it becomes a process, which can be
divided into four components: stack, heap, text, and data. A minimal sketch
of these regions is shown below.
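The four regions can be observed from an ordinary program. Below is a
minimal C sketch (the exact addresses and layout vary by operating system
and compiler) that prints one address from each region:

    #include <stdio.h>
    #include <stdlib.h>

    int initialized = 42;   /* data section: initialized globals */

    int main(void) {        /* main's machine code lives in the text section */
        int local = 7;                            /* stack: locals, call frames */
        int *dynamic = malloc(sizeof *dynamic);   /* heap: run-time allocation */
        *dynamic = 99;

        printf("text : %p (main)\n", (void *)main);
        printf("data : %p (initialized global)\n", (void *)&initialized);
        printf("heap : %p (malloc'd block)\n", (void *)dynamic);
        printf("stack: %p (local variable)\n", (void *)&local);

        free(dynamic);
        return 0;
    }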

Process Control Block


An operating system manages process creation, scheduling, and termination
with the help of the Process Control Block. The Process Control Block
(PCB), which is maintained by the operating system, helps manage how
processes operate. Every OS process has a Process Control Block associated
with it. A PCB keeps track of a process by storing information about
various things, including its state, I/O status, and CPU scheduling.

Now, let us understand the Process Control Block with the help of the
components present in the Process Control Block.

A Process Control Block consists of the following (a minimal struct sketch follows the list):

1. Process ID
2. Process State
3. Program Counter
4. CPU Registers
5. CPU Scheduling Information
6. Accounting and Business Information
7. Memory Management Information
8. Input Output Status Information
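Before looking at each field, here is a minimal, hypothetical C struct that
mirrors the eight components above; the field names and sizes are invented
for illustration, and real kernels (for example, Linux's struct
task_struct) hold far more:

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;              /* 1. process ID */
        enum proc_state state;            /* 2. process state */
        unsigned long   program_counter;  /* 3. next instruction address */
        unsigned long   registers[16];    /* 4. saved CPU registers */
        int             priority;         /* 5. CPU scheduling information */
        unsigned long   cpu_time_used;    /* 6. accounting information */
        unsigned long   base, limit;      /* 7. memory-management registers */
        int             open_files[16];   /* 8. I/O status information */
    };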

Now, let us understand each component in detail.

1) Process ID

It is a Identification mark which is present for the Process. This is very useful
for finding the process. It is also very useful for identifying the process also.

2) Process State

A process passes through several states during its lifetime. Let us look at
each state in detail.

i) New State

A program that is about to be taken up by the operating system and loaded
into main memory is in the new state.

ii) Ready State

The ready state, in which a process waits for the CPU to be assigned, is
the first state a process enters after being created. The operating system
pulls new processes from secondary memory and places them in main memory.

Processes that are in main memory and are prepared for execution are called
ready-state processes. Numerous processes may be in the ready state at any
moment.

iii) Running State

The Operating System will select one of the processes from the ready state
based on the scheduling mechanism. As a result, if our system only has one
CPU, there will only ever be one process operating at any given moment. We
can execute n processes concurrently in the system if there are n
processors.

iv) Waiting or Blocking State

Depending on the scheduling mechanism or the inherent behavior of the


process, a process can go from the Running state to the Block or Wait states.
The OS switches a process to the block or wait state and allots the CPU to
the other processes while it waits for a specific resource to be allocated or
for user input.

v) Terminated State

A process enters the termination state once it has completed its execution.
The operating system will end the process and erase the whole context of
the process (Process Control Block).

3) Program Counter

The address of the next instruction to be executed from memory is stored in
a CPU register called the program counter (PC). It is a digital counter
needed both for fast task execution and for tracking the current point of
execution.

A program counter is also called an instruction counter, instruction
pointer, instruction address register, or sequence control register.

4) CPU Registers

When the process is in the running state, the contents of the processor
registers are kept here. Accumulators, index and general-purpose registers,
instruction registers, and condition code registers are the main categories
of CPU registers.

5) CPU Scheduling Information

A process must be scheduled for execution; this schedule determines when
the process transitions from ready to running. CPU scheduling information
includes the process priority, scheduling queue pointers (to indicate the
order of execution), and several other scheduling parameters.

6) Accounting and Business Information

Accounting information includes details such as CPU usage, the amount of
real time used by the process, the number of jobs or processes, and so on.

7) Memory Management Information

The memory management information section contains the page and segment
tables and the values of the base and limit registers. It depends on the
memory system used by the operating system.
8) Input Output Status Information

This section contains input/output-related information, such as the status
of I/O requests and the devices assigned to the process.

Thread in Operating System





A thread is a single sequence stream within a process. Threads are


also called lightweight processes as they possess some of the
properties of processes. Each thread belongs to exactly one process.
In an operating system that supports multithreading, a process can consist
of many threads. However, threads run truly in parallel only if there is
more than one CPU; otherwise, two threads must context switch to share a
single CPU.
What is Thread in Operating Systems?
Within a process, a thread refers to a single sequential activity being
executed; such an activity is also known as a thread of execution or thread
of control. Any operating system process can execute threads, so we can say
that a process can have multiple threads.
Why Do We Need Thread?
 Threads run in parallel, improving the application performance. Each
such thread has its own CPU state and stack, but they share the address
space of the process and the environment.
 Threads can share common data so they do not need to use inter-
process communication. Like the processes, threads also have
states like ready, executing, blocked, etc.
 Priority can be assigned to the threads just like the process, and
the highest priority thread is scheduled first.
 Each thread has its own Thread Control Block (TCB). Like a process, a
thread undergoes context switches, and its register contents are saved in
the TCB. As threads share the same address space and resources,
synchronization is also required for the various activities of the threads.
A minimal thread-creation sketch follows below.
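As a small, hedged illustration (using POSIX threads; compile with
-pthread), the sketch below creates two threads that share one global
variable while each runs on its own stack:

    #include <pthread.h>
    #include <stdio.h>

    int shared = 0;   /* in the process's address space, visible to all threads */

    void *worker(void *arg) {
        const char *name = arg;                       /* per-thread argument */
        printf("%s sees shared = %d\n", name, shared);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        shared = 42;
        pthread_create(&t1, NULL, worker, "thread-1");
        pthread_create(&t2, NULL, worker, "thread-2");
        pthread_join(t1, NULL);   /* wait for both threads to finish */
        pthread_join(t2, NULL);
        return 0;
    }

Both threads print shared = 42, because they share the process's globals;
only their stacks and registers are private.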
Fault Tolerance Defined
Fault tolerance is a process that enables an operating system to respond to a failure in
hardware or software. This fault-tolerance definition refers to the system’s ability to continue
operating despite failures or malfunctions.

A fault-tolerant operating system cannot be disrupted by a single point of
failure. It ensures business continuity and the high availability of crucial
applications and systems regardless of any failures.

How Does Fault Tolerance Work?


Fault tolerance can be built into a system to remove the risk of it having a single point of
failure. To do so, the system must have no single component that, if it were to stop working
effectively, would result in the entire system failing.

Fault tolerance is reliant on aspects like load balancing and failover, which remove the
risk of a single point of failure. It will typically be part of the operating system’s interface,
which enables programmers to check the performance of data throughout a transaction.

A fault-tolerance process follows two core models:

Normal functioning

This describes a situation when a fault-tolerant system encounters a fault but continues to
function as usual. This means the system sees no change in performance metrics like
throughput or response time.

Graceful degradation

Other types of fault-tolerant systems will go through graceful degradation of performance


when certain faults occur. That means the impact the fault has on the system’s performance
is proportionate to the fault severity. In other words, a small fault will only have a small
impact on the system’s performance rather than causing the entire system to fail or have
major performance issues.

Benefits of a Fault-tolerance System


The key benefit of fault tolerance is to minimize or avoid the risk of systems becoming
unavailable due to a component error. This is particularly important in critical systems that
are relied on to ensure people’s safety, such as air traffic control, and systems that protect
and secure critical data and high-value transactions.
The core components to improving fault tolerance include:

Diversity

If a system's main electricity supply fails, potentially due to a storm that causes a power
outage or affects a power station, the system cannot rely on that supply alone. In this
event, fault tolerance can be achieved through diversity, which provides electricity from
alternative sources such as backup generators that take over when a main power failure
occurs.

Some diverse fault-tolerance options result in the backup not having the same level of
capacity as the primary source. This may, in some cases, require the system to ensure
graceful degradation until the primary power source is restored.

Redundancy

Fault-tolerant systems use redundancy to remove the single point of failure. The system is
equipped with one or more power supply units (PSUs), which do not need to power the
system when the primary PSU functions as normal. In the event the primary PSU fails or
suffers a fault, it can be removed from service and replaced by a redundant PSU, which
takes over system function and performance.

Alternatively, redundancy can be imposed at a system level, which means an entire


alternate computer system is in place in case a failure occurs.

Replication

Replication is a more complex approach to achieving fault tolerance. It involves using


multiple identical versions of systems and subsystems and ensuring their functions always
provide identical results. If the results are not identical, then a democratic procedure is used
to identify the faulty system. Alternatively, a procedure can be used to check for a system
that shows a different result, which indicates it is faulty.
Replication can either take place at the component level, which involves multiple processors
running simultaneously, or at the system level, which involves identical computer systems
running simultaneously.
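The "democratic procedure" can be as simple as majority voting. Here is a
minimal, illustrative C sketch (assuming three replicas and at most one
faulty one; the values are invented):

    #include <stdio.h>

    /* Return the majority value among three replica results.
       Assumes at most one replica is faulty, so two always agree. */
    int vote(int a, int b, int c) {
        if (a == b || a == c) return a;   /* a agrees with some majority */
        return b;                         /* otherwise b and c must agree */
    }

    int main(void) {
        /* Suppose replica 2 is faulty and returns 99 instead of 7. */
        int r1 = 7, r2 = 99, r3 = 7;
        printf("voted result = %d\n", vote(r1, r2, r3));   /* prints 7 */
        return 0;
    }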
Components of Fault-tolerant Systems
Fault-tolerant systems also use backup components, which automatically replace failed
components to prevent a loss of service. These backup components include:

Hardware systems

Hardware systems can be backed up by systems that are identical or equivalent to them. A
typical example is a server made fault-tolerant by deploying an identical server that runs in
parallel to it and mirrors all its operations, such as the redundant array of inexpensive disks
(RAID), which combines physical disk components to achieve redundancy and improved
performance.

Software systems

Software systems can be made fault-tolerant by backing them up with other software. A
common example is backing up a database that contains customer data to ensure it can
continuously replicate onto another machine. As a result, in the event that a primary
database fails, normal operations will continue because they are automatically replicated
and redirected onto the backup database.

Power sources

Power sources can also be made fault-tolerant by using alternative sources to support
them. One approach is to run devices on an uninterruptible power supply (UPS). Another is
to use backup power generators that ensure storage and hardware, heating, ventilation, and
air conditioning (HVAC) continue to operate as normal if the primary power source fails.

Concurrency in Operating System




Concurrency is the execution of multiple instruction sequences at


the same time. It happens in the operating system when there are
several process threads running in parallel. The running process
threads always communicate with each other through shared
memory or message passing. Concurrency involves the sharing of
resources, which can result in problems like deadlocks and resource
starvation.
Managing concurrency involves techniques like coordinating the
execution of processes, memory allocation, and execution scheduling
to maximize throughput.
There are several motivations for allowing concurrent execution:
 Physical resource sharing: Multiuser environments, since
hardware resources are limited
 Logical resource sharing: Shared files (the same piece of
information)
 Computation speedup: Parallel execution
 Modularity: Dividing system functions into separate processes
Relationship Between Processes of Operating
System
Processes executing in the operating system are of one of the
following two types:
 Independent Processes
 Cooperating Processes
Independent Processes
Its state is not shared with any other process.
 The result of execution depends only on the input state.
 The result of the execution will always be the same for the same
input.
 The termination of the independent process will not terminate
any other.
Cooperating Processes
Its state is shared with other processes.
 The result of the execution depends on the relative execution
sequence and cannot be predicted in advance (it is
non-deterministic).
 The result of the execution will not always be the same for the
same input.
 The termination of a cooperating process may affect other
processes.
Process Operation in Operating System
Most systems support at least two types of operations that can be
invoked on a process: process creation and process deletion.
Process Creation
A parent process can create child processes, which in turn can create
further processes. When more than one process is created, several
possible implementations exist:
 Parent and child can execute concurrently.
 The parent waits until all of its children have terminated.
 The parent and children share all resources in common.
 The children share only a subset of their parent’s resources.
Process Termination
A child process can be terminated in the following ways:
 A parent may terminate the execution of one of its children for the
following reasons:
1. The child has exceeded its allocated resource usage.
2. The task assigned to the child is no longer required.
 If a parent has terminated, then its children must be terminated.
A minimal creation-and-termination sketch follows below.
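On POSIX systems, process creation and termination look like the following
minimal sketch (fork creates the child, exit terminates it, and wait lets
the parent collect it):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* create a child process */
        if (pid < 0) {
            perror("fork");              /* creation failed */
            exit(1);
        }
        if (pid == 0) {                  /* child: runs concurrently with parent */
            printf("child %d running\n", (int)getpid());
            exit(0);                     /* child terminates */
        }
        wait(NULL);                      /* parent waits for the child to finish */
        printf("parent %d reaped its child\n", (int)getpid());
        return 0;
    }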
Principles of Concurrency
Both interleaved and overlapped processes can be viewed as
examples of concurrent processes, and they both present the same
problems.
The relative speed of execution cannot be predicted. It depends on
the following:
 The activities of other processes
 The way operating system handles interrupts
 The scheduling policies of the operating system
Problems in Concurrency
 Sharing global resources: Sharing of global resources safely is
difficult. If two processes both make use of a global variable and
both perform reads and writes on that variable, then the order in
which the various reads and writes are executed is critical.
 Optimal allocation of resources: It is difficult for the operating
system to manage the allocation of resources optimally.
 Locating programming errors: It is very difficult to locate a
programming error because reports are usually not reproducible.
 Locking the channel: It may be inefficient for the operating
system to simply lock a channel and prevent its use by other
processes.
Advantages of Concurrency
 Running of multiple applications: Concurrency enables running
multiple applications at the same time.
 Better resource utilization: Resources that are unused by one
application can be used by other applications.
 Better average response time: Without concurrency, each
application has to be run to completion before the next one can
be run.
 Better performance: It enables better performance. When one
application uses only the processor and another uses only the disk
drive, the time to run both applications concurrently to completion
will be shorter than the time to run each application consecutively.
Drawbacks of Concurrency
 It is required to protect multiple applications from one another.
 It is required to coordinate multiple applications through
additional mechanisms.
 Additional performance overheads and complexities in operating
systems are required for switching among applications.
 Sometimes running too many applications concurrently leads to
severely degraded performance.
Issues of Concurrency
 Non-atomic: Operations that are non-atomic but interruptible by
multiple processes can cause problems.
 Race conditions: A race condition occurs if the outcome depends
on which of several processes gets to a point first.
 Blocking: Processes can block waiting for resources. A process
could be blocked for a long period of time waiting for input from a
terminal. If the process is required to periodically update some
data, this would be very undesirable.
 Starvation: It occurs when a process does not obtain service to
progress.
 Deadlock: It occurs when two processes are blocked and hence
neither can proceed to execute.
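The race-condition issue above is easy to demonstrate. In the minimal
POSIX-threads sketch below (compile with -pthread; the iteration count is
arbitrary), two threads increment a shared counter without any
coordination, so updates are lost:

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;   /* shared global resource */

    void *increment(void *arg) {
        for (int i = 0; i < 1000000; i++)
            counter++;  /* non-atomic: separate load, add, and store steps */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 2000000, but the interleaved updates usually lose some
           increments, so the printed total is typically smaller. */
        printf("counter = %ld\n", counter);
        return 0;
    }

The next section shows how mutual exclusion removes exactly this kind of
inconsistency.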
Mutual Exclusion in Synchronization
During concurrent execution of processes, processes need to enter
the critical section (or the section of the program shared across
processes) at times for execution. It might happen that because of
the execution of multiple processes at once, the values stored in the
critical section become inconsistent. In other words, the values
depend on the sequence of execution of instructions – also known as
a race condition. The primary task of process synchronization is to
get rid of race conditions while executing the critical section.
What is Mutual Exclusion?
Mutual Exclusion is a property of process synchronization that
states that “no two processes can exist in the critical section
at any given point of time“. The term was first coined
by Dijkstra. Any process synchronization technique being used
must satisfy the property of mutual exclusion, without which it
would not be possible to get rid of a race condition.
The need for mutual exclusion comes with concurrency. There are
several kinds of concurrent execution:
 Interrupt handlers
 Interleaved, preemptively scheduled processes/threads
 Multiprocessor clusters, with shared memory
 Distributed systems
Mutual exclusion methods are used in concurrent programming to
avoid the simultaneous use of a common resource, such as a global
variable, by pieces of computer code called critical sections.
The requirement of mutual exclusion is that when process P1 is
accessing a shared resource R1, another process should not be able
to access resource R1 until process P1 has finished its operation
with resource R1.
Examples of such resources include files, I/O devices such as
printers, and shared data structures.
Conditions Required for Mutual Exclusion
Mutual exclusion must satisfy the following four criteria:
 When using shared resources, it is important to ensure mutual
exclusion between the various processes: no two processes may
be in their critical sections at the same time.
 No assumptions should be made about the relative speeds of the
processes.
 A process that is outside its critical section must not block
another process from entering it.
 A process must be able to enter its critical section within a finite
amount of time; processes should never be kept waiting
indefinitely.
Approaches To Implementing Mutual
Exclusion
 Software Method: Leave the responsibility to the processes
themselves. These methods are usually highly error-prone and
carry high overheads.
 Hardware Method: Special-purpose machine instructions are
used for accessing shared resources. This method is faster but
cannot provide a complete solution: hardware solutions cannot
guarantee the absence of deadlock and starvation.
 Programming Language Method: Provide support through the
operating system or through the programming language.
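As an illustration of the last approach, the sketch below, assuming POSIX threads are available, wraps the critical section in a pthread mutex so that only one thread at a time can update the shared variable:

#include <pthread.h>
#include <stdio.h>

long balance = 0;                            /* shared resource */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *deposit(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);           /* entry section: acquire exclusive access */
        balance++;                           /* critical section */
        pthread_mutex_unlock(&lock);         /* exit section: release the lock */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %ld\n", balance);      /* always 200000 with the mutex in place */
    return 0;
}

If the lock/unlock pair were removed, the two threads would race on balance and the final value would be unpredictable.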
Requirements of Mutual Exclusion
 At any time, only one process is allowed to enter its critical
section.
 The solution is implemented purely in software on a machine.
 A process remains inside its critical section for a bounded time
only.
 No assumption can be made about the relative speeds of
asynchronous concurrent processes.
 A process cannot prevent any other process from entering into a
critical section.
 A process must not be indefinitely postponed from entering its
critical section.
In order to understand mutual exclusion, let’s take an example.
What is the Need for Mutual Exclusion?
An easy way to visualize the significance of mutual exclusion is to
imagine a linked list from which a node must be removed. The node
that lies between two other nodes is deleted by changing the
previous node’s next reference to point to the succeeding node.
To put it simply, whenever node i is to be removed, node (i – 1)’s
next reference is changed to point to node (i + 1) at that time. When
a shared linked list is being used by many threads, two threads may
try to remove two adjacent nodes at the same time: the first thread
changes node (i – 1)’s next reference to point to node (i + 1), while
the second thread changes node i’s next reference to point to node
(i + 2). Although both removal operations appear to complete, the
list does not reach the required state, because node (i + 1) remains
in the list: node (i – 1)’s next reference still points to it.
Now, this situation is called a race condition. Race conditions can be
prevented by mutual exclusion, so that simultaneous updates cannot
happen to the same part of the list.
Example:
In the clothes section of a supermarket, two people are shopping for
clothes.
Boy A decides upon some clothes to buy and heads to the changing
room to try them on. While boy A is inside the changing room, an
‘occupied’ sign hangs on it, indicating that no one else can come in.
Boy B has to use the changing room too, so he has to wait till boy A
is done using it.
Once boy A comes out of the changing room, the sign on it changes
from ‘occupied’ to ‘vacant’, indicating that another person can use
it. Hence, boy B proceeds to use the changing room, while the sign
displays ‘occupied’ again.
The changing room is nothing but the critical section, boy A and boy
B are two different processes, and the sign outside the changing
room indicates the process synchronization mechanism being used.
Introduction of Process Synchronization
Process Synchronization is the coordination of execution of multiple
processes in a multi-process system to ensure that they access
shared resources in a controlled and predictable manner. It aims to
resolve the problem of race conditions and other synchronization
issues in a concurrent system.
The main objective of process synchronization is to ensure that
multiple processes access shared resources without interfering with
each other and to prevent the possibility of inconsistent data due to
concurrent access. To achieve this, various synchronization
techniques such as semaphores, monitors, and critical sections are
used.
In a multi-process system, synchronization is necessary to ensure
data consistency and integrity, and to avoid the risk of deadlocks
and other synchronization problems. Process synchronization is an
important aspect of modern operating systems, and it plays a
crucial role in ensuring the correct and efficient functioning of multi-
process systems.
What is Process?
A process is a program under execution. It includes the program’s
code and all the activity it needs to perform its tasks, such as using
the CPU, memory, and other resources. Think of a process as a task
that the computer is working on, like opening a web browser or
playing a video.
Types of Process
On the basis of synchronization, processes are categorized as one of
the following two types:
 Independent Process: The execution of one process does not
affect the execution of other processes.
 Cooperative Process: A process that can affect or be affected
by other processes executing in the system.
The process synchronization problem arises in the case of
cooperative processes because resources are shared among them.
What is Race Condition?
When more than one process executes the same code or accesses
the same memory or any shared variable, there is a possibility that
the output or the value of the shared variable is wrong; the
processes effectively race against one another, and whichever
finishes first determines the result. This condition is known as a
race condition.
Several processes access and manipulate the same data
concurrently, and the outcome depends on the particular order in
which the accesses take place. A race condition is
a situation that may occur inside a critical section. This happens
when the result of multiple thread execution in the critical section
differs according to the order in which the threads execute. Race
conditions in critical sections can be avoided if the critical section is
treated as an atomic instruction. Also, proper thread synchronization
using locks or atomic variables can prevent race conditions.
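The sketch below, assuming POSIX threads, demonstrates such a race. It is the unsynchronized counterpart of the mutex example shown earlier: counter++ compiles to a non-atomic read-modify-write sequence, so concurrent increments can be lost.

#include <pthread.h>
#include <stdio.h>

long counter = 0;                  /* shared variable, deliberately unprotected */

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter++;                 /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 200000, but interleaved updates may lose increments,
       so a smaller value is frequently printed. */
    printf("counter = %ld\n", counter);
    return 0;
}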
Concurrency in OS
Overview
In the world of modern computing, operating systems (OS) play a critical role in ensuring that a
computer can perform multiple tasks simultaneously. One of the key techniques used to achieve
this is concurrency. Concurrency in OS allows multiple tasks or processes to run concurrently,
providing simultaneous execution and significantly improving system efficiency. However, the
implementation of concurrency in operating systems brings its own set of challenges and
complexities. In this article, we will explore the concept of concurrency in OS, covering its
principles, advantages, limitations, and the problems it presents.

What is Concurrency in OS?
Concurrency in operating systems refers to the ability of an OS to manage and execute multiple
tasks or processes simultaneously. It allows multiple tasks to overlap in execution, giving the
appearance of parallelism even on single-core processors. Concurrency is achieved through
various techniques such as multitasking, multithreading, and multiprocessing.

Multitasking involves the execution of multiple tasks by rapidly switching between them. Each
task gets a time slot, and the OS switches between them so quickly that it seems as if they are
running simultaneously.

Multithreading takes advantage of modern processors with multiple cores. It allows different
threads of a process to run on separate cores, enabling true parallelism within a single process.

Multiprocessing goes a step further by distributing multiple processes across multiple physical
processors or cores, achieving parallel execution at a higher level.

Why Allow Concurrent Execution?

The need for concurrent execution arises from the desire to utilize computer resources
efficiently. Here are some key reasons why concurrent execution is essential:

 Resource Utilization:
Concurrency ensures that the CPU, memory, and other resources are used optimally. Without
concurrency, a CPU might remain idle while waiting for I/O operations to complete, leading to
inefficient resource utilization.
 Responsiveness:
Concurrent systems are more responsive. Users can interact with multiple applications
simultaneously, and the OS can switch between them quickly, providing a smoother user
experience.
 Throughput:
Concurrency increases the overall throughput of the system. Multiple tasks can progress
simultaneously, allowing more work to be done in a given time frame.
 Real-Time Processing:
Certain applications, such as multimedia playback and gaming, require real-time processing.
Concurrency ensures that these applications can run without interruptions, delivering a
seamless experience.

Principles of Concurrency in Operating Systems
To effectively implement concurrency, OS designers adhere to several key principles:

 Process Isolation:
Each process should have its own memory space and resources to prevent interference between
processes. This isolation is critical to maintain system stability.
 Synchronization:
Concurrency introduces the possibility of data races and conflicts. Synchronization mechanisms
like locks, semaphores, and mutexes are used to coordinate access to shared resources and
ensure data consistency.
 Deadlock Avoidance:
OSs implement algorithms to detect and avoid deadlock situations where processes are stuck
waiting for resources indefinitely. Deadlocks can halt the entire system.
 Fairness:
The OS should allocate CPU time fairly among processes to prevent any single process from
monopolizing system resources.

Problems in Concurrency
While concurrency offers numerous benefits, it also introduces a range of challenges and
problems:

 Race Conditions:
They occur when multiple threads or processes access shared resources simultaneously without
proper synchronization. In the absence of synchronization mechanisms, race conditions can lead
to unpredictable behavior and data corruption. This can result in data inconsistencies,
application crashes, or even security vulnerabilities if sensitive data is involved.
 Deadlocks:
A deadlock arises when two or more processes or threads become unable to progress as they
are mutually waiting for resources that are currently held by each other. This situation can bring
the entire system to a standstill, causing disruptions and frustration for users.
 Priority Inversion:
Priority inversion occurs when a lower-priority task temporarily holds a resource that a higher-
priority task needs. This can lead to delays in the execution of high-priority tasks, reducing
system efficiency and responsiveness.
 Resource Starvation:
Resource starvation occurs when some processes are unable to obtain the resources they need,
leading to poor performance and responsiveness for those processes. This can happen if the OS
does not manage resource allocation effectively or if certain processes monopolize resources.

Advantages of Concurrency
Concurrency in operating systems offers several distinct advantages:

 Improved Performance:
Concurrency significantly enhances system performance by effectively utilizing available
resources. With multiple tasks running concurrently, the CPU, memory, and I/O devices are
continuously engaged, reducing idle time and maximizing overall throughput.
 Responsiveness:
Concurrency ensures that users enjoy fast response times, even when juggling multiple
applications. The ability of the operating system to swiftly switch between tasks gives the
impression of seamless multitasking and enhances the user experience.
 Scalability:
Concurrency allows systems to scale horizontally by adding more processors or cores, making it
suitable for both single-core and multi-core environments.
 Fault Tolerance:
Concurrency contributes to fault tolerance, a critical aspect of system reliability. In
multiprocessor systems, if one processor encounters a failure, the remaining processors can
continue processing tasks. This redundancy minimizes downtime and ensures uninterrupted
system operation.

Limitations of Concurrency
Despite its advantages, concurrency has its limitations:

 Complexity:
Debugging and testing concurrent code is often more challenging than sequential code. The
potential for hard-to-reproduce bugs necessitates careful design and thorough testing.
 Overhead:
Synchronization mechanisms introduce overhead, which can slow down the execution of
individual tasks, especially in scenarios where synchronization is excessive.
 Race Conditions:
Dealing with race conditions requires careful consideration during design and rigorous testing to
prevent data corruption and erratic behavior.
 Resource Management:
Balancing resource usage to prevent both resource starvation and excessive contention is a
critical task. Careful resource management is vital to maintain system stability.
Issues of Concurrency
Concurrency introduces several critical issues that OS designers and developers must address:

 Security:
Concurrent execution may inadvertently expose data to unauthorized access or data leaks.
Managing access control and data security in a concurrent environment is a non-trivial task that
demands thorough consideration.
 Compatibility:
Compatibility issues can arise when integrating legacy software into concurrent environments,
potentially limiting their performance.
 Testing and Debugging:
Debugging concurrent code is a tough task. Identifying and reproducing race conditions and
other concurrency-related bugs can be difficult.
 Scalability:
While concurrency can improve performance, not all applications can be easily parallelized.
Identifying tasks that can be parallelized and those that cannot is crucial in optimizing system
performance.

What is Deadlock in OS?
Deadlock is a scenario in operating systems where two or more processes are
unable to proceed because each process is waiting for a resource that is
being held by another process. This situation creates a standstill where no
progress can be made until the deadlock is resolved. Deadlocks can occur in
various systems, including computer networks, distributed systems, and multi-
threaded applications.

In a deadlock, each process is stuck in a waiting state, unable to proceed with
its execution. This occurs when a process requests a resource held by
another process, which is waiting for a resource held by the first process. The
result is a deadlock cycle, where the processes are stuck in a circular
dependency, waiting indefinitely for the resources they need to continue.

Deadlocks can have severe consequences, leading to system crashes, frozen
applications, and unresponsive user interfaces. Therefore, it is crucial to
understand the causes of deadlock and implement strategies to prevent and
handle them effectively.

Necessary Conditions for Deadlock in OS
Deadlocks occur due to the fulfillment of four necessary conditions for
deadlock in operating system known as the Coffman conditions. Let's explore
each of these conditions in detail:

Mutual Exclusion

The mutual exclusion condition states that some resources can only be
accessed by one process at a time. This means that once a process acquires
a resource, other processes are denied access until it is released. For
example, in a multi-threaded application, a critical section of code may be
protected by a lock that can only be held by one thread at a time.

Hold and Wait

The hold and wait condition arises when a process is holding at least one
resource and is waiting to acquire additional resources. In this scenario, a
process may acquire a resource but cannot proceed with its execution
because it requires other resources that are currently held by other processes.
This leads to a situation where processes are stuck, waiting for resources to
be released.

No Preemption

The no preemption condition states that resources cannot be forcibly taken
away from a process. Once a process acquires a resource, it can only release
it voluntarily. This means that a process cannot be interrupted or preempted to
free up its resources and allow other processes to use them. Preemption
could potentially lead to data corruption and inconsistencies if a process is
forcefully interrupted during its execution.

Circular Wait

The circular wait condition occurs when a set of processes is circularly waiting
for resources. In other words, each process waits for a resource that another
process holds in the set, creating a circular dependency. This circular wait
cycle prevents any process from acquiring the necessary resources to
continue its execution.

When all four of these conditions hold simultaneously, a deadlock can occur.
It is important to note that all four conditions must be present for a deadlock to
happen. If any one of these conditions is not met, a deadlock cannot occur.
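A deliberately broken C sketch, assuming POSIX threads, shows how easily all four conditions arise in code: each mutex is held exclusively (mutual exclusion), each thread holds one lock while requesting the other (hold and wait), pthread mutexes are never forcibly revoked (no preemption), and the two threads wait on each other in a cycle (circular wait).

#include <pthread.h>
#include <unistd.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* resource R1 */
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* resource R2 */

void *worker_a(void *arg) {
    pthread_mutex_lock(&r1);      /* holds R1 ... */
    sleep(1);                     /* widen the window in which B grabs R2 */
    pthread_mutex_lock(&r2);      /* ... and waits for R2, held by B */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

void *worker_b(void *arg) {
    pthread_mutex_lock(&r2);      /* holds R2 ... */
    sleep(1);
    pthread_mutex_lock(&r1);      /* ... and waits for R1, held by A: circular wait */
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker_a, NULL);
    pthread_create(&b, NULL, worker_b, NULL);
    pthread_join(a, NULL);        /* once the deadlock occurs, this never returns */
    pthread_join(b, NULL);
    return 0;
}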
What is Deadlock Detection?
Detecting a deadlock is essential to handle and resolve the issue effectively.
Deadlock detection involves periodically monitoring the system's state to
identify if a deadlock has occurred. Several algorithms, such as the resource-
allocation graph and the banker's algorithm, can be used to detect deadlocks.

The resource-allocation graph is a graphical representation of the resources
and processes in the system. It consists of nodes representing processes and
resources and edges representing resource requests and allocations. By
analyzing the graph, it is possible to identify cycles that indicate the
occurrence of a deadlock in OS.

The banker's algorithm is a resource allocation and deadlock avoidance
algorithm. It ensures that a state is safe by simulating the execution of
processes and checking if they can complete their execution without entering
a deadlock state. If a safe state cannot be reached, a deadlock is detected.

Once a deadlock is detected, appropriate measures can be taken to resolve it
and allow the processes to proceed with their execution. Deadlock detection is
an important aspect of deadlock management, as it provides insights into the
system's current state and helps make informed decisions for deadlock
resolution.

Deadlock Prevention
Preventing deadlocks is a proactive approach that focuses on eliminating one
or more of the Coffman conditions to ensure that deadlocks cannot occur.
Let's explore some strategies for preventing deadlock in an
operating system:

Mutual Exclusion Elimination

Mutual exclusion is a necessary condition for deadlock, as it restricts
resources to being accessed by only one process at a time. The possibility of
deadlocks can be reduced by eliminating mutual exclusion, such as allowing
multiple processes to access a resource simultaneously. However, this
strategy may not be feasible in all scenarios, especially when resources are
not shareable.
Hold and Wait Avoidance

The hold and wait condition can be avoided by adopting a strategy where a
process requests all the required resources before starting its execution. This
ensures that a process does not hold any resources while waiting for
additional resources, reducing the chances of a deadlock. However, this
approach may result in resource underutilization and decreased system
efficiency.

No Preemption

To prevent deadlocks, resources can be preempted or forcefully taken away
from a process when required by another process. This approach allows the
system to reallocate resources dynamically and prioritize processes based on
their needs. However, preemption can introduce complexity and potential data
integrity issues, making it challenging to implement in certain contexts.

Circular Wait Resolution

By enforcing a total ordering of resource requests, circular waits can be
eliminated. This means that processes must request resources in a
predetermined order, preventing the formation of circular dependency chains.
However, this approach requires careful resource management and
coordination to ensure the correct order of resource requests.
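In code, the total ordering can be realized simply by ranking the locks and requiring every thread to acquire them in ascending rank. The C sketch below is a corrected version of the deadlock example shown earlier:

#include <pthread.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* rank 1: always acquired first */
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* rank 2: always acquired second */

/* Every thread follows the same acquisition order, so no cycle of
   waiting threads (and hence no circular wait) can ever form. */
void use_both_resources(void) {
    pthread_mutex_lock(&r1);
    pthread_mutex_lock(&r2);
    /* ... work with both resources ... */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
}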

Preventing deadlocks can be challenging, as it involves carefully analyzing the
system's resource allocation policies and making changes to the design and
implementation of the system. Prevention strategies should be evaluated
based on their feasibility, impact on system performance, and ability to
eliminate the conditions necessary for deadlock.

Deadlock Avoidance: Prevention of Deadlock in OS
Deadlock avoidance is a more flexible approach to dealing with deadlocks. It
focuses on dynamically granting resource requests based on the predicted
behavior of processes. By analyzing the resource needs of processes and
predicting future resource requests, the system can avoid granting resources
that may lead to a deadlock.

One commonly used algorithm for deadlock avoidance is the banker's
algorithm. This algorithm uses a mathematical model to determine if a
resource allocation will result in a deadlock in OS. It simulates the execution
of processes and checks if they can complete their execution without entering
a deadlock state. If a safe state can be reached, the resource allocation is
considered safe and can be granted.

The banker's algorithm requires information about the maximum resource
requirements of each process, the currently allocated resources, and the
available resources in the system. Based on this information, the algorithm
analyzes whether a deadlock is possible. If a deadlock is detected, the system
can choose to delay granting resources until a safe state can be achieved.

Deadlock avoidance algorithms can be complex and require substantial
computational resources. They need to balance the need for resource
allocation with the prevention of deadlocks. While avoidance strategies can be
effective in certain scenarios, they may not be suitable for all systems due to
the overhead of predicting resource needs and ensuring a safe state.

Deadlock Recovery - Methods For Handling Deadlock in OS
Deadlock recovery becomes necessary when deadlock prevention or
avoidance strategies are not applicable or unsuccessful. Deadlock recovery
involves taking corrective actions to resolve the deadlock and allow the
processes to continue their execution. There are two primary approaches for
deadlock recovery:

1. Process Termination

In this approach, the operating system terminates one or more processes
involved in the deadlock. The resources held by those processes are released
by terminating the processes, allowing other processes to proceed. However,
process termination can result in data loss or inconsistency, as the terminated
processes may have made partial progress before the deadlock occurred.

When selecting processes for termination, several factors can be considered,
such as the process's priority, the progress made by the process, and the
resources consumed by the process. By carefully selecting the processes to
terminate, the impact on the system can be minimized.

2. Resource Preemption

Resource preemption involves forcibly taking resources from one or more
processes involved in the deadlock. The operating system can break the
circular dependency by preempting resources and allowing the remaining
processes to proceed. However, resource preemption can be complex and
may require the system to roll back the affected processes to a safe state
before resuming their execution.

The selection of processes and resources for preemption requires careful
consideration to minimize the impact on system performance and ensure fair
resource allocation. Factors such as the number of resources held by a
process and the amount of time the process has consumed can be
considered when deciding which processes and resources to preempt.

Deadlock recovery strategies aim to resolve deadlocks and restore the system
to a consistent state. However, both process termination and resource
preemption strategies have their limitations and may result in performance
degradation or data loss. Therefore, careful planning and consideration are
necessary when implementing deadlock recovery mechanisms.

Difference Between Deadlock and Starvation
While deadlock and starvation are both issues that can occur in operating
systems, they have distinct characteristics and implications. Here are the key
differences between deadlock and starvation:

Deadlock
 Deadlock occurs when two or more processes are unable to proceed because each
process is waiting for a resource that is being held by another process.
 Deadlock is an infinite waiting situation where processes are stuck in a circular
dependency and cannot progress.
 All the necessary conditions for deadlock, including mutual exclusion, hold and wait, no
preemption, and circular wait, must be fulfilled for a deadlock to occur.
 Deadlock can result in system crashes, frozen applications, and unresponsive user
interfaces.
Starvation
 Starvation occurs when a low-priority process is continuously denied access to a
resource while high-priority processes are granted access.
 Starvation is a long waiting situation but is not infinite like a deadlock in OS.
 Starvation can occur due to uncontrolled priority and resource management, where
certain processes are consistently prioritized over others.
 Starvation does not necessarily lead to a system crash or frozen applications, but it can
result in decreased system performance and unfair resource allocation.

It is important to distinguish between deadlock and starvation as they require
different approaches for resolution. Deadlock in operating system must be
detected and resolved to allow processes to proceed, while starvation may
require resource allocation policies and priority management adjustments.

Advantages and Disadvantages of Deadlock in OS
Like any concept or strategy, deadlocks have both advantages and
disadvantages. Let's explore the pros and cons of deadlocks:

Advantages of Deadlock in OS
 Deadlocks can be useful in certain scenarios, such as processes that perform a single
burst of activity and do not require resource sharing.
 Deadlocks can provide simplicity and efficiency in systems where the correctness of
data is more important than overall system performance.
 Deadlocks can be enforced via compile-time checks, eliminating the need for runtime
computation and reducing the chances of unexpected system behavior.

Disadvantages of Deadlock in OS
 Deadlocks can cause system crashes, frozen applications, and unresponsive user
interfaces, leading to a poor user experience.
 Deadlocks may result in delays in process initiation and overall system performance
degradation.
 Deadlocks can preclude incremental resource requests and disallow processes from
making progress.
 Deadlocks may require inherent preemption, resulting in losses and potential data
integrity issues.

It is important to consider these advantages and disadvantages
when designing and implementing operating systems. While
deadlock-prone designs can offer simplicity and correctness in
certain scenarios, they require careful resource management and
can significantly degrade system performance and user experience.
What exactly does starvation mean in operating
systems?
In the context of operating systems, "starvation" refers to a situation where a process or
a resource is unable to make progress or access a particular resource it needs due to
the allocation of resources to other processes or tasks. This can lead to unfairness and
inefficiency in resource allocation, and it's a problem that can occur in multi-tasking or
multi-threaded environments where multiple processes or threads compete for
resources.
Starvation can occur in various scenarios, and the most common type is "resource
starvation," where a process is unable to obtain a necessary resource, such as CPU
time, memory, or I/O resources, for an extended period. This typically occurs because
other processes or threads are monopolizing the resource, preventing fair access for
others. Resource allocation mechanisms within an operating system, like scheduling
algorithms, are responsible for managing and preventing starvation.
What Causes Starvation in OS?
One of the main causes of starvation is an unfair scheduling policy. Some scheduling
algorithms, such as the Priority Scheduling algorithm, favor high-priority processes over
low-priority ones. If the system is busy with high-priority processes, the low-priority
processes might be left waiting indefinitely.
Another cause could be resource allocation issues. If a certain process holds a resource
that another process needs to continue execution and does not release it, the waiting
process can starve.
Example of Starvation In OS
An example of starvation in an operating system is when a low-priority background task,
such as a file backup process, constantly loses access to CPU resources because a
high-priority real-time process, like an emergency response system, is given
precedence. This can lead to the backup process making very slow progress or being
unable to complete its task, which is a form of starvation.
How to Prevent Starvation in OS?
Preventing starvation involves ensuring fair allocation of CPU time and other resources.
Various techniques can be used for this purpose:
Aging
Aging is a technique where the priorities of processes are gradually increased the
longer they wait. This way, even low-priority processes will eventually have their priority
elevated enough to be executed, preventing starvation.
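A hypothetical C sketch of aging (the struct proc layout and the tick-driven call are illustrative assumptions, not a real kernel interface): on every scheduler tick, each waiting process gains priority, so a long-waiting low-priority process eventually outranks newly arrived high-priority work.

struct proc {
    int priority;    /* assumed convention: larger value = dispatched sooner */
    int waiting;     /* nonzero while the process sits in the ready queue */
};

/* Called on every timer tick: every waiting process gains one unit of
   priority, so no process can be postponed indefinitely. */
void age_priorities(struct proc table[], int n) {
    for (int i = 0; i < n; i++)
        if (table[i].waiting)
            table[i].priority++;
}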
Round Robin Scheduling
Round Robin is a scheduling algorithm where each process gets a small unit of CPU
time (time quantum). It cycles through the queue of processes, giving each one its fair
share of CPU time, thus preventing any one process from starving.
Resource Reservation
Resource Reservation is a strategy where each process is guaranteed a minimum
amount of resources. By ensuring each process receives a certain minimum share, no
process goes indefinitely without resources, thus preventing starvation.
Banker's Algorithm in Operating System (OS)
The Banker's algorithm is used to avoid deadlock and to allocate
resources safely to each process in the computer system. Before granting a
request, it checks whether the resulting state of the system is safe (the
'S-state' check) and allows the allocation only if it is. It also helps the
operating system to share the resources successfully between all the
processes. The algorithm is so named because it mirrors how a bank decides
whether a loan can be safely sanctioned: the bank never allocates its cash in
a way that could leave it unable to meet the needs of all its customers. In
this section, we will learn the Banker's Algorithm in detail and solve
problems based on it. To understand the Banker's Algorithm, first we will see
a real-world example of it.

Suppose the number of account holders in a particular bank is 'n', and the
total money in the bank is 'T'. If an account holder applies for a loan, the
bank first subtracts the loan amount from the total cash and approves the
loan only if the remaining cash is still sufficient to serve its other
customers. This precaution is taken so that if another person later applies
for a loan or withdraws some amount from the bank, the bank can manage and
operate everything without any disruption to the functionality of the
banking system.

Similarly, it works in an operating system. When a new process is created
in a computer system, the process must declare to the operating system the
maximum number of instances of each resource type it may ever request.
Based on this information, the operating system decides in which sequence
processes should execute or wait so that no deadlock occurs in the system.
Therefore, it is also known as the deadlock avoidance algorithm or deadlock
detection scheme in the operating system.

Advantages
Following are the essential characteristics of the Banker's algorithm:

1. It contains various resources that meet the requirements of each process.
2. Each process should provide information to the operating system for
upcoming resource requests, the number of resources, and how long the
resources will be held.
3. It helps the operating system manage and control process requests for each
type of resource in the computer system.
4. The algorithm has a Max attribute that indicates the maximum number of
resources each process can hold in the system.

Disadvantages
1. It requires a fixed number of processes; no additional processes can be
started in the system while it is executing.
2. The algorithm does not allow a process to change its maximum resource
needs while processing its tasks.
3. Each process has to know and state its maximum resource requirement in
advance for the system.
4. Resource requests are guaranteed to be granted within a finite time,
though that bound may be long.
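The heart of the algorithm is its safety check. Below is a compact C sketch of that check, using hypothetical example data for 5 processes and 3 resource types; Need is computed as Max minus Allocation, and a state is safe if some sequence lets every process finish.

#include <stdio.h>
#include <stdbool.h>

#define P 5   /* number of processes */
#define R 3   /* number of resource types */

/* Returns true and prints a safe sequence if the state is safe. */
bool is_safe(int avail[R], int max[P][R], int alloc[P][R]) {
    int need[P][R], work[R], sequence[P], count = 0;
    bool finished[P] = { false };

    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];   /* Need = Max - Allocation */
    for (int j = 0; j < R; j++)
        work[j] = avail[j];                         /* Work = Available */

    while (count < P) {
        bool found = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                          /* pretend Pi runs to completion */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];         /* Pi releases all its resources */
                finished[i] = true;
                sequence[count++] = i;
                found = true;
            }
        }
        if (!found) return false;                   /* nobody can finish: unsafe state */
    }
    printf("safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", sequence[i]);
    printf("\n");
    return true;
}

int main(void) {
    int avail[R]    = {3, 3, 2};                    /* hypothetical snapshot */
    int max[P][R]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    if (!is_safe(avail, max, alloc))
        printf("system is in an unsafe state\n");
    return 0;
}

For this data the check succeeds (one safe sequence is P1, P3, P4, P0, P2); if the loop ever fails to find a runnable process, the state is unsafe and the request that would produce it should be delayed.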

Memory Management in Operating System
The term memory can be defined as a collection of data in a specific
format. It is used to store instructions and process data. The
memory comprises a large array or group of words or bytes, each
with its own location. The primary purpose of a computer system is
to execute programs. These programs, along with the information
they access, should be in the main memory during execution.
The CPU fetches instructions from memory according to the value of
the program counter.
To achieve a degree of multiprogramming and proper utilization of
memory, memory management is important. Many memory
management methods exist, reflecting various approaches, and the
effectiveness of each algorithm depends on the situation.
Here, we will cover the following memory management topics:
 What is Main Memory?
 What is Memory Management?
 Why Memory Management is Required?
 Logical Address Space and Physical Address Space
 Static and Dynamic Loading
 Static and Dynamic Linking
 Swapping
 Contiguous Memory Allocation
o Memory Allocation
o First Fit
o Best Fit
o Worst Fit
o Fragmentation
o Internal Fragmentation
o External Fragmentation
o Paging
Before we start with memory management, let us look at what main
memory is.
What is Main Memory?
The main memory is central to the operation of a Modern Computer.
Main Memory is a large array of words or bytes, ranging in size from
hundreds of thousands to billions. Main memory is a repository of
rapidly available information shared by the CPU and I/O devices.
Main memory is the place where programs and information are kept
when the processor is effectively utilizing them. Main memory is
associated with the processor, so moving instructions and
information into and out of the processor is extremely fast. Main
memory is also known as RAM (Random Access Memory). This
memory is volatile. RAM loses its data when a power interruption
occurs.
What is Memory Management?
In a multiprogramming computer, the Operating System resides in a
part of memory, and the rest is used by multiple processes. The task
of subdividing the memory among different processes is called
Memory Management. Memory management is a method in the
operating system to manage operations between main memory and
disk during process execution. The main aim of memory
management is to achieve efficient utilization of memory.
Why Memory Management is Required?
 Allocate and de-allocate memory before and after process
execution.
 To keep track of used memory space by processes.
 To minimize fragmentation issues.
 To ensure proper utilization of main memory.
 To maintain data integrity during process execution.
Next, we discuss the concepts of logical address space and
physical address space.
Logical and Physical Address Space
 Logical Address Space: An address generated by the CPU is
known as a “Logical Address”. It is also known as a Virtual
address. Logical address space can be defined as the size of the
process. A logical address can be changed.
 Physical Address Space: An address seen by the memory unit
(i.e the one loaded into the memory address register of the
memory) is commonly known as a “Physical Address”. A Physical
address is also known as a Real address. The set of all physical
addresses corresponding to these logical addresses is known as
Physical address space. A physical address is computed by MMU.
The run-time mapping from virtual to physical addresses is done
by a hardware device Memory Management Unit(MMU). The
physical address always remains constant.
Static and Dynamic Loading
Loading a process into the main memory is done by a loader. There
are two different types of loading:
 Static Loading: Static Loading is basically loading the entire
program into a fixed address. It requires more memory space.
 Dynamic Loading: Without dynamic loading, the entire program
and all data of a process must be in physical memory for the
process to execute, so the size of a process is limited to the size
of physical memory. To gain proper memory utilization, dynamic
loading is used: a routine is not loaded until it is called. All
routines reside on disk in a relocatable load format. One of the
advantages of dynamic loading is that a routine that is never used
is never loaded; this is useful when large amounts of code are
needed to handle infrequently occurring cases.
Static and Dynamic Linking
To perform a linking task a linker is used. A linker is a program that
takes one or more object files generated by a compiler and
combines them into a single executable file.
 Static Linking: In static linking, the linker combines all
necessary program modules into a single executable program. So
there is no runtime dependency. Some operating systems support
only static linking, in which system language libraries are treated
like any other object module.
 Dynamic Linking: The basic concept of dynamic linking is
similar to dynamic loading. In dynamic linking, “Stub” is included
for each appropriate library routine reference. A stub is a small
piece of code. When the stub is executed, it checks whether the
needed routine is already in memory or not. If not available then
the program loads the routine into memory.
Swapping
When a process is executed, it must reside in
memory. Swapping is the act of temporarily moving a process from
main memory to secondary memory, main memory being fast
compared to secondary memory. Swapping allows more processes
to be run than can fit into memory at one time. The main cost of
swapping is transfer time, and the total transfer time is directly
proportional to the amount of memory swapped. Swapping is also
known as roll-out, roll-in: if a higher priority process arrives
and wants service, the memory manager can swap out a lower
priority process and then load and execute the higher priority
process. After the higher priority work finishes, the lower priority
process is swapped back into memory and continues its execution.
Memory Management with Monoprogramming (Without Swapping)
This is the simplest memory management approach: the memory is
divided into two sections:
 One part for the operating system
 The second part for the user program
Fence Register
+------------------+--------------+
| operating system | user program |
+------------------+--------------+
 In this approach, the operating system keeps track of the first and
last location available for the allocation of the user program
 The operating system is loaded either at the bottom or at the top of memory
 Interrupt vectors are often loaded in low memory therefore, it
makes sense to load the operating system in low memory
 Sharing of data and code does not make much sense in a single
process environment
 The Operating system can be protected from user programs with
the help of a fence register.
Advantages of this approach
 It is a simple management approach
Disadvantages of this approach
 It does not support multiprogramming
 Memory is wasted
Multiprogramming with Fixed Partitions (Without Swapping)
 A memory partition scheme with a fixed number of partitions was
introduced to support multiprogramming. This scheme is based on
contiguous allocation
 Each partition is a block of contiguous memory
 Memory is partitioned into a fixed number of partitions.
 Each partition is of fixed size
Example: memory is partitioned into 5 regions; one region is
reserved for the operating system and the remaining four
partitions are for user programs.
Fixed Size Partitioning
+------------------+
| Operating System |
+------------------+
|        p1        |
+------------------+
|        p2        |
+------------------+
|        p3        |
+------------------+
|        p4        |
+------------------+
Partition Table
Once partitions are defined, the operating system keeps track of the
status of memory partitions through a data structure called a
partition table.
Sample Partition Table
Starting Address of Partition | Size of Partition | Status
0k                            | 200k              | allocated
200k                          | 100k              | free
300k                          | 150k              | free
450k                          | 250k              | allocated

Logical vs Physical Address
An address generated by the CPU is commonly referred to as a
logical address, while the address seen by the memory unit is known
as the physical address. A logical address can be mapped to a
physical address by hardware with the help of a base register; this is
known as dynamic relocation of memory references.
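A toy C sketch of that mapping, with an assumed base (relocation register) of 14000 and a limit of 4000: each logical address is checked against the limit and then relocated by adding the base.

#include <stdio.h>

#define BASE  14000L   /* relocation register: where the process starts in memory */
#define LIMIT  4000L   /* size of the process's logical address space */

/* Translate a logical address the way a base/limit MMU would. */
long translate(long logical) {
    if (logical < 0 || logical >= LIMIT)
        return -1;                 /* out of range: the MMU would trap to the OS */
    return BASE + logical;         /* dynamic relocation */
}

int main(void) {
    printf("logical 346  -> physical %ld\n", translate(346));   /* 14346 */
    printf("logical 5000 -> physical %ld\n", translate(5000));  /* -1: trap */
    return 0;
}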
Contiguous Memory Allocation
The main memory should accommodate both the operating
system and the different client processes. Therefore, the allocation
of memory becomes an important task in the operating system. The
memory is usually divided into two partitions: one for the
resident operating system and one for the user processes. We
normally need several user processes to reside in memory
simultaneously. Therefore, we need to consider how to allocate
available memory to the processes that are in the input queue
waiting to be brought into memory. In contiguous memory allocation,
each process is contained in a single contiguous segment of
memory.
Memory Allocation
To gain proper memory utilization, memory must be allocated in an
efficient manner. One of the simplest methods for
allocating memory is to divide memory into several fixed-sized
partitions and each partition contains exactly one process. Thus, the
degree of multiprogramming is obtained by the number of
partitions.
 Multiple partition allocation: In this method, a process is
selected from the input queue and loaded into a free partition.
When the process terminates, the partition becomes available for
other processes.
 Variable partition allocation: In this method, the operating
system maintains a table that indicates which parts of memory
are available and which are occupied by processes. Initially, all
memory is available for user processes and is considered one
large block of available memory, known as a “Hole”. When a
process arrives and needs memory, we search for a hole that is
large enough to store the process. If one is found, we allocate the
required memory to the process and keep the rest available to
satisfy future requests.
When allocating memory, the dynamic storage allocation problem
arises: how to satisfy a request of size n from a list of free holes.
There are several solutions to this problem:
First Fit
In First Fit, the first free hole that is large enough to fulfil the
requirement of the process is allocated.
Here, a 40 KB memory block is the first available free hole that can
store process A (size 25 KB), because the first two blocks do not
have sufficient memory space.
Best Fit
In Best Fit, we allocate the smallest hole that is big enough for the
process’s requirements. For this, we must search the entire list,
unless the list is ordered by size.
Here, we traverse the complete list and find that the last hole,
25 KB, is the most suitable hole for process A (size 25 KB). In this
method, memory utilization is maximum as compared to other
memory allocation techniques.
Worst Fit
In Worst Fit, we allocate the largest available hole to the process.
This method produces the largest leftover hole.
Here, process A (size 25 KB) is allocated to the largest available
memory block, which is 60 KB. Inefficient memory utilization is the
major issue with worst fit.
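The three strategies differ only in which adequate hole they choose. The C sketch below, using a hypothetical list of free hole sizes, makes the contrast explicit:

#include <stdio.h>

/* Return the index of the hole chosen for a request of `size` KB,
   or -1 if no hole is large enough. strategy: 'f' = first, 'b' = best, 'w' = worst. */
int choose_hole(int holes[], int n, int size, char strategy) {
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (holes[i] < size) continue;                        /* hole too small */
        if (strategy == 'f') return i;                        /* first fit stops at once */
        if (chosen == -1 ||
            (strategy == 'b' && holes[i] < holes[chosen]) ||  /* best: smallest adequate */
            (strategy == 'w' && holes[i] > holes[chosen]))    /* worst: largest hole */
            chosen = i;
    }
    return chosen;
}

int main(void) {
    int holes[] = {10, 20, 40, 60, 25};   /* hypothetical free holes, in KB */
    int request = 25;
    printf("first fit -> hole %d\n", choose_hole(holes, 5, request, 'f'));  /* 2: 40 KB */
    printf("best fit  -> hole %d\n", choose_hole(holes, 5, request, 'b'));  /* 4: 25 KB */
    printf("worst fit -> hole %d\n", choose_hole(holes, 5, request, 'w'));  /* 3: 60 KB */
    return 0;
}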
Fragmentation
Fragmentation occurs when processes are loaded into and removed
from memory, leaving behind small free holes. These holes cannot
be assigned to new processes because they are not combined or do
not fulfill the memory requirement of the process. To achieve a
higher degree of multiprogramming, we must reduce this waste of
memory, i.e. the fragmentation problem. Operating systems
distinguish two types of fragmentation:
1. Internal fragmentation: Internal fragmentation occurs when a
memory block allocated to a process is larger than the requested
size. The unused space left inside the block creates the internal
fragmentation problem. Example: Suppose fixed partitioning is
used, with blocks of 3 MB, 6 MB, and 7 MB in memory. A new
process p4 of size 2 MB arrives and demands a block of memory.
It gets the 3 MB block, but 1 MB of that block is wasted and
cannot be allocated to any other process. This is called internal
fragmentation.
2. External fragmentation: In external fragmentation, there is
enough free memory in total, but we cannot assign it to a process
because the free blocks are not contiguous. Example: Continuing
the example above, suppose three processes p1, p2, and p3 arrive
with sizes 2 MB, 4 MB, and 7 MB respectively, and are allocated
the blocks of size 3 MB, 6 MB, and 7 MB. After allocation, p1’s
and p2’s blocks leave 1 MB and 2 MB unused respectively.
Suppose a new process p4 arrives and demands a 3 MB block of
memory; that much free space exists in total, but we cannot
assign it because it is not contiguous. This is called external
fragmentation.
Both the first-fit and best-fit systems for memory allocation are
affected by external fragmentation. To overcome the external
fragmentation problem Compaction is used. In the compaction
technique, all free memory space combines and makes one large
block. So, this space can be used by other processes effectively.
Another possible solution to the external fragmentation is to allow
the logical address space of the processes to be noncontiguous,
thus permitting a process to be allocated physical memory wherever
the latter is available.
Paging
Paging is a memory management scheme that eliminates the need
for a contiguous allocation of physical memory. This scheme permits
the physical address space of a process to be non-contiguous.
 Logical Address or Virtual Address (represented in
bits): An address generated by the CPU.
 Logical Address Space or Virtual Address Space
(represented in words or bytes): The set of all logical
addresses generated by a program.
 Physical Address (represented in bits): An address actually
available on a memory unit.
 Physical Address Space (represented in words or
bytes): The set of all physical addresses corresponding to the
logical addresses.
The address generated by the CPU is divided into:
 Page Number (p): Number of bits required to represent the
pages in the logical address space.
 Page Offset (d): Number of bits required to represent a
particular word within a page, i.e. determined by the page size of
the logical address space.
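For example, assuming a page size of 4 KB (2^12 bytes), the page number and offset can be extracted from a logical address with a shift and a mask:

#include <stdio.h>

#define OFFSET_BITS 12                              /* assumed: 4 KB pages */
#define PAGE_SIZE   (1UL << OFFSET_BITS)

int main(void) {
    unsigned long logical = 20500;                  /* hypothetical logical address */
    unsigned long p = logical >> OFFSET_BITS;       /* page number */
    unsigned long d = logical & (PAGE_SIZE - 1);    /* page offset */
    printf("page %lu, offset %lu\n", p, d);         /* prints: page 5, offset 20 */
    return 0;
}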
Operating System - Virtual Memory
A computer can address more memory than the amount physically installed on
the system. This extra memory is actually called virtual memory and it is a
section of a hard disk that's set up to emulate the computer's RAM.

The main visible advantage of this scheme is that programs can be larger than
physical memory. Virtual memory serves two purposes. First, it allows us to
extend the use of physical memory by using disk. Second, it allows us to have
memory protection, because each virtual address is translated to a physical
address.

Following are situations in which the entire program is not required to be
loaded fully in main memory:

 User written error handling routines are used only when an error occurred
in the data or computation.
 Certain options and features of a program may be used rarely.
 Many tables are assigned a fixed amount of address space even though
only a small amount of the table is actually used.
 The ability to execute a program that is only partially in memory would
confer many benefits:
 Fewer I/O operations would be needed to load or swap each user program
into memory.
 A program would no longer be constrained by the amount of physical
memory that is available.
 Each user program could take less physical memory, so more programs
could be run at the same time, with a corresponding increase in CPU
utilization and throughput.

In modern microprocessors intended for general-purpose use, a memory
management unit, or MMU, is built into the hardware. The MMU's job is to
translate virtual addresses into physical addresses.

Virtual memory is commonly implemented by demand paging. It can also be
implemented in a segmentation system. Demand segmentation can also be
used to provide virtual memory.

Demand Paging
A demand paging system is quite similar to a paging system with swapping
where processes reside in secondary memory and pages are loaded only on
demand, not in advance. When a context switch occurs, the operating system
does not copy any of the old program’s pages out to the disk or any of the new
program’s pages into the main memory. Instead, it just begins executing the new
program after loading the first page and fetches that program’s pages as they
are referenced.

While executing a program, if the program references a page which is not
available in the main memory because it was swapped out a little earlier, the
processor treats this invalid memory reference as a page fault and transfers
control from the program to the operating system to demand the page back into
the memory.
Advantages
Following are the advantages of Demand Paging −

 Large virtual memory.
 More efficient use of memory.
 There is no limit on degree of multiprogramming.
Disadvantages
 Number of tables and the amount of processor overhead for handling
page interrupts are greater than in the case of the simple paged
management techniques.

Page Replacement Algorithm
Page replacement algorithms are the techniques by which an operating
system decides which memory pages to swap out and write to disk when a page
of memory needs to be allocated. Page replacement happens whenever a page
fault occurs and no free frame can be used for the allocation, either because
no frames are available or because the number of free frames is lower than
required.

When a page that was selected for replacement and paged out is
referenced again, it has to be read in from disk, which requires waiting for I/O
completion. The time spent waiting for page-ins determines the quality of a page
replacement algorithm: the less time spent waiting, the better the algorithm.

A page replacement algorithm looks at the limited information about accessing
the pages provided by hardware, and tries to select which pages should be
replaced to minimize the total number of page misses, while balancing it with
the costs of primary storage and processor time of the algorithm itself. There
are many different page replacement algorithms. We evaluate an algorithm by
running it on a particular string of memory references and computing the number
of page faults.
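As a concrete illustration, the C sketch below counts page faults for the simplest policy, FIFO replacement, on a hypothetical reference string with 3 frames; the oldest resident page is always chosen as the victim.

#include <stdio.h>

#define FRAMES 3

int main(void) {
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4};   /* hypothetical reference string */
    int n = 8, frames[FRAMES], next = 0, faults = 0;

    for (int i = 0; i < FRAMES; i++)
        frames[i] = -1;                      /* all frames start empty */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == ref[i]) { hit = 1; break; }
        if (!hit) {                          /* page fault: replace the oldest page */
            frames[next] = ref[i];
            next = (next + 1) % FRAMES;      /* FIFO: advance the victim pointer */
            faults++;
        }
    }
    printf("page faults: %d\n", faults);     /* prints 7 for this string */
    return 0;
}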

CPU Scheduling in Operating Systems
Scheduling of processes/work is done to finish the work on
time. CPU Scheduling is a process that allows one process to use
the CPU while the execution of another process is on hold (in
standby) due to the unavailability of a resource such as I/O, thus
making full use of the CPU. The purpose of CPU scheduling is to
make the system more efficient, faster, and fairer.
CPU scheduling is a key part of how an operating system works. It
decides which task (or process) the CPU should work on at any given
time. This is important because a CPU can only handle one task at a
time, but there are usually many tasks that need to be processed. In
this article, we are going to discuss CPU scheduling in detail.
Whenever the CPU becomes idle, the operating system must select
one of the processes in the ready queue for execution. The selection
is carried out by the short-term (CPU) scheduler, which picks one of
the processes in memory that are ready to execute and allocates the
CPU to it.
Table of Contents
 What is a Process?
 How is Process Memory Used For Efficient Operation?
 What is Process Scheduling?
 Why do We Need to Schedule Processes?
 What is The Need For CPU Scheduling Algorithm?
 Terminologies Used in CPU Scheduling
 Things to Take Care While Designing a CPU Scheduling Algorithm
 What are the different types of CPU Scheduling Algorithms?
 1. First Come First Serve
 2. Shortest Job First(SJF)
 3. Longest Job First(LJF)
 4. Priority Scheduling
 5. Round robin
 6. Shortest Remaining Time First
 7. Longest Remaining Time First
 8. Highest Response Ratio Next
 9. Multiple Queue Scheduling
 10. Multilevel Feedback Queue Scheduling
 Comparison between various CPU Scheduling algorithms
What is a Process?
In computing, a process is the instance of a computer program
that is being executed by one or many threads. It contains the
program code and its activity. Depending on the operating
system (OS), a process may be made up of multiple threads of
execution that execute instructions concurrently.
How is Process Memory Used For Efficient
Operation?
The process memory is divided into four sections for efficient
operation:
 The text section is composed of the compiled program code,
which is read in from non-volatile storage when the program is
launched.
 The data section is made up of global and static variables,
allocated and initialized prior to executing main.
 The heap is used for flexible, or dynamic, memory allocation and
is managed via calls to new, delete, malloc, free, etc.
 The stack is used for local variables. Space on the stack is
reserved for a local variable when it is declared.
What is Process Scheduling?
Process Scheduling is the process of the process manager handling
the removal of an active process from the CPU and selecting
another process based on a specific strategy.
Process scheduling is an integral part of multiprogramming
operating systems. Such operating systems allow more than one
process to be loaded into executable memory at a time, and the
loaded processes share the CPU using time multiplexing.
There are three types of process schedulers:
 Long term or Job Scheduler
 Short term or CPU Scheduler
 Medium-term Scheduler
Why do We Need to Schedule Processes?
 Scheduling is important in many different computer environments. One of the most important areas is deciding which programs will run on the CPU. This task is handled by the Operating System (OS) of the computer, and there are many different policies we can choose from.
 Process Scheduling allows the OS to allocate CPU time to each process. Another important reason to use a process scheduling system is that it keeps the CPU busy at all times, which yields shorter response times for programs.
 Considering that there may be hundreds of programs that need to run, the OS must launch a program, stop it, switch to another program, and so on. The way the OS switches the CPU from one program to another is called “context switching”. By rapidly context-switching programs in and out of the CPU, the OS can give the user the illusion that all of their programs are running at once.
 So, given that the CPU can run only one program at a time and that the OS can swap programs in and out using context switches, how do we choose which program to run next, and for how long?
 That’s where scheduling comes in! First, you choose a metric, such as “time to completion”, defined as the time interval between a task entering the system and its completion. Second, you design a scheduling policy that minimizes that metric, since we want our tasks to finish as soon as possible.
What is The Need For CPU Scheduling
Algorithm?
CPU scheduling is the process of deciding which process will own the CPU while another process is suspended. The main function of CPU scheduling is to ensure that whenever the CPU becomes idle, the OS selects one of the processes available in the ready queue for execution.
In multiprogramming, if the long-term scheduler selects many I/O-bound processes, then most of the time the CPU remains idle. The job of an effective scheduler is to improve resource utilization.
If most processes keep switching from the running state to the waiting state, the CPU sits underused and system throughput suffers. To minimize this waste, the OS needs to schedule tasks so as to make full use of the CPU and avoid the possibility of deadlock.
Objectives of Process Scheduling Algorithm
 Utilization of CPU at maximum level. Keep CPU as busy as
possible.
 Allocation of CPU should be fair.
 Throughput should be Maximum. i.e. Number of processes
that complete their execution per time unit should be maximized.
 Minimum turnaround time, i.e. time taken by a process to
finish execution should be the least.
 There should be a minimum waiting time and the process
should not starve in the ready queue.
 Minimum response time. The time until a process produces its first response should be as short as possible.
Terminologies Used in CPU Scheduling
 Arrival Time: Time at which the process arrives in the ready
queue.
 Completion Time: Time at which process completes its
execution.
 Burst Time: Time required by a process for CPU execution.
 Turn Around Time: Time Difference between completion time
and arrival time.
o Turn Around Time = Completion Time – Arrival Time
 Waiting Time(W.T): Time Difference between turn around time
and burst time.
o Waiting Time = Turn Around Time – Burst Time
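As a quick worked example with arbitrary numbers: if a process arrives at time 2, needs a CPU burst of 5 units, and completes at time 12, then Turn Around Time = 12 – 2 = 10 and Waiting Time = 10 – 5 = 5.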
Things to Take Care While Designing a CPU
Scheduling Algorithm
Different CPU scheduling algorithms have different structures, and the choice of a particular algorithm depends on a variety of factors. Many criteria have been suggested for comparing CPU scheduling algorithms.
The criteria include the following:
 CPU Utilization: The main purpose of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it typically varies from 40 to 90 percent depending on the system load.
 Throughput: The number of processes completed per unit time is called throughput. It may vary depending on the length or duration of the processes.
 Turn Around Time: For a particular process, an important criterion is how long it takes to execute that process. The time elapsed from the submission of a process to its completion is known as the turnaround time. Turnaround time is the total of the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.
 Waiting Time: The scheduling algorithm does not affect the time required to complete a process once it has started executing. It only affects the waiting time of the process, i.e., the time it spends waiting in the ready queue.
 Response Time: In an interactive system, turnaround time is not the best criterion. A process may produce some output early and continue computing new results while previous results are shown to the user. So another measure is the time from the submission of a request until the first response is produced. This measure is called response time.
What Are The Different Types of CPU
Scheduling Algorithms?
There are mainly two types of scheduling methods:
 Preemptive Scheduling: Preemptive scheduling is used when a
process switches from running state to ready state or from the
waiting state to the ready state.
 Non-Preemptive Scheduling: Non-preemptive scheduling is used when a process terminates, or when a process switches from the running state to the waiting state.
Different types of CPU Scheduling Algorithms

Let us now learn about these CPU scheduling algorithms in operating systems one by one:
1. First Come First Serve
FCFS is considered to be the simplest of all operating system scheduling algorithms. The first come, first serve scheduling algorithm states that the process that requests the CPU first is allocated the CPU first, and it is implemented using a FIFO queue.
Characteristics of FCFS
 FCFS is a non-preemptive algorithm: once a process gets the CPU, it runs to completion.
 Tasks are always executed on a first-come, first-serve basis.
 FCFS is easy to implement and use.
 This algorithm is not very efficient, and the waiting time is quite high.
Advantages of FCFS
 Easy to implement
 First come, first serve method
Disadvantages of FCFS
 FCFS suffers from the convoy effect.
 The average waiting time is much higher than in other algorithms.
 Its simplicity comes at the cost of efficiency.
To learn about how to implement this CPU scheduling algorithm,
please refer to our detailed article on First come, First serve
Scheduling.
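
A minimal sketch of FCFS in C (assuming the arrays are already sorted by arrival time; all names and values are illustrative, not a standard API):

#include <stdio.h>

/* FCFS: run processes in arrival order and report per-process
   waiting and turnaround times. Assumes arrays sorted by arrival. */
void fcfs(const int *arrival, const int *burst, int n) {
    int time = 0;
    for (int i = 0; i < n; i++) {
        if (time < arrival[i])
            time = arrival[i];        /* CPU idles until the process arrives */
        int start = time;
        time += burst[i];             /* non-preemptive: run to completion */
        printf("P%d: waiting=%d turnaround=%d\n",
               i + 1,
               start - arrival[i],    /* waiting = start - arrival */
               time - arrival[i]);    /* turnaround = completion - arrival */
    }
}

int main(void) {
    int arrival[] = {0, 1, 2};        /* illustrative values */
    int burst[]   = {5, 3, 8};
    fcfs(arrival, burst, 3);
    return 0;
}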
2. Shortest Job First(SJF)
Shortest Job First (SJF) is a scheduling discipline that selects the waiting process with the smallest execution time to execute next. This scheduling method may or may not be preemptive, and it significantly reduces the average waiting time of the processes waiting to be executed.

Characteristics of SJF
 Shortest Job first has the advantage of having a minimum
average waiting time among all operating system scheduling
algorithms.
 An estimate of its completion time is associated with each job, and the scheduler uses it to pick the shortest.
 It may cause starvation if shorter processes keep coming. This
problem can be solved using the concept of ageing.
Advantages of SJF
 As SJF reduces the average waiting time thus, it is better than the
first come first serve scheduling algorithm.
 SJF is generally used for long term scheduling
Disadvantages of SJF
 One of the demerits SJF has is starvation.
 It is often complicated to predict the length of the upcoming CPU request.
To learn about how to implement this CPU scheduling algorithm,
please refer to our detailed article on Shortest Job First.
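
A minimal sketch of non-preemptive SJF in C (it assumes burst times are known in advance, which real systems can only estimate; names and values are illustrative):

#include <stdio.h>

/* Non-preemptive SJF: at each scheduling point, pick the arrived,
   unfinished process with the smallest burst time and run it fully. */
void sjf(const int *arrival, const int *burst, int n) {
    int done[16] = {0};               /* assumes n <= 16 */
    int time = 0, completed = 0;

    while (completed < n) {
        int pick = -1;
        for (int i = 0; i < n; i++) { /* find the shortest arrived job */
            if (done[i] || arrival[i] > time) continue;
            if (pick < 0 || burst[i] < burst[pick]) pick = i;
        }
        if (pick < 0) { time++; continue; }   /* CPU idle: nothing has arrived */
        time += burst[pick];          /* run the chosen job to completion */
        done[pick] = 1;
        completed++;
        printf("P%d: waiting=%d turnaround=%d\n", pick + 1,
               time - arrival[pick] - burst[pick], time - arrival[pick]);
    }
}

int main(void) {
    int arrival[] = {0, 1, 2};        /* illustrative values */
    int burst[]   = {7, 4, 1};
    sjf(arrival, burst, 3);
    return 0;
}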
3. Longest Job First(LJF)
The Longest Job First (LJF) scheduling process is just the opposite of Shortest Job First (SJF): as the name suggests, this algorithm is based on the fact that the process with the largest burst time is processed first. In its classic form, Longest Job First is non-preemptive.
Characteristics of LJF
 Among all the processes waiting in a waiting queue, CPU is
always assigned to the process having largest burst time.
 If two processes have the same burst time then the tie is broken
using FCFS i.e. the process that arrived first is processed first.
 The classic form of LJF is non-preemptive; its preemptive variant is Longest Remaining Time First (LRTF), discussed below.
Advantages of LJF
 No other task can be scheduled until the longest job or process executes completely.
 All the jobs or processes finish at approximately the same time.
Disadvantages of LJF
 Generally, the LJF algorithm gives a very high average waiting time and average turnaround time for a given set of processes.
 This may lead to a convoy effect.
To learn about how to implement this CPU scheduling algorithm,
please refer to our detailed article on the Longest job first
scheduling.
4. Priority Scheduling
Preemptive Priority CPU Scheduling is a preemptive method of CPU scheduling that works based on the priority of a process. In this algorithm, the scheduler assigns each process a priority, and the most important process must be executed first. In the case of a conflict, that is, when more than one process has equal priority, the algorithm falls back on FCFS (First Come First Serve) ordering.
Characteristics of Priority Scheduling
 Schedules tasks based on priority.
 When higher-priority work arrives while a task with lower priority is executing, the higher-priority process takes the place of the lower-priority one, and the latter is suspended until the execution is complete.
 The lower the number assigned, the higher the priority level of a process.
Advantages of Priority Scheduling
 The average waiting time is less than FCFS
 Less complex
Disadvantages of Priority Scheduling
 One of the most common demerits of the preemptive priority CPU scheduling algorithm is the starvation problem: a low-priority process may have to wait a very long time before it gets scheduled onto the CPU.
To learn about how to implement this CPU scheduling algorithm,
please refer to our detailed article on Priority Preemptive Scheduling
algorithm.
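As a small illustrative trace of the preemptive variant (arbitrary numbers; a lower number means higher priority): suppose P1 (priority 2, burst 4) starts at t = 0 and P2 (priority 1, burst 3) arrives at t = 1. P1 is preempted at t = 1, P2 runs to completion at t = 4, and P1 then resumes and finishes at t = 7.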
5. Round Robin
Round Robin is a CPU scheduling algorithm in which each process is cyclically assigned a fixed time slot. It is the preemptive version of the First Come First Serve CPU scheduling algorithm and is built around the time-sharing technique.
Characteristics of Round robin
 It’s simple, easy to use, and starvation-free, as all processes get a balanced CPU allocation.
 It is one of the most widely used methods in CPU scheduling.
 It is considered preemptive as the processes are given to the CPU
for a very limited time.
Advantages of Round robin
 Round robin seems to be fair as every process gets an equal
share of CPU.
 The newly created process is added to the end of the ready
queue.
To learn about how to implement this CPU scheduling algorithm,
please refer to our detailed article on the Round robin Scheduling
algorithm.
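
A minimal sketch of Round Robin in C (it assumes, for simplicity, that all processes arrive at time 0, so cycling over the array approximates the ready queue; names and values are illustrative):

#include <stdio.h>

/* Round Robin with a fixed time quantum; all processes are assumed
   to be in the ready queue at time 0. */
void round_robin(const int *burst, int n, int quantum) {
    int remaining[16];                /* assumes n <= 16 */
    int time = 0, left = n;

    for (int i = 0; i < n; i++)
        remaining[i] = burst[i];

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;            /* process i runs for one slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {  /* finished: report its numbers */
                left--;
                printf("P%d: completion=%d turnaround=%d waiting=%d\n",
                       i + 1, time, time, time - burst[i]);
            }
        }
    }
}

int main(void) {
    int burst[] = {5, 3, 8};          /* illustrative values */
    round_robin(burst, 3, 2);         /* quantum of 2 time units */
    return 0;
}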
6. Shortest Remaining Time First(SRTF)
Shortest Remaining Time First is the preemptive version of Shortest Job First, discussed earlier, in which the processor is allocated to the job closest to completion. In SRTF, the process with the smallest amount of time remaining until completion is selected to execute.
Characteristics of SRTF
 The SRTF algorithm processes jobs faster than the SJF algorithm, provided its overhead charges are not counted.
 Context switches occur far more often in SRTF than in SJF, consuming valuable CPU time. This adds to its processing time and diminishes its advantage of fast processing.
Advantages of SRTF
 In SRTF the short processes are handled very fast.
 The system also requires very little overhead since it only makes
a decision when a process completes or a new process is added.
Disadvantages of SRTF
 Like the shortest job first, it also has the potential for process
starvation.
 Long processes may be held off indefinitely if short processes are
continually added.
To learn about how to implement this CPU scheduling algorithm,
please refer to our detailed article on the shortest remaining time
first.
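As a small illustrative trace (arbitrary numbers): if P1 (burst 8) starts at t = 0 and P2 (burst 4) arrives at t = 1, then at t = 1 P1 has 7 units remaining versus P2's 4, so P1 is preempted. P2 runs to completion at t = 5, after which P1 resumes and finishes at t = 12.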
7. Longest Remaining Time First(LRTF)
Longest Remaining Time First is the preemptive version of the Longest Job First scheduling algorithm. The operating system uses it to schedule incoming processes in a systematic way. This algorithm schedules first those processes that have the longest processing time remaining until completion.
Characteristics of LRTF
 Among all the processes waiting in the ready queue, the CPU is always assigned to the process having the largest remaining burst time.
 If two processes have the same remaining time, the tie is broken using FCFS, i.e., the process that arrived first is scheduled first.
 LRTF is the preemptive counterpart of LJF: a newly arrived process with a longer remaining time preempts the running one.
 No other process can finish until the longest task executes completely, so all the jobs or processes finish at approximately the same time.
Advantages of LRTF
 Maximizes Throughput for Long Processes.
 Reduces Context Switching.
 Simplicity in Implementation.
Disadvantages of LRTF
 This algorithm gives a very high average waiting
time and average turn-around time for a given set of processes.
 This may lead to a convoy effect.
To learn about how to implement this CPU scheduling algorithm,
please refer to our detailed article on the longest remaining time
first.
8. Highest Response Ratio Next(HRRN)
Highest Response Ratio Next is a non-preemptive CPU scheduling algorithm, and it is considered one of the most optimal scheduling algorithms. As the name states, we compute the response ratio of all available processes and select the one with the highest response ratio. A process, once selected, runs until completion.
Characteristics of HRRN
 The criteria for HRRN is Response Ratio, and
the mode is Non-Preemptive.
 HRRN is considered a modification of Shortest Job First that reduces the problem of starvation.
 In contrast with SJF, under the HRRN scheduling algorithm the CPU is allotted to the process with the highest response ratio, not simply the process with the least burst time.
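The response ratio itself is computed as:
Response Ratio = (Waiting Time + Burst Time) / Burst Time
For example (illustrative numbers): a process that has waited 10 units and needs a 5-unit burst has a ratio of (10 + 5) / 5 = 3, while a freshly arrived 2-unit job has a ratio of (0 + 2) / 2 = 1, so the long-waiting process is chosen. Because waiting inflates the ratio over time, HRRN prevents starvation.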
Scheduling in Real Time Systems


Real-time systems are systems that carry real-time tasks. These tasks need to be performed immediately with a certain degree of urgency. In particular, these tasks are related to controlling certain events or reacting to them. Real-time tasks can be classified as hard real-time tasks and soft real-time tasks.
A hard real-time task must be performed by a specified time, and missing that deadline could lead to huge losses. In soft real-time tasks, a specified deadline can be missed, because the task can be rescheduled or completed after the specified time.
In real-time systems, the scheduler is considered the most important component, and it is typically a short-term task scheduler. The main focus of this scheduler is to reduce the response time associated with each process rather than merely handling deadlines.
If a preemptive scheduler is used, a real-time task needs to wait until the currently running task's time slice completes. In the case of a non-preemptive scheduler, even if the highest priority is allocated to the task, it needs to wait until the completion of the current task. That task can be slow or of lower priority, leading to a longer wait.
A better approach is designed by combining both preemptive and non-preemptive scheduling. This can be done by introducing time-based interrupts into priority-based systems: the currently running process is interrupted at timed intervals, and if a higher-priority process is present in the ready queue, it preempts the current process and executes.
Based on schedulability, implementation (static or dynamic), and the result (self or dependent) of analysis, the scheduling algorithms are classified as follows.

1. Static table-driven approaches:

These algorithms usually perform a static analysis of feasible schedules and capture one that is advantageous. The resulting schedule determines, at run time, when each task must start executing.

2. Static priority-driven preemptive approaches:

Similar to the first approach, these types of algorithms also use static analysis of scheduling. The difference is that instead of selecting a particular schedule, the analysis is used to assign priorities among the various tasks for preemptive scheduling.
Types of Real-Time Scheduling
Soft real-time is when a system continues to function
even if it’s unable to execute within an allotted time. If
the system has missed its deadline, it will not result in
critical consequences. The system can continue to
function, though with undesirable lower quality of output.
However, there are certain industries, such as robotics, automotive, utilities, and healthcare, where use cases have higher requirements for synchronization, timeliness, and worst-case execution time guarantees. Those examples fall within the hard real-time classification.
Hard real-time is when a system will cease to function if
a deadline is missed, which can result in catastrophic
consequences.
