OS UNIT I and II

An operating system (OS) is a crucial software that manages computer hardware and provides an interface for users and application programs. It performs essential functions such as process management, memory management, file management, and I/O device management, while also offering services like program execution and error detection. Various types of operating systems exist, including batch, time-sharing, distributed, network, and real-time systems, each serving different user needs and operational environments.

Uploaded by sangeethagv00

UNIT I

OPERATING SYSTEM BASICS

“An operating system is a program that manages a computer’s hardware. It also provides a basis for
application programs and acts as an intermediary between the computer user and the computer hardware”.

1.1 Basic Concepts of Operating System

An Operating System (OS) is a collection of software that manages computer hardware resources and provides
common services for computer programs. When you start using a computer system, it is the operating system
(OS) that acts as an interface between you and the computer hardware. The operating system is low-level
software, categorised as system software, which supports a computer's basic functions such as memory
management, task scheduling and controlling peripherals.
What is Operating System?
An Operating System (OS) is an interface between a computer user and computer hardware. An operating system
is software that performs all the basic tasks such as file management, memory management, process management,
handling input and output, and controlling peripheral devices such as disk drives and printers.

Generally, a Computer System consists of the following components:

 Computer Users are the people who use the overall computer system.
 Application Software consists of the programs that users use directly to perform different activities. These
programs are simple and easy to use, like browsers, Word, Excel, various editors, games, etc. They are
usually written in high-level languages such as Python, Java and C++.
 System Software is more complex in nature and lies closer to the computer hardware. It is usually
written in low-level languages like assembly language and includes operating systems (Microsoft
Windows, macOS, and Linux), compilers, assemblers, etc.
 Computer Hardware includes the monitor, keyboard, CPU, disks, memory, etc.

Some Operating System Examples


There are plenty of operating systems available, both paid and free (open source):

 Windows
 Linux
 MacOS
 iOS
 Android
Operating System - Functions

 Process Management
 I/O Device Management
 File Management
 Network Management
 Main Memory Management
 Secondary Storage Management
 Security Management
 Command Interpreter System
 Control over system performance
 Job Accounting
 Error Detection and Correction

1.2 Services of Operating System

An Operating System provides services to both the users and to the programs.
 It provides programs an environment to execute.
 It provides users the services to execute the programs in a convenient manner.
Following are a few common services provided by an operating system −
 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection

Program execution
Operating systems handle many kinds of activities, from user programs to system programs like printer
spoolers, name servers, file servers, etc. Each of these activities is encapsulated as a process.
A process includes the complete execution context (code to execute, data to manipulate, registers, OS
resources in use). Following are the major activities of an operating system with respect to program management −
 Loads a program into memory.
 Executes the program.
 Handles program's execution.
 Provides a mechanism for process synchronization.
 Provides a mechanism for process communication.
 Provides a mechanism for deadlock handling.
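As a concrete illustration of the first two activities, a user process can ask the OS to load and execute another program through the process-creation API. The sketch below uses Python's `subprocess` as a stand-in for the underlying system calls; the `-c` payload is an arbitrary example program.

```python
import subprocess
import sys

# The OS loads the program into memory, executes it, and reports its
# exit status back to the parent process.
result = subprocess.run(
    [sys.executable, "-c", "print(6 * 7)"],
    capture_output=True, text=True,
)

print(result.stdout.strip())  # 42
print(result.returncode)      # 0 means the program ran to completion
```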

I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the
peculiarities of specific hardware devices from the users.
An operating system manages the communication between users and device drivers.
 An I/O operation means a read or write operation on any file or any specific I/O device.
 The operating system provides access to the required I/O device when needed.

File system manipulation


A file represents a collection of related information. Computers can store files on the disk (secondary storage),
for long-term storage purpose. Examples of storage media include magnetic tape, magnetic disk and optical disk
drives like CD, DVD. Each of these media has its own properties like speed, capacity, data transfer rate and data
access methods.
A file system is normally organized into directories for easy navigation and usage. These directories may
contain files and other directories. Following are the major activities of an operating system with respect to file
management −
 A program needs to read a file or write a file.
 The operating system grants the program permission to operate on the file.
 Permissions vary: read-only, read-write, denied, and so on.
 The operating system provides an interface to the user to create/delete files.
 The operating system provides an interface to the user to create/delete directories.
 The operating system provides an interface to create backups of the file system.
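The create / write / read / delete cycle above can be sketched in Python. This is a minimal illustration; the directory and file name are hypothetical.

```python
import os
import tempfile

# Create a scratch directory and a file inside it.
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "notes.txt")   # illustrative file name

with open(path, "w") as f:                  # create + write
    f.write("operating systems\n")

with open(path) as f:                       # read back
    data = f.read()

os.remove(path)                              # delete the file
os.rmdir(workdir)                            # delete the directory
exists_after = os.path.exists(path)          # False once removed
```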

Communication
In the case of distributed systems, which are collections of processors that do not share memory, peripheral
devices, or a clock, the operating system manages communication between all the processes. Multiple processes
communicate with one another through communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention and security. Following are
the major activities of an operating system with respect to communication −
 Two processes often require data to be transferred between them
 Both the processes can be on one computer or on different computers, but are connected through a computer
network.
 Communication may be implemented by two methods, either by Shared Memory or by Message Passing.
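Message passing between two processes can be sketched as an illustrative parent/child exchange over a pipe; the message text is arbitrary.

```python
import subprocess
import sys

# The child process reads one message from its end of the pipe and
# sends back an uppercased reply; the parent blocks until it arrives.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; msg = sys.stdin.readline().strip(); print(msg.upper())"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

reply, _ = child.communicate("hello from parent\n")  # send, then receive
print(reply.strip())  # HELLO FROM PARENT
```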

Error Detection (or) Error handling


Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in the memory hardware.
Following are the major activities of an operating system with respect to error handling −
 The OS constantly checks for possible errors.
 The OS takes an appropriate action to ensure correct and consistent computing.

Resource Management (or) Resource Allocation


In a multi-user or multi-tasking environment, resources such as main memory, CPU cycles and file
storage must be allocated to each user or job. Following are the major activities of an operating system with respect to
resource management −
 The OS manages all kinds of resources using schedulers.
 CPU scheduling algorithms are used for better utilization of CPU.

Protection
Considering a computer system having multiple users and concurrent execution of multiple processes, the
various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes, or users to the resources
defined by a computer system. Following are the major activities of an operating system with respect to protection −
 The OS ensures that all access to system resources is controlled.
 The OS ensures that external I/O devices are protected from invalid access attempts.
 The OS provides authentication features for each user by means of passwords.
1.3 Operating System Types

Batch operating system


The users of a batch operating system do not interact with the computer directly. Each user prepares a job on an
off-line device like punched cards and submits it to the computer operator. To speed up processing, jobs with similar
needs are batched together and run as a group. The programmers leave their programs with the operator, and the
operator then sorts the programs with similar requirements into batches.
The problems with Batch Systems are as follows −
 Lack of interaction between the user and the job.
 CPU is often idle, because the speed of the mechanical I/O devices is slower than the CPU.
 Difficult to provide the desired priority.

Time-sharing operating systems


Time-sharing is a technique which enables many people, located at various terminals, to use a particular computer
system at the same time. Time-sharing or multitasking is a logical extension of multiprogramming. Processor's time
which is shared among multiple users simultaneously is termed as time-sharing.
The main difference between Multiprogrammed Batch Systems and Time-Sharing Systems is that in case of
Multiprogrammed batch systems, the objective is to maximize processor use, whereas in Time-Sharing Systems, the
objective is to minimize response time.
Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently
that each user can receive an immediate response. For example, in transaction processing, the processor executes
each user program in a short burst or quantum of computation. That is, if n users are present, each user gets a
time quantum in turn. When a user submits a command, the response time is a few seconds at most.
The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion
of processor time. Computer systems that were designed primarily as batch systems have been modified to time-sharing
systems.
Advantages of Timesharing operating systems are as follows −
 Provides the advantage of quick response.
 Avoids duplication of software.
 Reduces CPU idle time.
Disadvantages of Time-sharing operating systems are as follows −
 Problem of reliability.
 Question of security and integrity of user programs and data.
 Problem of data communication.
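The quantum-based switching described above can be simulated with a small round-robin sketch. The job names and burst times are illustrative; real schedulers are far more involved.

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate time-sharing: each job runs for at most one quantum,
    then the CPU switches to the next ready job."""
    ready = deque(burst_times.items())
    finish_order, clock = [], 0
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run                               # job uses the CPU
        if remaining > run:
            ready.append((name, remaining - run))  # preempted, requeue
        else:
            finish_order.append((name, clock))     # job completed
    return finish_order

order = round_robin({"A": 5, "B": 3, "C": 1}, quantum=2)
print(order)  # [('C', 5), ('B', 8), ('A', 9)]
```

Note that the short job C finishes quickly even though it arrived last, which is exactly the responsiveness time-sharing aims for.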

Distributed operating System


Distributed systems use multiple central processors to serve multiple real-time applications and multiple users.
Data processing jobs are distributed among the processors accordingly.
The processors communicate with one another through various communication lines (such as high-speed
buses or telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a
distributed system may vary in size and function, and are referred to as sites, nodes, computers, and so on.
The advantages of distributed systems are as follows −
 With resource sharing facility, a user at one site may be able to use the resources available at another.
 Sites can speed up the exchange of data with one another, for example via electronic mail.
 If one site fails in a distributed system, the remaining sites can potentially continue operating.
 Better service to the customers.
 Reduction of the load on the host computer.
 Reduction of delays in data processing.

Network operating System


A Network Operating System runs on a server and provides the server the capability to manage data, users,
groups, security, applications, and other networking functions. The primary purpose of the network operating system
is to allow shared file and printer access among multiple computers in a network, typically a local area network
(LAN), a private network or to other networks.
Examples of network operating systems include Microsoft Windows Server 2003, Microsoft Windows Server 2008,
UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
The advantages of network operating systems are as follows −
 Centralized servers are highly stable.
 Security is server managed.
 Upgrades to new technologies and hardware can be easily integrated into the system.
 Remote access to servers is possible from different locations and types of systems.
The disadvantages of network operating systems are as follows −
 High cost of buying and running a server.
 Dependency on a central location for most operations.
 Regular maintenance and updates are required.

Real Time operating System


A real-time system is defined as a data processing system in which the time interval required to process and
respond to inputs is so small that it controls the environment. The time taken by the system to respond to an input and
display the required updated information is termed the response time. In this method, the response time is very
short compared to online processing.
Real-time systems are used when there are rigid time requirements on the operation of a processor or the flow
of data and real-time systems can be used as a control device in a dedicated application. A real-time operating system
must have well-defined, fixed time constraints, otherwise the system will fail. For example, Scientific experiments,
medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.
There are two types of real-time operating systems.
Hard real-time systems
Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems, secondary storage is
limited or missing and the data is stored in ROM. In these systems, virtual memory is almost never found.
Soft real-time systems
Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains the priority
until it completes. Soft real-time systems have more limited utility than hard real-time systems.
1.4 Computer System Operation

There are five basic types of computer operations: inputting, processing, outputting, storing and controlling.
Computer operations are executed by the five primary functional units that make up a computer system. The units
correspond directly to the five types of operations.
Input: This is the process of entering data and programs into the computer system. Input devices are
Keyboard, Image scanner, Microphone, Pointing device, Graphics tablet, Joystick, Light pen, Mouse,
Optical, Pointing stick, Touchpad, Touchscreen, Trackball, Webcam, Softcam etc.

Control Unit (CU): The processes of input, output, processing and storage are performed under the
supervision of a unit called the Control Unit. It decides when to start receiving data, when to stop, where
to store data, and so on.

Memory Unit: It is used to store data and instructions.

Arithmetic Logic Unit (ALU): The major operations performed by the ALU are addition, subtraction,
multiplication, division, logic and comparison.

Output: This is the process of producing results from the data for getting useful information. Output
devices are monitors (LED, LCD, CRT, etc), Printers (all types), Plotters, projectors, LCD Projection
Panels, Computer Output Microfilm (COM), Speaker(s), Head Phone, etc.

1.5 I/O Structure

The I/O structure consists of programmed I/O, interrupt-driven I/O, DMA, the CPU, memory and external
devices, all connected with the help of peripheral I/O buses and general I/O buses.

Programmed I/O
In programmed I/O, the CPU checks whether the device is ready before writing input; if the device or
its buffer is busy, the program must wait until it becomes free to accept the data.
Once the input is taken, the CPU likewise checks whether the output device or output buffer is free
before the result is written out. This polling is repeated for every data transfer.
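The busy-waiting involved can be sketched with a mock device. This is a pure simulation; `MockDevice` is a hypothetical stand-in for a hardware status register, and the returned byte is arbitrary.

```python
# A mock device that becomes ready only after a few status checks.
class MockDevice:
    def __init__(self, ready_after):
        self.checks = 0
        self.ready_after = ready_after

    def ready(self):
        self.checks += 1
        return self.checks >= self.ready_after

    def read(self):
        return 0x2A  # the byte the device delivers (arbitrary)

def programmed_read(device):
    # The CPU polls the status flag and does no useful work meanwhile.
    while not device.ready():
        pass
    return device.read()

dev = MockDevice(ready_after=3)
value = programmed_read(dev)   # CPU polled the status 3 times
```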
I/O Interrupts
To initiate any I/O operation, the CPU first loads the appropriate registers of the device controller. The device
controller then examines the contents of these registers to determine what operation to perform.
There are two ways an I/O operation can be executed −
 Synchronous I/O − Control is returned to the user process only after the I/O
operation is completed.
 Asynchronous I/O − Control is returned to the user process without
waiting for the I/O operation to finish. Here, the I/O operation and the user process
run simultaneously.
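The difference can be sketched with a worker thread standing in for a slow device. This is illustrative only: real asynchronous I/O goes through kernel facilities, not application threads, and the payload string is arbitrary.

```python
import threading
import time

result = {}

def slow_device_read():
    time.sleep(0.05)              # pretend the device is busy
    result["data"] = "sector-0"   # illustrative payload

# Asynchronous style: start the I/O, keep computing, collect later.
t = threading.Thread(target=slow_device_read)
t.start()
overlap_work = sum(range(1000))   # CPU work overlapped with the I/O
t.join()                          # synchronize: wait for completion
```

A synchronous call would simply run `slow_device_read()` inline and block until it returned, with no overlapped work.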
DMA Structure
Direct Memory Access (DMA) is a method of handling I/O in which the device controller communicates
directly with memory without CPU involvement.
After the CPU sets up the resources of the I/O device, such as buffers, pointers and counters, the device
controller transfers blocks of data directly to or from memory without CPU intervention.
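A DMA-style transfer can be illustrated as one whole-block move into memory, in contrast to the byte-at-a-time loop of programmed I/O. This is a pure simulation with Python byte buffers; no real controller is involved.

```python
# The "device buffer" holds a block of data; "main memory" is a plain
# byte array. A DMA transfer moves the whole block in one operation.
device_block = bytes(range(16))
main_memory = bytearray(64)

def dma_transfer(src_block, memory, dest_addr):
    # One block move, with no per-byte CPU loop.
    memory[dest_addr:dest_addr + len(src_block)] = src_block
    return len(src_block)   # analogous to the completion report

moved = dma_transfer(device_block, main_memory, dest_addr=8)
```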

1.6 Storage Structure


Computer Storage contains many computer components that are used to store data. It is
traditionally divided into primary storage, secondary storage and tertiary storage. Details about these
storage types and devices used in them are as follows −
Primary Storage
Primary storage is also known as the main memory and is the memory directly accessible by the
CPU. Some primary storage devices are −
ROM
ROM is read only memory. This memory cannot be changed, it can only be read as required.
Since ROM is unchangeable memory, it is used for data and programs that are frequently required and
seldom changed, like the system boot program.
RAM
RAM is random access memory. It is volatile i.e. the data in RAM is lost when the power is
switched off. RAM is the major form of primary memory as it is quite fast. However, it is also quite
expensive.

Cache Memory
Cache is used to store data and instructions that are frequently required by the CPU so it doesn't
have to search them in the main memory. This is a small memory that is also very fast.
Secondary Storage
Secondary or external storage is not directly accessible by the CPU. The data from secondary
storage needs to be brought into the primary storage before the CPU can use it. Secondary storage
contains a large amount of data permanently. The different types of secondary storage devices are −
Hard Disk
Hard disks are the most widely used secondary storage devices. They are round, flat platters of
metal covered with magnetic oxide, available in many sizes ranging from 1 to 14 inches in
diameter.
Floppy Disk
They are flexible plastic discs which can bend, coated with magnetic oxide and are covered with
a plastic cover to provide protection. Floppy disks are also known as floppies and diskettes.
Memory Card
This has similar functionality to a flash drive but is in a card shape. It can easily be plugged into a port
and removed after its work is done. Memory cards are available in various sizes such as 8 MB, 16 MB,
64 MB, 128 MB, 256 MB, etc.
Flash Drive
This is also known as a pen drive. It helps in easy transportation of data from one system to
another. A pen drive is quite compact and comes with various features and designs.
CD-ROM
This is short for compact disc - read only memory. A CD is a shiny, silver-coloured disc. It
is pre-recorded, and the data on it cannot be altered. It usually has a storage capacity of 700 MB.

Tertiary Storage
This provides a third level of storage. Most rarely used data is archived in tertiary storage,
as it is even slower than secondary storage. Tertiary storage holds a large amount of data that is handled
and retrieved by machines, not humans. The different tertiary storage devices are −
Tape Libraries
These may contain one or more tape drives, a barcode reader for the tapes and a robot to load the
tapes. The capacity of these tape libraries is more than a thousand times that of hard drives and so they
are useful for storing large amounts of data.
Optical Jukeboxes
These are storage devices that can handle optical disks and provide tertiary storage ranging from
terabytes to petabytes. They can also be called optical disk libraries, robotic drives, etc.

1.7 Memory Hierarchy

Memory Hierarchy is one of the most required things in Computer Memory as it helps in
optimizing the memory available in the computer.

There are multiple levels in the memory hierarchy, each with a different size, cost,
etc. Some types of memory, like cache and main memory, are faster than other types but have a
smaller capacity and are more expensive, whereas other memories offer larger capacity but are
somewhat slower.
Data access is not uniform across all types of memory: some provide faster access, whereas some
provide slower access.

Types of Memory Hierarchy


This Memory Hierarchy Design is divided into 2 main types:
 External Memory or Secondary Memory: Comprises magnetic disk,
optical disk and magnetic tape, i.e. peripheral storage devices which are
accessible by the processor via an I/O module.
 Internal Memory or Primary Memory: Comprises main memory, cache
memory and CPU registers. This is directly accessible by the processor.
Memory Hierarchy Design
1. Registers
Registers are small, high-speed memory units located in the CPU. They are used to store
the most frequently used data and instructions. Registers have the fastest access time and the
smallest storage capacity, typically ranging from 16 to 64 bits.

2. Cache Memory
Cache memory is a small, fast memory unit located close to the CPU. It stores frequently
used data and instructions that have been recently accessed from the main memory. Cache
memory is designed to minimize the time it takes to access data by providing the CPU with
quick access to frequently used data.
3. Main Memory
Main memory, also known as RAM (Random Access Memory), is the primary memory of
a computer system. It has a larger storage capacity than cache memory, but it is slower. Main
memory is used to store data and instructions that are currently in use by the CPU.

Types of Main Memory


 Static RAM: Static RAM stores binary information in flip-flops, and the
information remains valid as long as power is supplied. It has a faster access time and
is used in implementing cache memory.
 Dynamic RAM: It stores binary information as a charge on a capacitor. It
requires refreshing circuitry to maintain the charge on the
capacitors, since the charge leaks away after a few milliseconds. It contains more
memory cells per unit area than SRAM.

4. Secondary Storage
Secondary storage, such as hard disk drives (HDD) and solid-state drives (SSD), is a
non-volatile memory unit that has a larger storage capacity than main memory. It is used to
store data and instructions that are not currently in use by the CPU. Secondary storage has
the slowest access time and is typically the least expensive type of memory in the memory
hierarchy.

5. Magnetic Disk
Magnetic disks are circular platters fabricated from metal or plastic and coated with a
magnetizable material. Magnetic disks work at high speed inside the computer and are
frequently used.

6. Magnetic Tape
Magnetic tape is a magnetic recording medium consisting of a plastic film coated with a
magnetizable layer. It is generally used for the backup of data. The access time of magnetic
tape is slower because access is sequential: reaching a given record requires winding the
tape to the desired position.
1.8 System Components

An Operating system is an interface between users and the hardware of a computer system.
It is a system software that is viewed as an organized collection of software consisting of
procedures and functions, providing an environment for the execution of programs.
The operating system manages resources of system software and computer hardware resources.
It allows computing resources to be used in an efficient way.
Programs interact with computer hardware with the help of operating system. A user can interact
with the operating system by making system calls or using OS commands.

Important Components of the Operating System:


 Process management
 Files management
 Command Interpreter
 System calls
 Signals
 Network management
 Security management
 I/O device management
 Secondary storage management
 Main memory management

Process Management:
A process is a program in execution. It consists of the following:
 Executable program
 Program’s data
 Stack and stack pointer
 Program counter and other CPU registers
 Details of opened files
A process can be suspended temporarily and the execution of another process taken up. A
suspended process can be restarted later. Before a process is suspended, its details are saved in a table
called the process table so that it can be resumed later. An operating system supports two system calls
to manage processes, Create and Kill −
 The Create system call is used to create a new process.
 The Kill system call is used to delete an existing process.
A process can create a number of child processes. Processes can communicate among themselves
either using shared memory or by message-passing techniques. Two processes running on two different
computers can communicate by sending messages over a network.
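The Create and Kill calls above can be mirrored from user space. In this sketch `subprocess.Popen` plays the role of Create and `terminate` the role of Kill; the long-sleeping child program is purely illustrative.

```python
import subprocess
import sys

# "Create": spawn a child process that would otherwise run for a long time.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"])
created = child.poll() is None     # None => the child is still running

# "Kill": delete the existing process before it finishes on its own.
child.terminate()
child.wait()
killed = child.poll() is not None  # an exit status is now available
```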

Files Management:
Files are used for long-term storage. Files are used for both input and output. Every operating
system provides a file management service. This file management service can also be treated as an
abstraction as it hides the information about the disks from the user. The operating system also provides a
system call for file management. The system call for file management includes

 File creation
 File deletion
 Read and Write operations
Files are stored in directories. System calls are provided to place a file in a directory or to remove a
file from a directory. Files in the system are protected to maintain the privacy of the user. The file
system is typically organized as a hierarchical directory structure.

Command Interpreter:
There are several ways for users to interface with the operating system. One of the approaches to
user interaction with the operating system is through commands. Command interpreter provides a
command-line interface.
It allows the user to enter a command on the command line prompt (cmd). The command
interpreter accepts and executes the commands entered by a user. For example, a shell is a command
interpreter under UNIX. The commands to be executed are implemented in two ways:
 The command interpreter itself contains code to be executed.
 The command is implemented through a system file. The necessary system file is
loaded into memory and executed.
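Both implementation styles can be sketched in a toy interpreter. The built-in command `greet` is hypothetical, and the external path assumes a Unix-like system with `echo` on the PATH.

```python
import shlex
import subprocess

# Style 1: the interpreter itself contains the code for built-ins.
BUILTINS = {"greet": lambda args: "hello " + " ".join(args)}

def interpret(line):
    parts = shlex.split(line)
    cmd, args = parts[0], parts[1:]
    if cmd in BUILTINS:
        return BUILTINS[cmd](args)
    # Style 2: load and execute an external program (system file).
    done = subprocess.run(parts, capture_output=True, text=True)
    return done.stdout.strip()

print(interpret("greet world"))    # handled inside the interpreter
print(interpret("echo external"))  # handled by an external program
```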

1.9 System Calls

System calls provide an interface to the services made by an operating system. The user interacts
with the operating system programs through System calls. It provides a level of abstraction as the user is
not aware of the implementation or execution of the call made.
System calls are available for the following operations:
 Process Management
 Memory Management
 File Operations
 Input / Output Operations
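Python's `os` module exposes thin wrappers over such system calls. The sketch below touches process management (`getpid`) and low-level file operations (`open`/`write`/`read`/`close`); the file path is illustrative.

```python
import os
import tempfile

pid = os.getpid()                 # process-management call: our own PID

workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "raw.bin")       # illustrative path

fd = os.open(path, os.O_CREAT | os.O_WRONLY)  # file-management calls
os.write(fd, b"syscall")
os.close(fd)

fd = os.open(path, os.O_RDONLY)
payload = os.read(fd, 7)          # read the 7 bytes back
os.close(fd)

os.remove(path)
os.rmdir(workdir)
```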

Signals
Signals are used in the operating systems to notify a process that a particular event has occurred.
Signals are the software or hardware interrupts that suspend the current execution of the task. Signals are
also used for inter-process communication. A signal follows this pattern:
 A signal is generated by the occurrence of a particular event, such as a mouse
click, the successful completion of a program, or an error notification.
 A generated signal is delivered to a process for further action.
 Once delivered, the signal must be handled.
 A signal can be synchronous or asynchronous and is handled either by a default
handler or by a user-defined handler.
The signal temporarily suspends the task the process was executing, saves its registers on
the stack, and starts running the signal-handling procedure assigned to that signal.
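The pattern above can be demonstrated directly. This is a POSIX-only sketch: the choice of `SIGUSR1` and the handler name are illustrative, and the process signals itself to keep the example self-contained.

```python
import os
import signal

events = []

def handler(signum, frame):        # a user-defined handler
    events.append(signum)

signal.signal(signal.SIGUSR1, handler)   # register the handler
os.kill(os.getpid(), signal.SIGUSR1)     # generate + deliver the signal
handled = events == [signal.SIGUSR1]     # the handler has run
```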

Network Management:
Network management is a set of processes and procedures that help organizations to optimize
their computer networks. Mainly, it ensures that users have the best possible experience while using
network applications and services.

Security Management:
The security mechanisms in an operating system ensure that authorized programs have access to
resources and unauthorized programs have no access to restricted resources. Security management
ensures that any access by a user to files, memory, the CPU and other hardware resources is
authorized by the operating system.

I/O Device Management:


The I/O device management component is an I/O manager that hides the details of hardware
devices and manages the main memory for devices using cache and spooling. This component provides a
buffer cache and general device driver code that allows the system to manage the main memory and the
hardware devices connected to it.

Secondary Storage Management:


Secondary storage provides long-term storage for data on the computer. The computer's main memory (RAM) is a volatile
storage device in which all running programs reside; it provides only temporary storage space for performing
tasks.
Secondary storage refers to the media devices other than RAM (e.g. CDs, DVDs, or hard disks)
that provide additional space for permanent storing of data and software programs which is also called
non-volatile storage.

Main memory management:


Main memory is a volatile storage device organized as a large sequence of addressable bytes used to
store data in active use. Main memory is also called Random Access Memory (RAM) and is the fastest
large-capacity storage available on PCs.

1.10 System Programs

System programming can be defined as the act of building systems software using system
programming languages. In the computer hierarchy, hardware comes at the bottom, followed by
the operating system, then system programs, and finally application programs.

1. File Management
A file is a collection of specific information stored in the memory of a computer system. File
management is the process of manipulating files in the computer system; it includes creating,
modifying and deleting files.
1. It helps to create new files in the computer system and place them at specific
locations.
2. It helps in easily and quickly locating files in the computer system.
3. It makes the process of sharing files among different users easy and user-
friendly.
2. Status Information
Some users ask for simple status information such as the date, time, amount of available memory, or
disk space. Other programs provide detailed performance, logging and debugging information, which is
more complex. All this information is formatted and displayed on output devices or printed.
3. File Modification
Editors are used to modify the contents of files stored on disk or other storage
devices. Special commands are used to search the contents of files or to perform transformations
on them.
4. Program Loading and Execution
When a program is ready after assembly and compilation, it must be loaded into
memory for execution. A loader is the part of an operating system that is responsible for loading
programs and libraries. It is one of the essential stages in starting a program.
5. Communications
Programs provide virtual connections among processes, users and computer systems.
Users can send messages to another user's screen, send e-mail, browse web pages, log in
remotely, and transfer files from one user to another.

1.11 System Design and Implementation

An operating system is a construct that allows user application programs to interact with the
system hardware. The operating system performs no useful work by itself; it provides an environment
in which different applications and programs can do useful work.
There are many problems that can occur while designing and implementing an operating system.
These are covered in operating system design and implementation.

Operating System Design Goals


It is quite complicated to define all the goals and specifications of an operating system while
designing it. The design changes depending on the type of operating system, i.e., whether it is a batch
system, time-shared system, single-user system, multi-user system, distributed system, etc.

There are basically two types of goals while designing an operating system. These are −
User Goals
The operating system should be convenient, easy to use, reliable, safe and fast according to the
users. However, these specifications are not very useful as there is no set method to achieve these goals.

System Goals
The operating system should be easy to design, implement and maintain. These are
specifications required by those who create, maintain and operate the operating system. But there is no
specific method to achieve these goals either.
Operating System Implementation
The operating system needs to be implemented after it is designed. Early operating systems were
written in assembly language, but now higher-level languages are used. The first system not written in
assembly language was the Master Control Program (MCP) for Burroughs computers.
Advantages of Higher Level Language
There are multiple advantages to implementing an operating system in a higher-level language:
the code can be written faster, it is more compact, and it is easier to debug and understand.
Disadvantages of Higher Level Language
Using a high-level language to implement an operating system leads to some loss of speed and an
increase in storage requirements. However, in modern systems only a small amount of code is
performance-critical, such as the CPU scheduler and the memory manager, and the bottleneck routines
can be replaced with assembly-language equivalents if required.

1.12 Introduction to Process

A process is basically a program in execution. The execution of a process must progress in a
sequential fashion. A process is defined as an entity which represents the basic unit of work to be
implemented in the system. A process in memory is divided into the following components:

S.N. Component & Description

1
Stack
The process Stack contains the temporary data such as method/function parameters, return address
and local variables.

2
Heap
This is dynamically allocated memory to a process during its run time.

3
Text
This includes the compiled program code, together with the current activity represented by the value
of the Program Counter and the contents of the processor's registers.

4
Data
This section contains the global and static variables.

1.13 Process State

When a process executes, it passes through different states. These states may differ between
operating systems, and their names are not standardized.
In general, a process can have one of the following five states at a time.

S.N. State & Description

1
Start
This is the initial state when a process is first started/created.

2
Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have the
processor allocated to them by the operating system so that they can run. A process may come
into this state after the Start state, or while running, if it is interrupted by the scheduler so that the
CPU can be assigned to some other process.

3
Running
Once the process has been assigned to a processor by the OS scheduler, the process state is set to
running and the processor executes its instructions.
4
Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for user
input, or waiting for a file to become available.

5
Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is moved
to the terminated state where it waits to be removed from main memory.
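The five states and their legal transitions can be sketched as a small lookup table. This is an illustrative Python model using the state names from the table above, not the implementation of any particular operating system:

```python
# Legal transitions of the five-state process model described above.
# Illustrative sketch only; a real scheduler tracks far more detail.
TRANSITIONS = {
    "start":      {"ready"},                # admitted by the OS
    "ready":      {"running"},              # dispatched by the scheduler
    "running":    {"ready",                 # preempted (e.g. timer interrupt)
                   "waiting",               # blocked on I/O or an event
                   "terminated"},           # finished or killed
    "waiting":    {"ready"},                # awaited event or I/O completed
    "terminated": set(),                    # no further transitions
}

def can_move(src, dst):
    """Return True if a process may move from state src to state dst."""
    return dst in TRANSITIONS.get(src, set())
```

Note that a waiting process never goes directly back to running: it must first re-enter the ready queue and be dispatched again.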

S.N. Information & Description

1
Process State
The current state of the process i.e., whether it is ready, running, waiting, or whatever.

2
Process privileges
This is required to allow/disallow access to system resources.

3
Process ID
Unique identification for each of the process in the operating system.

4
Pointer
A pointer to parent process.

5
Program Counter
Program Counter is a pointer to the address of the next instruction to be executed for this process.

6
CPU registers
The contents of the various CPU registers, which must be saved when the process leaves the running state so that it can resume execution later.

7
CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the process.
8
Memory management information
This includes page table information, memory limits, and segment tables, depending on the memory
system used by the operating system.

9
Accounting information
This includes the amount of CPU used for process execution, time limits, execution ID etc.

10
IO status information
This includes a list of I/O devices allocated to the process.

1.14 Process Control Block

A Process Control Block is a data structure maintained by the Operating System for every
process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed
to keep track of a process, as listed in the table above.
1.15 Process Scheduling

Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating
systems allow more than one process to be loaded into executable memory at a time, and the loaded
processes share the CPU using time multiplexing.
Categories of Scheduling
There are two categories of scheduling:

1. Non-preemptive: Here the resource can’t be taken from a process until the process
completes execution. The switching of resources occurs only when the running process
terminates or moves to a waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of
time, so a process may switch from the running state to the ready state, or from the
waiting state to the ready state. This switching occurs because the CPU may preempt
the running process in favour of a higher-priority process.

Process Scheduling Queues


 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.
Two-State Process Model
Two-state process model refers to running and non-running states which are described below

S.N. State & Description

1
Running
When a new process is created, it enters the system in the running state.

2
Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the
queue is a pointer to a particular process, and the queue is implemented using a linked list. The
dispatcher works as follows: when a process is interrupted, it is transferred to the waiting queue; if the
process has completed or aborted, it is discarded. In either case, the dispatcher then selects a process
from the queue to execute.

Schedulers
Schedulers are special system software which handle process scheduling in various ways.
Their main task is to select the jobs to be submitted into the system and to decide which
process to run. Schedulers are of three types −

 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler


It is also called a job scheduler. A long-term scheduler determines which programs are admitted
to the system for processing. It selects processes from the queue and loads them into memory, where
they become eligible for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the average
departure rate of processes leaving the system.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with a chosen set of criteria. It moves a process from the ready state to the running state:
the CPU scheduler selects one process from among those that are ready to execute and allocates the
CPU to it.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from memory and thus
reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the
swapped-out processes.
Comparison among Schedulers

S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler | It is a CPU scheduler | It is a process swapping scheduler.
2 | Speed is lesser than the short-term scheduler | Speed is the fastest of the three | Speed is in between the other two.
3 | It controls the degree of multiprogramming | It provides lesser control over the degree of multiprogramming | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems | It is also minimal in time-sharing systems | It is a part of time-sharing systems.
5 | It selects processes from the pool and loads them into memory for execution | It selects those processes which are ready to execute | It can re-introduce a process into memory so that its execution can be continued.

1.16 Operations on Process

There are many operations that can be performed on processes. Some of these are process
creation, process pre-emption, process blocking, and process termination.
Process Creation
Processes need to be created in the system for different operations.

 User request for process creation


 System initialization
 Execution of a process creation system call by a running process
 Batch job initialization
Process Preemption
Preemption uses an interrupt mechanism that suspends the currently executing process; the next
process to execute is then determined by the short-term scheduler. Preemption makes sure that all
processes get some CPU time for execution.

Process Blocking
The process is blocked if it is waiting for some event to occur. This event is often I/O: since I/O
operations are carried out by the devices themselves, they do not require the processor. After the event
is complete, the process goes back to the ready state.

Process Termination
After the process has completed the execution of its last instruction, it is terminated. The
resources held by a process are released after it is terminated.
A child process can be terminated by its parent process if its task is no longer needed. The child
process sends its status information to the parent process before it terminates. Also, when a parent
process is terminated, its child processes are terminated as well, since child processes cannot run once
their parent has terminated.

1.17 Interprocess Communication

"Inter-process communication is used for exchanging useful information between numerous
threads in one or more processes (or programs)."
Processes executing concurrently in the operating system may be either independent processes
or cooperating processes. The main mechanisms and design issues of IPC are:

 Shared Memory
 Message Passing
 Naming
 Synchronization
 Buffering

1.18 Communication in Client/Server Systems

Client/Server communication involves two components, namely a client and a server. There are
usually multiple clients in communication with a single server. The clients send requests to the server
and the server responds to the client requests.
There are three main methods to client/server communication. These are given as follows −
Sockets
Sockets facilitate communication between two processes on the same machine or different
machines.

Remote Procedure Calls

These are interprocess communication techniques that are used for client-server based
applications. A remote procedure call is also known as a subroutine call or a function call.

Pipes

These are interprocess communication methods that contain two end points. Data is entered from
one end of the pipe by a process and consumed from the other end by the other process.
The two different types of pipes are ordinary pipes and named pipes.
Ordinary pipes only allow one way communication. For two-way communication, two pipes
are required. Ordinary pipes have a parent child relationship between the processes as the pipes can only
be accessed by processes that created or inherited them.
Named pipes are more powerful than ordinary pipes and allow two way communication. These
pipes exist even after the processes using them have terminated. They need to be explicitly deleted when
not required anymore.
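As a minimal sketch of an ordinary pipe, Python's standard `os.pipe` call returns the two end points described above. Here one side writes and the other reads within a single program; in a real parent-child example, one end would be used by a forked child process.

```python
import os

# Ordinary (anonymous) pipe: one-way communication through two end points.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello through the pipe")  # data enters one end...
os.close(write_fd)                             # closing signals end-of-data

message = os.read(read_fd, 1024)               # ...and is consumed at the other
os.close(read_fd)
print(message.decode())                        # hello through the pipe
```

Two-way communication would require a second pipe, as noted above, since each pipe carries data in one direction only.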
1.19 Threads

A thread is a single sequential flow of execution of the tasks of a process, so it is also known as a
thread of execution or thread of control. Threads execute within a process in any operating system.
Each thread of the same process makes use of a separate program counter and a stack of
activation records and control blocks. A thread is often referred to as a lightweight process.

Need of Thread:
o It takes far less time to create a new thread in an existing process than to create a
new process.
o Threads can share the common data, they do not need to use Inter- Process
communication.
o Context switching is faster when working with threads.
o It takes less time to terminate a thread than a process.

Types of Threads
In the operating system, there are two types of threads.
1. Kernel level thread.
2. User-level thread.

User-level thread
The operating system does not recognize user-level threads. User threads are easy to implement,
and they are implemented by the user. The kernel knows nothing about user-level threads.

Kernel level thread
For each thread and each process there is a thread control block and a process control block in the
system. The kernel-level thread implementation offers system calls to create and manage threads
from user space.
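A short sketch of why threads need no inter-process communication to share data: the threads below update one shared counter directly, guarded by a lock. The thread and iteration counts are arbitrary illustration values.

```python
import threading

# Threads of the same process share global data directly,
# so no inter-process communication mechanism is needed.
counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:              # protect the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: every thread updated the same shared variable
```

Without the lock, concurrent increments could interleave and lose updates, which is why thread synchronization is still required even though no IPC is.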

IMPORTANT QUESTION

2 MARKS
1. Define Operating System.
2. Write the services of OS.
3. What are the Memory Hierarchy.
4. What are all the operations on process?
5. Define threads.

5 MARKS
1. Briefly explain the computer system operation.
2. Explain the I/O Structure and Storage Structure.
3. Explain system design and implementation.
4. Discuss about communication in client/server system.
5. Discuss the system components.

10 MARKS
1. Explain the types of Operating System.
2. Describe about system calls and system program.
3. Discuss about process state and PCB.
4. Explain process scheduling
5. Explain IPC.
UNIT II
CPU SCHEDULING ALGORITHMS AND DEADLOCK PREVENTION

Scheduling of this kind is a fundamental operating-system function. Almost all computer resources are
scheduled before use.
2.1 CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to
be executed. The selection process is carried out by the short-term scheduler (or CPU scheduler). The scheduler
selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
The selection is made according to a scheduling algorithm.

CPU scheduling decisions may take place when a process:


1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Scheduling under circumstances 1 and 4 is non-preemptive; all other scheduling is preemptive.

Non preemptive
Once the CPU has been allocated to a process, the process keeps it until it terminates or switches to the
waiting state (for example, to wait for I/O).

Preemptive
The CPU can be taken away from the running process, for example when the process switches from the
running state to the ready state on an interrupt, or when a process moves from the waiting state to the
ready state.

2.2 Scheduling Criteria


There are many different criteria to check when considering the "best" scheduling algorithm. They are:
CPU Utilization
To make the best use of the CPU and not waste any CPU cycles, the CPU should be kept working most of
the time (ideally 100% of the time). In a real system, CPU utilization should range from 40% (lightly loaded) to
90% (heavily loaded).

Throughput
It is the total number of processes completed per unit time, or in other words, the total amount of work done
in a unit of time. This may range from 10 per second to 1 per hour, depending on the specific processes.

Turnaround Time
It is the amount of time taken to execute a particular process, i.e. The interval from time of submission of the
process to the time of completion of the process(Wall clock time).
Waiting Time
The sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.

Load Average
It is the average number of processes residing in the ready queue waiting for their turn to get into the CPU.

Response Time
Amount of time it takes from when a request was submitted until the first response is produced. Remember, it
is the time till the first response and not the completion of process execution (final response).
In general, CPU utilization and throughput are maximized, while the other factors are minimized, for proper
optimization.

2.3 Scheduling Algorithms

FIRST COME FIRST SERVE

First Come First Serve (FCFS) scheduling simply schedules jobs according to their arrival time: the job
which comes first in the ready queue gets the CPU first. The lesser the arrival time of a job, the sooner it gets
the CPU. FCFS scheduling may cause the convoy effect if the burst time of the first process is the longest
among all the jobs.
 First Come First Serve is just like a FIFO (First In First Out) queue data structure, where the data element
which is added to the queue first is the one that leaves the queue first.
 This is used in Batch Systems.
 It's easy to understand and implement programmatically, using a queue data structure, where a new process
enters through the tail of the queue, and the scheduler selects a process from the head of the queue.
 A perfect real-life example of FCFS scheduling is buying tickets at a ticket counter.

Advantages
1. Suitable for batch system
2. FCFS is pretty simple and easy to implement.
3. Eventually, every process will get a chance to run, so starvation doesn't occur.
Disadvantages
1. The scheduling method is non-preemptive; each process runs to completion.
2. Due to the non-preemptive nature of the algorithm, short processes stuck behind a long one may wait a
very long time (the convoy effect).
3. Although it is easy to implement, it is poor in performance, since the average waiting time is higher
compared to other scheduling algorithms.
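The FCFS behaviour, including the long average waiting time, can be illustrated with a small simulation; the process names, arrival times, and burst times below are a hypothetical example.

```python
# FCFS sketch: serve processes strictly in arrival order and compute
# each process's waiting time.
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time) tuples."""
    waiting = {}
    clock = 0
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        clock = max(clock, arrival)       # CPU may sit idle until arrival
        waiting[name] = clock - arrival   # time spent in the ready queue
        clock += burst                    # non-preemptive: run to completion
    return waiting

w = fcfs([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)])
print(w)                          # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(w.values()) / len(w))   # average waiting time: 17.0
```

With the long job P1 first, the short jobs P2 and P3 wait behind it, which is exactly the convoy effect.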

SHORTEST JOB FIRST


A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm, which
associates with each process the length of the process's next CPU burst. When the CPU is available, it is
assigned to the process that has the minimum next CPU burst. If the next CPU bursts of two processes are the
same, FCFS scheduling is used to break the tie.
 SJF scheduling algorithm, schedules the processes according to their burst time.
 In SJF scheduling, the process with the lowest burst time, among the list of available processes in the ready
queue, is going to be scheduled next.
 However, it is very difficult to predict the burst time needed for a process hence this algorithm is very
difficult to implement in the system.

Advantages
 Short processes are executed first, followed by longer processes.
 Throughput is increased because more processes can be executed in less time.
Disadvantages:
 The burst time of each process must be known to the CPU beforehand, which is often not possible.
 Longer processes will have more waiting time, eventually they'll suffer starvation.
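A sketch of non-preemptive SJF, assuming the burst times are known in advance; the process data is illustrative.

```python
# Non-preemptive SJF sketch: among the processes that have arrived,
# always run the one with the shortest next CPU burst (FCFS breaks ties).
def sjf(processes):
    """processes: list of (name, arrival_time, burst_time) tuples."""
    pending = sorted(processes, key=lambda p: p[1])  # by arrival time
    waiting, clock = {}, 0
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                  # CPU idle: jump to the next arrival
            clock = pending[0][1]
            continue
        # shortest burst wins; earlier arrival breaks the tie
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        pending.remove((name, arrival, burst))
        waiting[name] = clock - arrival
        clock += burst
    return waiting

w = sjf([("P1", 0, 6), ("P2", 0, 8), ("P3", 0, 7), ("P4", 0, 3)])
print(w)                          # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(w.values()) / len(w))   # average waiting time: 7.0
```

FCFS on the same four processes would give an average waiting time of 10.25, so SJF clearly shortens the average wait.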

ROUND-ROBIN
The round-robin (RR) scheduling technique is intended mainly for time-sharing systems. This algorithm is
related to FCFS scheduling, but pre-emption is included to toggle among processes. A small unit of time which is
termed as a time quantum or time slice has to be defined.
A time quantum is usually from 10 to 100 milliseconds. The ready queue is treated as a circular queue.
The CPU scheduler goes around the ready queue, allocating the CPU to each process for an interval of up to
one time quantum.
 A fixed time is allotted to each process, called quantum, for execution.
 Once a process has executed for the given time period, it is preempted and another process executes for
its time period.
 Context switching is used to save the states of preempted processes.
 If the time quantum is very large, the RR scheduling algorithm behaves like FCFS; if the time quantum is
very small, RR is called processor sharing, which creates the appearance that each process has its own
processor.
 Context switching is central to RR scheduling: if the context-switch time is 10 percent of the time
quantum, then about 10 percent of CPU time is spent on context switching.
 The ready queue is maintained as a circular queue, so when all processes have had a turn, the
scheduler gives the first process another turn, and so on.

Advantages
1. It is practical to implement, because it does not depend on knowing the burst time in advance.
2. It doesn't suffer from the problem of starvation or the convoy effect.
3. All the jobs get a fair allocation of CPU.
Disadvantages
1. The higher the time quantum, the higher the response time in the system.
2. The lower the time quantum, the higher the context switching overhead in the system.
3. Deciding a perfect time quantum is really a very difficult task in the system.
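The circular-queue behaviour of RR can be sketched as follows; the process data and the quantum of 4 are illustrative.

```python
from collections import deque

# Round-robin sketch: each process runs for at most one time quantum,
# then goes to the back of the circular ready queue.
def round_robin(processes, quantum):
    """processes: list of (name, burst_time), all arriving at time 0."""
    queue = deque(processes)
    completion, clock = {}, 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted, re-queued
        else:
            completion[name] = clock               # finished
    return completion

c = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(c)   # {'P2': 7, 'P3': 10, 'P1': 30}
```

With quantum 4, the short jobs P2 and P3 finish quickly instead of waiting behind P1 as they would under FCFS. The sketch ignores context-switch overhead, which in a real system grows as the quantum shrinks.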

PRIORITY SCHEDULING
The scheduler considers the priority of processes: a priority is assigned to each process, and the CPU is
allocated to the highest-priority process. Equal-priority processes are scheduled in FCFS order. Priorities are
denoted by numbers, but there is no hard and fast rule about their direction; some systems treat 0 as the lowest
priority, and others treat it as the highest.
Priority scheduling suffers from a starvation problem: a low-priority process may be blocked
indefinitely, because every time a higher-priority process acquires the CPU, the low-priority process keeps
waiting in the queue. The aging technique overcomes this starvation problem: the priority of a process that has
been waiting in the system for a long time is gradually increased.

Advantages
 The priority of a process can be selected based on memory requirement, time requirement or user
preference. For example, a high end game will have better graphics, that means the process which updates the screen
in a game will have higher priority so as to achieve better graphics performance.

Disadvantages:
 A second scheduling algorithm is required to schedule the processes which have same priority.
 In preemptive priority scheduling, a higher priority process can execute ahead of an already executing
lower priority process. If lower priority process keeps waiting for higher priority processes, starvation occurs.
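A sketch of non-preemptive priority scheduling with aging, assuming the convention that a lower number means higher priority (as noted above, there is no fixed rule); the process data is illustrative.

```python
# Non-preemptive priority scheduling with aging.
# Convention assumed here: lower number = higher priority.
def priority_schedule(processes):
    """processes: list of (name, priority, burst_time), all at time 0."""
    order, clock, waits = [], 0, {}
    pending = list(processes)
    while pending:
        # highest priority (smallest number) runs next
        best = min(pending, key=lambda p: p[1])
        pending.remove(best)
        name, _prio, burst = best
        waits[name] = clock          # time spent waiting before dispatch
        clock += burst
        order.append(name)
        # aging: every process still waiting creeps up one priority level,
        # so low-priority processes cannot starve forever
        pending = [(n, pr - 1, b) for n, pr, b in pending]
    return order, waits

order, waits = priority_schedule([("P1", 3, 10), ("P2", 1, 1), ("P3", 4, 2)])
print(order)   # ['P2', 'P1', 'P3']
print(waits)   # {'P2': 0, 'P1': 1, 'P3': 11}
```

With only three jobs the aging step barely matters, but in a long-running system it is exactly what prevents the indefinite blocking described above.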
MULTILEVEL QUEUE SCHEDULING
This scheduling algorithm was created for situations in which processes are easily classified into
different groups, for example:
1. System Processes
2. Interactive Processes
3. Interactive Editing Processes
4. Batch Processes
5. Student Processes

MULTILEVEL FEEDBACK QUEUE SCHEDULING


In a multilevel queue-scheduling algorithm, processes are permanently assigned to a queue on entry to the
system. Processes do not move between queues. This setup has the advantage of low scheduling overhead, but the
disadvantage of being inflexible.
Multilevel feedback queue scheduling, however, allows a process to move between queues. The idea is to
separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it will be moved to
a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-
priority queue. This form of aging prevents starvation. In general, a multilevel feedback queue scheduler is defined
by the following parameters:
 The number of queues.
 The scheduling algorithm for each queue.
 The method used to determine when to upgrade a process to a higher-priority queue.
 The method used to determine when to demote a process to a lower-priority queue.
 The method used to determine which queue a process will enter when that process
needs service.
The definition of a multilevel feedback queue scheduler makes it the most general CPU-scheduling algorithm.
It can be configured to match a specific system under design.

2.4 Semaphores
Semaphores are integer variables used to solve the critical-section problem by means of two atomic
operations, wait and signal, which are used for process synchronization.
The definitions of wait and signal are as follows
 Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the
caller busy-waits until S becomes positive.

wait(S)
{
    while (S <= 0)
        ;        /* busy wait */
    S--;
}

 Signal
The signal operation increments the value of its argument S.

signal(S)
{
    S++;
}
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores. Details about these
are given as follows
 Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain. They are used to coordinate
resource access, where the semaphore count is the number of available resources. When a resource is
added the count is incremented, and when a resource is removed the count is decremented.
 Binary Semaphores
The binary semaphores are like counting semaphores but their value is restricted to 0 and 1. The wait operation
only works when the semaphore is 1 and the signal operation succeeds when semaphore is 0. It is sometimes
easier to implement binary semaphores than counting semaphores.
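In practice, semaphore implementations block the waiting process rather than busy-waiting as in the pseudocode above. Python's `threading.Semaphore` is one such implementation: `acquire()` plays the role of wait and `release()` the role of signal. The sketch below uses a counting semaphore initialised to 2 to limit concurrent access to a pool of two resources; the thread count is illustrative.

```python
import threading

# Counting semaphore guarding a pool of 2 identical resources.
pool = threading.Semaphore(2)        # semaphore count = available resources
in_use, peak = 0, 0
guard = threading.Lock()             # protects the two bookkeeping counters

def use_resource():
    global in_use, peak
    with pool:                       # wait(S): blocks while both are busy
        with guard:
            in_use += 1
            peak = max(peak, in_use)
        # ... use the resource here ...
        with guard:
            in_use -= 1
                                     # leaving the 'with pool' block is signal(S)

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("peak concurrent users:", peak)   # never exceeds 2
```

A binary semaphore is simply the same construct initialised to 1, which then behaves like a mutex lock.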

Advantages of Semaphores

Some of the advantages of semaphores are as follows


 Semaphores allow only one process into the critical section. They follow the mutual exclusion principle strictly
and are much more efficient than some other methods of synchronization.
 There is no resource wastage because of busy waiting in semaphores as processor time is not wasted
unnecessarily to check if a condition is fulfilled to allow a process to access the critical section.
 Semaphores are implemented in the machine independent code of the microkernel. So they are machine
independent.

Disadvantages of Semaphores

Some of the disadvantages of semaphores are as follows −


 Semaphores are complicated so the wait and signal operations must be implemented in the correct order to
prevent deadlocks.
 Semaphores are impractical for large-scale use, as their use leads to a loss of modularity. This happens because
the wait and signal operations prevent the creation of a structured layout for the system.
 Semaphores may lead to a priority inversion where low priority processes may access the critical section first
and high priority processes later.

2.5 Classic Problems of Synchronization


In our solutions to the problems, we use semaphores for synchronization, since that is the traditional way to
present such solutions. However, actual implementations of these solutions could use mutex locks in place of binary
semaphores.
These problems are used for testing nearly every newly proposed synchronization scheme. The following
problems of synchronization are considered as classical problems:
1. Bounded-buffer (or Producer-Consumer) Problem,
2. Dining-Philosophers Problem,
3. Readers and Writers Problem,
4. Sleeping Barber Problem
These problems are summarized below.
 Bounded-buffer (or Producer-Consumer) Problem :
The bounded-buffer problem is also called the producer-consumer problem. A solution is to create two
counting semaphores, “full” and “empty”, to keep track of the current number of full and empty buffer slots
respectively. The producer produces items and the consumer consumes them, but both operate on the shared
buffer one slot at a time.
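The "full"/"empty" solution can be sketched with Python semaphores; the buffer capacity and item count are illustrative values.

```python
import threading
from collections import deque

# Bounded-buffer sketch: "empty" counts free slots, "full" counts filled
# slots, and a mutex protects the buffer itself.
CAPACITY, ITEMS = 3, 20
buffer = deque()
empty = threading.Semaphore(CAPACITY)   # initially all slots are free
full = threading.Semaphore(0)           # initially no slot is filled
mutex = threading.Lock()
consumed = []

def producer():
    for item in range(ITEMS):
        empty.acquire()                 # wait(empty): need a free slot
        with mutex:
            buffer.append(item)
        full.release()                  # signal(full): one more filled slot

def consumer():
    for _ in range(ITEMS):
        full.acquire()                  # wait(full): need a filled slot
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                 # signal(empty): one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(ITEMS)))  # all items arrive, in order
```

The producer can never overfill the buffer (it blocks on "empty") and the consumer can never read an empty buffer (it blocks on "full"), which is exactly what the two counting semaphores guarantee.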

 Dining-Philosophers Problem :
The dining-philosophers problem states that K philosophers are seated around a circular table with
one chopstick between each pair of philosophers. A philosopher may eat only if he can pick up the two
chopsticks adjacent to him. Each chopstick may be picked up by either of its adjacent philosophers, but not by
both at once. The problem involves allocating limited resources to a group of processes in a deadlock-free
and starvation-free manner.

 Readers and Writers Problem :


Suppose that a database is to be shared among several concurrent processes. Some of these processes
may want only to read the database, whereas others may want to update (that is, to read and write) the database.
We distinguish between these two types of processes by referring to the former as readers and to the latter as
writers. Precisely in OS we call this situation as the readers-writers problem. Problem parameters:
 One set of data is shared among a number of processes.

 Once a writer is ready, it performs its write. Only one writer may write at a time.

 If a process is writing, no other process can read it.

 If at least one reader is reading, no other process can write.

 Readers may not write and only read.

 Sleeping Barber Problem :


Consider a barber shop with one barber, one barber chair, and N chairs for waiting. When there are no
customers, the barber goes to sleep in the barber chair and must be woken when a customer comes in. While
the barber is cutting hair, new customers take the empty waiting seats, or leave if there is no vacancy.
2.6 Basic Concept of Deadlocks
In a multiprogramming system, numerous processes compete for a finite number of resources. A
process requests resources and, if they are not available at that time, the process enters a waiting state. Sometimes
a waiting process is never again able to change its state, because the resources it has requested are held by other
waiting processes.
That condition is termed a deadlock. Every process needs some resources to complete its execution.
1. The process requests a resource.
2. The OS grants the resource if it is available; otherwise the process waits.
3. The process uses the resource and releases it on completion.
A deadlock is a situation where each of a set of processes waits for a resource which is assigned to
some other process. In this situation, none of the processes can execute, since the resource each needs is held
by some other process which is itself waiting for a resource to be released.

Definition: A deadlock happens in operating system when two or more processes need some resource to
complete their execution that is held by the other process.

Under the standard mode of operation, any process may use a resource in only the below mentioned
sequence:
1. Request: If the request cannot be granted immediately (for example, when another
process is using the resource), the requesting process must wait until it can obtain the
resource.
2. Use: The process can run on the resource (like when the resource is a printer, its job/process
is to print on the printer).
3. Release: The process releases the resource (like, terminating or exiting any specific
process).
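The request-use-release sequence maps directly onto lock operations. Here is a minimal Python sketch that treats a printer as a single-instance resource; the names (`printer`, `job`) are hypothetical and chosen only for illustration:

```python
import threading

printer = threading.Lock()   # a single-instance resource

def job(pages, log):
    printer.acquire()        # 1. Request: blocks until the printer is free
    for p in range(pages):   # 2. Use: print while holding the resource
        log.append(p)
    printer.release()        # 3. Release: hand the printer back

log = []
t1 = threading.Thread(target=job, args=(2, log))
t2 = threading.Thread(target=job, args=(2, log))
t1.start(); t2.start(); t1.join(); t2.join()
print(len(log))  # 4: both jobs printed all their pages, one at a time
```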

2.7 Deadlock Characterization


A deadlock happens in operating system when two or more processes need some resource to complete their
execution that is held by the other process.
A deadlock occurs only if all four Coffman conditions hold simultaneously; the conditions are not completely
independent of one another. They are given as follows −
Mutual Exclusion
There should be a resource that can only be held by one process at a time. In the diagram below, there is a single
instance of Resource 1 and it is held by Process 1 only.

Hold and Wait


A process can hold multiple resources and still request more resources from other processes which are holding them.
In the diagram given below, Process 2 holds Resource 2 and Resource 3 and is requesting the Resource 1 which is
held by Process 1.

No Preemption
A resource cannot be preempted from a process by force. A process can only release a resource voluntarily. In the
diagram below, Process 2 cannot preempt Resource 1 from Process 1. It will only be released when Process 1
relinquishes it voluntarily after its execution is complete.

Circular Wait
A process is waiting for the resource held by the second process, which is waiting for the resource held by the third
process and so on, till the last process is waiting for a resource held by the first process. This forms a circular chain.
For example: Process 1 is allocated Resource2 and it is requesting Resource 1. Similarly, Process 2 is allocated
Resource 1 and it is requesting Resource 2. This forms a circular wait loop.
2.8 Deadlock Prevention
Eliminate Mutual Exclusion: It is not possible to violate mutual exclusion, because some resources, such
as a tape drive or printer, are inherently non-shareable.
Eliminate Hold and Wait: Allocate all required resources to a process before it starts executing; this
eliminates the hold-and-wait condition but leads to low device utilization. For example, if a process needs a
printer only at a later time, but the printer is allocated before execution starts, the printer remains blocked until
the process has completed. Alternatively, a process may be required to release its current set of resources before
making any new request. This solution may lead to starvation.

Eliminate No Preemption : Preempt resources from a process when they are required by other, higher-priority
processes.
Eliminate Circular Wait : Assign each resource a numerical ordering, and require every process to request
resources only in increasing order of that numbering. For example, if process P1 has been allocated resource R5,
a subsequent request by P1 for R4 or R3 (numbered lower than R5) will not be granted; only requests for
resources numbered higher than R5 will be granted.
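Resource ordering is straightforward to enforce in code by always acquiring locks in ascending numeric order. Below is a small Python sketch under that assumption, with resources R1..R5 modelled as numbered locks (the helper names are hypothetical):

```python
import threading

# Numbered resources R1..R5; a process may only acquire them in
# increasing numeric order, which rules out a circular wait.
locks = {i: threading.Lock() for i in range(1, 6)}

def acquire_in_order(wanted):
    """Acquire the requested resource numbers in ascending order."""
    order = sorted(wanted)        # e.g. {5, 3} -> [3, 5]
    for r in order:
        locks[r].acquire()
    return order

def release_all(held):
    for r in reversed(held):      # release in the opposite order
        locks[r].release()

held = acquire_in_order({5, 3})
print(held)  # [3, 5]: R3 is taken before R5, never the other way round
release_all(held)
```

Because every process climbs the numbering in the same direction, no cycle of "P holds a lower-numbered resource and waits for a higher one held by Q, which waits for a lower one" can form.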
Detection and Recovery: Another approach to dealing with deadlocks is to detect and recover from them when
they occur. This can involve killing one or more of the processes involved in the deadlock or releasing some of the
resources they hold.

2.9 Deadlock Avoidance

Resource Allocation Graph


The resource allocation graph (RAG) is used to visualize the system’s current state as a graph. Sometimes,
if there are fewer processes, we can quickly spot a deadlock in the system by looking at the graph rather than the
tables we use in Banker’s algorithm. Deadlock avoidance can also be done with Banker’s Algorithm.

Banker’s Algorithm
The Banker's Algorithm is a resource-allocation and deadlock-avoidance algorithm that tests every request
made by a process for resources. It checks whether granting the request would leave the system in a safe state:
if so, the request is allowed; if no safe state would result, the request is denied.

Inputs to Banker’s Algorithm


1. The maximum resource needs of each process.
2. The resources currently allocated to each process.
3. The total resources currently available in the system.

The request will be granted only under the following conditions:


1. The request made by the process is less than or equal to the maximum need declared for that process.
2. The request made by the process is less than or equal to the resources currently available in the system.

Timeouts: To avoid deadlocks caused by indefinite waiting, a timeout mechanism can be used to limit the amount
of time a process can wait for a resource. If the resource is unavailable within the timeout period, the process can
be forced to release its current resources and try again later.
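Such a timeout can be sketched as a lock acquisition that gives up after a bounded wait. A minimal Python illustration follows; the 0.1-second timeout is an arbitrary choice for the example:

```python
import threading

resource = threading.Lock()
resource.acquire()            # another process currently holds the resource

attempts = []
# The waiting process gives up after 0.1 s instead of blocking forever;
# on a timeout it would release its own resources and retry later.
attempts.append(resource.acquire(timeout=0.1))   # False: timed out

resource.release()            # the holder finishes with the resource
attempts.append(resource.acquire(timeout=0.1))   # True: acquired this time
resource.release()
print(attempts)  # [False, True]
```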
Example:
Total resources in the system:
    A  B  C  D
    6  5  7  6

Available system resources:
    A  B  C  D
    3  1  1  2

Processes (currently allocated resources):
        A  B  C  D
    P1  1  2  2  1
    P2  1  0  3  3
    P3  1  2  1  0

Processes (maximum resources):
        A  B  C  D
    P1  3  3  2  2
    P2  1  2  3  4
    P3  1  3  5  0

 Need = Maximum Resources Requirement – Currently Allocated Resources.

Processes (need resources):
        A  B  C  D
    P1  2  1  0  1
    P2  0  2  0  1
    P3  0  1  4  0
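The safety check that the Banker's Algorithm runs on these figures can be sketched as follows. This minimal Python version returns a safe sequence of process indices if one exists (here indices 0, 1, 2 correspond to P1, P2, P3):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: return a safe sequence, or None if unsafe."""
    work = available[:]
    finish = [False] * len(allocation)
    sequence = []
    while len(sequence) < len(allocation):
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Pretend Pi runs to completion and returns its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None              # remaining processes cannot finish
    return sequence

# Figures from the worked example above (resources A, B, C, D)
available  = [3, 1, 1, 2]
allocation = [[1, 2, 2, 1], [1, 0, 3, 3], [1, 2, 1, 0]]
maximum    = [[3, 3, 2, 2], [1, 2, 3, 4], [1, 3, 5, 0]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]

print(need)                                  # [[2, 1, 0, 1], [0, 2, 0, 1], [0, 1, 4, 0]]
print(is_safe(available, allocation, need))  # [0, 1, 2] -> safe sequence P1, P2, P3
```

With these numbers the system is in a safe state: P1 can finish with what is available, returning enough resources for P2 and then P3.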

2.10 Deadlock Detection


A deadlock occurs when two or more processes are blocked, waiting for each other to release the resources
they need. This can lead to a system-wide stall, where no process can make progress.
Deadlock Detection :
1. If resources have a single instance
In this case, deadlock can be detected by running an algorithm that checks for a cycle in the Resource Allocation
Graph. With single-instance resources, the presence of a cycle in the graph is a sufficient (and necessary) condition for deadlock.
In the above diagram, resource 1 and resource 2 have single instances. There is a cycle R1 → P1 → R2 → P2. So,
Deadlock is Confirmed.
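Cycle detection on a single-instance resource allocation graph can be done with a depth-first search. Below is a minimal Python sketch of the cycle just described, where an edge P → R is a request and an edge R → P is an assignment:

```python
# Resource allocation graph from the example: R1 -> P1 -> R2 -> P2 -> R1.
graph = {
    "R1": ["P1"],   # R1 is assigned to P1
    "P1": ["R2"],   # P1 requests R2
    "R2": ["P2"],   # R2 is assigned to P2
    "P2": ["R1"],   # P2 requests R1
}

def has_cycle(graph):
    """DFS with three-colour marking: a back edge to a GREY node is a cycle."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {node: WHITE for node in graph}
    def visit(node):
        colour[node] = GREY              # node is on the current DFS path
        for nxt in graph.get(node, []):
            if colour.get(nxt, WHITE) == GREY:
                return True              # back edge: cycle found
            if colour.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        colour[node] = BLACK             # fully explored, no cycle through it
        return False
    return any(colour[n] == WHITE and visit(n) for n in graph)

print(has_cycle(graph))  # True -> deadlock confirmed
```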

2. If there are multiple instances of resources


Detecting a cycle is necessary but not sufficient for deadlock detection in this case; the system may or may not
be in deadlock, depending on the situation.
3. Wait-For Graph Algorithm
The Wait-For Graph algorithm is a deadlock-detection technique for systems in which each resource has a single
instance. It constructs a wait-for graph, a directed graph obtained from the resource allocation graph by collapsing
the resource nodes, so that an edge from Pi to Pj means Pi is waiting for a resource held by Pj; a cycle in this
graph indicates a deadlock.

2.11 Recovery of Deadlock.


A general-purpose operating system such as Windows does not handle deadlock recovery, as it is a time- and
space-consuming process. Real-time operating systems do use deadlock recovery.
1. Killing the process
Either kill all the processes involved in the deadlock at once, or kill them one by one: after killing each
process, check for deadlock again and repeat until the system recovers. Killing processes breaks the
circular-wait condition.
2. Resource Preemption
Resources are preempted from the processes involved in the deadlock, and the preempted resources are
allocated to other processes, so that the system has a chance of recovering from the deadlock. In this case,
the preempted processes may suffer starvation.
3. Concurrency Control
Concurrency control mechanisms are used to prevent data inconsistencies in systems with multiple
concurrent processes. These mechanisms ensure that concurrent processes do not access the same data at the
same time, which can lead to inconsistencies and errors. Deadlocks can occur in concurrent systems when two or
more processes are blocked, waiting for each other to release the resources they need. This can result in a
system-wide stall, where no process can make progress. Concurrency control mechanisms can help prevent
deadlocks by managing access to shared resources and ensuring that concurrent processes do not interfere with
each other.

Detection and Recovery: If deadlocks do occur, the operating system must detect and resolve them. Deadlock
detection algorithms, such as the Wait-For Graph, are used to identify deadlocks, and recovery algorithms, such
as the Rollback and Abort algorithm, are used to resolve them. The recovery algorithm releases the resources
held by one or more processes, allowing the system to continue to make progress.

Advantages of Deadlock Detection and Recovery in Operating Systems:

1. Improved System Stability: Deadlocks can cause system-wide stalls, and detecting and resolving deadlocks
can help to improve the stability of the system.
2. Better Resource Utilization: By detecting and resolving deadlocks, the operating system can ensure that
resources are efficiently utilized and that the system remains responsive to user requests.
3. Better System Design: Deadlock detection and recovery algorithms can provide insight into the behavior of the
system and the relationships between processes and resources, helping to inform and improve the design of the
system.

Disadvantages of Deadlock Detection and Recovery in Operating Systems:

1. Performance Overhead: Deadlock detection and recovery algorithms can introduce a significant overhead in
terms of performance, as the system must regularly check for deadlocks and take appropriate action to resolve
them.
2. Complexity: Deadlock detection and recovery algorithms can be complex to implement, especially if they use
advanced techniques such as the Resource Allocation Graph or Timestamping.
3. False Positives and Negatives: Deadlock detection algorithms are not perfect and may produce false positives
or negatives, indicating the presence of deadlocks when they do not exist or failing to detect deadlocks that do
exist.
4. Risk of Data Loss: In some cases, recovery algorithms may require rolling back the state of one or more
processes, leading to data loss or corruption.

IMPORTANT QUESTIONS

2 MARKS
1. Define deadlock.
2. What are the characteristics of deadlock?
3. Define semaphores.
4. What is turnaround time?
5. Difference between preemptive and non-preemptive scheduling.

5 MARKS
1. Explain scheduling criteria.
2. Discuss the deadlock characteristics.
3. Briefly explain the CPU Scheduler.

10 MARKS
1. Discuss about the types of scheduling algorithms.
2. Briefly explain the semaphores.
3. Explain the classic problems of synchronization.
4. Explain deadlock prevention and avoidance.
5. Discuss about deadlock detection and recovery.
