OS UNIT I and II
“An operating system is a program that manages a computer’s hardware. It also provides a basis for
application programs and acts as an intermediary between the computer user and the computer hardware”.
An Operating System (OS) is a collection of software that manages computer hardware resources and provides
common services for computer programs. When you use a computer system, it is the operating system that acts
as the interface between you and the computer hardware. The operating system is low-level software,
categorised as system software, that supports a computer's basic functions such as memory management,
task scheduling, and control of peripherals.
What is Operating System?
An Operating System (OS) is an interface between a computer user and computer hardware. An operating system
is software that performs all the basic tasks such as file management, memory management, process management,
handling input and output, and controlling peripheral devices such as disk drives and printers.
Computer Users are the people who use the overall computer system.
Application Software is the software that users work with directly to perform different activities. Such
software is simple and easy to use, for example browsers, Word, Excel, various editors, and games. It is
usually written in high-level languages such as Python, Java and C++.
System Software is more complex in nature and sits closer to the computer hardware. It is usually
written in low-level languages such as assembly language and includes operating systems (Microsoft
Windows, macOS, and Linux), compilers, and assemblers.
Computer Hardware includes the monitor, keyboard, CPU, disks, memory, etc.
Popular operating systems include:
Windows
Linux
macOS
iOS
Android
Operating System - Functions
Process Management
I/O Device Management
File Management
Network Management
Main Memory Management
Secondary Storage Management
Security Management
Command Interpreter System
Control over system performance
Job Accounting
Error Detection and Correction
An Operating System provides services to both the users and to the programs.
It provides programs an environment to execute.
It provides users the services to execute the programs in a convenient manner.
Following are a few common services provided by an operating system −
Program execution
I/O operations
File System manipulation
Communication
Error Detection
Resource Allocation
Protection
Program execution
Operating systems handle many kinds of activities, from user programs to system programs such as the printer
spooler, name servers, and file servers. Each of these activities is encapsulated as a process.
A process includes the complete execution context (the code to execute, the data to manipulate, registers, and
the OS resources in use). Following are the major activities of an operating system with respect to program management −
Loads a program into memory.
Executes the program.
Handles program's execution.
Provides a mechanism for process synchronization.
Provides a mechanism for process communication.
Provides a mechanism for deadlock handling.
I/O Operations
An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the
peculiarities of specific hardware devices from the users.
An operating system manages the communication between users and device drivers.
An I/O operation means a read or write operation on a file or on a specific I/O device.
The operating system provides access to the required I/O device when needed.
Communication
In the case of distributed systems, which are collections of processors that do not share memory, peripheral
devices, or a clock, the operating system manages communication between all the processes. Multiple processes
communicate with one another through communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention and security. Following are
the major activities of an operating system with respect to communication −
Two processes often require data to be transferred between them
Both the processes can be on one computer or on different computers, but are connected through a computer
network.
Communication may be implemented by two methods, either by Shared Memory or by Message Passing.
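As a sketch of the message-passing method, the following example uses Python's standard multiprocessing module to pass a message between two processes through a kernel-managed queue (the message text and the function name producer are illustrative only):

```python
# Minimal message-passing sketch: two processes that share no memory
# exchange data through a queue managed by the OS.
from multiprocessing import Process, Queue

def producer(q):
    # Send a message to the other process through the queue.
    q.put("hello from producer")

if __name__ == "__main__":
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()
    msg = q.get()        # Blocks until a message arrives.
    p.join()
    print(msg)           # → hello from producer
```

With shared memory, by contrast, the two processes would map a common region and coordinate access themselves.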
Protection
Considering a computer system having multiple users and concurrent execution of multiple processes, the
various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes, or users to the resources
defined by a computer system. Following are the major activities of an operating system with respect to protection −
The OS ensures that all access to system resources is controlled.
The OS ensures that external I/O devices are protected from invalid access attempts.
The OS provides authentication features for each user by means of passwords.
1.3 Operating System Types
There are five basic types of computer operations: inputting, processing, outputting, storing and controlling.
Computer operations are executed by the five primary functional units that make up a computer system. The units
correspond directly to the five types of operations.
Input: This is the process of entering data and programs into the computer system. Input devices are
Keyboard, Image scanner, Microphone, Pointing device, Graphics tablet, Joystick, Light pen, Mouse,
Optical, Pointing stick, Touchpad, Touchscreen, Trackball, Webcam, Softcam etc.
Control Unit (CU): The processes of input, output, processing and storage are performed under the
supervision of a unit called the Control Unit. It decides when to start receiving data, when to stop,
where to store data, etc.
Arithmetic Logic Unit (ALU): The major operations performed by the ALU are addition, subtraction,
multiplication, division, logic and comparison.
Output: This is the process of producing results from the data for getting useful information. Output
devices are monitors (LED, LCD, CRT, etc), Printers (all types), Plotters, projectors, LCD Projection
Panels, Computer Output Microfilm (COM), Speaker(s), Head Phone, etc.
The I/O structure consists of programmed I/O, interrupt-driven I/O, DMA, the CPU, memory, and external
devices, all connected with the help of peripheral I/O buses and general I/O buses.
Programmed I/O
In programmed I/O, when input is to be written, the device must be ready to accept the data;
otherwise the program must wait until the device or its buffer becomes free and can take
the input.
Once the input has been taken, the program checks whether the output device or output buffer is free
before the data is written out. This process is repeated for every transfer of data.
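The polling behaviour described above can be sketched as follows; the FakeDevice class is purely illustrative, standing in for a device's status and data registers:

```python
# Sketch of programmed (polled) I/O: the CPU busy-waits on a status
# flag before each transfer, wasting cycles while the device is busy.
class FakeDevice:
    def __init__(self):
        self.ready = True           # Status register: device free?
        self.buffer = []            # Data register / device buffer.

    def write(self, byte):
        self.buffer.append(byte)

def programmed_io_write(device, data):
    for byte in data:
        while not device.ready:     # Poll until the device is free.
            pass                    # CPU cycles are wasted here.
        device.write(byte)          # Transfer one unit of data.

dev = FakeDevice()
programmed_io_write(dev, b"hi")
print(dev.buffer)                   # → [104, 105]
```

Interrupt-driven I/O avoids this busy-waiting: the device interrupts the CPU when it becomes ready.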
I/O Interrupts
To initiate any I/O operation, the CPU first loads the appropriate registers of the device controller. The device
controller then examines the contents of the registers to determine what operation to perform.
There are two ways in which an I/O operation can proceed. These are as follows −
Synchronous I/O − Control is returned to the user process only after the I/O
operation is completed.
Asynchronous I/O − Control is returned to the user process without
waiting for the I/O operation to finish. Here, the I/O operation and the user process
run simultaneously.
DMA Structure
Direct Memory Access (DMA) is a method of handling I/O in which the device controller
communicates with memory directly, without CPU involvement.
After the CPU sets up the resources for the I/O operation (buffers, pointers, and counters), the device
controller transfers blocks of data directly to or from memory without CPU intervention.
Cache Memory
Cache is used to store data and instructions that are frequently required by the CPU, so that it does
not have to fetch them from main memory every time. It is a small memory that is also very fast.
Secondary Storage
Secondary or external storage is not directly accessible by the CPU. The data from secondary
storage needs to be brought into the primary storage before the CPU can use it. Secondary storage
contains a large amount of data permanently. The different types of secondary storage devices are −
Hard Disk
Hard disks are the most widely used secondary storage devices. They are round, flat platters of
metal covered with magnetic oxide, available in many sizes ranging from 1 to 14 inches in
diameter.
Floppy Disk
They are flexible plastic discs which can bend, coated with magnetic oxide and are covered with
a plastic cover to provide protection. Floppy disks are also known as floppies and diskettes.
Memory Card
This has similar functionality to a flash drive but is in a card shape. It can easily be plugged into a port
and removed after its work is done. Memory cards are available in various sizes such as 8MB, 16MB,
64MB, 128MB, 256MB etc.
Flash Drive
This is also known as a pen drive. It helps in easy transportation of data from one system to
another. A pen drive is quite compact and comes with various features and designs.
CD-ROM
This is short for compact disk - read only memory. A CD is a shiny metal disk of silver colour. It
is already pre recorded and the data on it cannot be altered. It usually has a storage capacity of 700 MB.
Tertiary Storage
This provides a third level of storage. Most rarely used data is archived in tertiary storage,
as it is even slower than secondary storage. Tertiary storage holds large amounts of data that are handled
and retrieved by machines, not humans. The different tertiary storage devices are −
Tape Libraries
These may contain one or more tape drives, a barcode reader for the tapes and a robot to load the
tapes. The capacity of these tape libraries is more than a thousand times that of hard drives and so they
are useful for storing large amounts of data.
Optical Jukeboxes
These are storage devices that can handle optical disks and provide tertiary storage ranging from
terabytes to petabytes. They can also be called optical disk libraries, robotic drives, etc.
The memory hierarchy is one of the most important ideas in computer memory, as it helps in
optimizing the use of the memory available in the computer.
There are multiple levels in the hierarchy, each with a different size, cost, and speed. Some types
of memory, like cache and main memory, are faster than other types but are smaller and more
expensive, whereas other types offer larger storage capacity but are slower.
Access time is also not the same for all types of memory: some levels have faster access, while
others have slower access.
1. Registers
Registers are the small, very fast storage locations built into the CPU itself. They sit at the
top of the memory hierarchy and hold the data the processor is operating on at that instant.
2. Cache Memory
Cache memory is a small, fast memory unit located close to the CPU. It stores frequently
used data and instructions that have been recently accessed from the main memory. Cache
memory is designed to minimize the time it takes to access data by providing the CPU with
quick access to frequently used data.
3. Main Memory
Main memory, also known as RAM (Random Access Memory), is the primary memory of
a computer system. It has a larger storage capacity than cache memory, but it is slower. Main
memory is used to store data and instructions that are currently in use by the CPU.
4. Secondary Storage
Secondary storage, such as hard disk drives (HDD) and solid-state drives (SSD), is a
non-volatile memory unit that has a larger storage capacity than main memory. It is used to
store data and instructions that are not currently in use by the CPU. Secondary storage has
the slowest access time and is typically the least expensive type of memory in the memory
hierarchy.
5. Magnetic Disk
Magnetic disks are circular platters fabricated from metal or plastic and coated with a
magnetizable material. They rotate at high speed inside the computer and are frequently used.
6. Magnetic Tape
Magnetic tape is a magnetic recording medium consisting of a thin plastic film coated with a
magnetizable layer. It is generally used for backup of data. Access time for magnetic tape is
slower, since the drive must wind through the strip to reach the required data.
1.8 System Components
An Operating system is an interface between users and the hardware of a computer system.
It is a system software that is viewed as an organized collection of software consisting of
procedures and functions, providing an environment for the execution of programs.
The operating system manages the system's software and hardware resources.
It allows computing resources to be used in an efficient way.
Programs interact with computer hardware with the help of operating system. A user can interact
with the operating system by making system calls or using OS commands.
Process Management:
A process is a program in execution. It consists of the following:
Executable program
Program’s data
Stack and stack pointer
Program counter and other CPU registers
Details of opened files
A process can be suspended temporarily and the execution of another process taken up. A
suspended process can be restarted later. Before suspending a process, its details are saved in a table
called the process table so that it can be resumed later on. An operating system supports two basic system calls
to manage processes, Create and Kill −
Create − a system call used to create a new process.
Kill − a system call used to delete an existing process.
A process can create a number of child processes. Processes can communicate among themselves
either using shared memory or by message-passing techniques. Two processes running on two different
computers can communicate by sending messages over a network.
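On Unix-like systems, the Create and Kill operations described above correspond to the fork and kill system calls. A minimal sketch using Python's os module (Unix only; the child's busy loop is purely illustrative):

```python
# Process creation and termination via POSIX system calls.
import os, signal, time

pid = os.fork()                     # "Create": duplicate the current process.
if pid == 0:
    # Child process: loop until the parent kills it.
    while True:
        time.sleep(0.1)
else:
    os.kill(pid, signal.SIGTERM)    # "Kill": delete the child process.
    _, status = os.waitpid(pid, 0)  # Reap the child, collect its exit status.
    print("child terminated by signal:",
          os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGTERM)
```

The parent retrieves the child's termination status with waitpid, matching the note above that a child reports status information to its parent.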
Files Management:
Files are used for long-term storage, and for both input and output. Every operating
system provides a file management service. This service can also be treated as an
abstraction, as it hides the details of the disks from the user. The operating system also provides
system calls for file management, which include
File creation
File deletion
Read and Write operations
Files are stored in directories. System calls are provided to place a file in a directory or to remove a
file from a directory. Files in the system are protected to maintain the privacy of the user. The figure
below shows the hierarchical file-system directory structure.
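The file-management system calls listed above can be sketched with Python's os module, which wraps the underlying open/write/read/unlink calls (the filename demo.txt is arbitrary):

```python
# File creation, write, read, and deletion through OS system calls.
import os

fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY, 0o644)  # file creation
os.write(fd, b"hello file")                                # write operation
os.close(fd)

fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 100)                                    # read operation
os.close(fd)

os.unlink("demo.txt")                                      # file deletion
print(data)                                                # → b'hello file'
```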
Command Interpreter:
There are several ways for users to interface with the operating system. One of the approaches to
user interaction with the operating system is through commands. Command interpreter provides a
command-line interface.
It allows the user to enter a command at the command-line prompt. The command
interpreter accepts and executes the commands entered by the user. For example, the shell is the command
interpreter under UNIX. Commands are implemented in one of two ways:
The command interpreter itself contains code to be executed.
The command is implemented through a system file. The necessary system file is
loaded into memory and executed.
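A minimal command-interpreter sketch showing both implementation styles; the built-in command greet is made up for illustration, while external commands are run as system programs:

```python
# Toy shell: built-ins are handled by the interpreter's own code;
# anything else is looked up as a program on disk and executed.
import shlex, subprocess

def interpret(line):
    args = shlex.split(line)
    if not args:
        return ""
    if args[0] == "greet":                  # Built-in: code inside the shell.
        return "hello, " + " ".join(args[1:])
    # External: load the system file (program) into memory and execute it.
    result = subprocess.run(args, capture_output=True, text=True)
    return result.stdout.strip()

print(interpret("greet world"))             # → hello, world
print(interpret("echo external command"))   # → external command
```

A real shell adds a read-eval loop, environment handling, and job control on top of this skeleton.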
System calls provide an interface to the services made available by an operating system. The user interacts
with operating system programs through system calls. They provide a level of abstraction, as the user is
not aware of how the call is implemented or executed.
System calls are available for the following operations:
Process Management
Memory Management
File Operations
Input / Output Operations
Signals
Signals are used in the operating systems to notify a process that a particular event has occurred.
Signals are the software or hardware interrupts that suspend the current execution of the task. Signals are
also used for inter-process communication. A signal follows this pattern −
A signal is generated by the occurrence of a particular event; this can be the clicking
of the mouse, the successful execution of a program, an error notification,
etc.
A generated signal is delivered to a process for further action.
Once delivered, the signal must be handled.
A signal can be synchronous or asynchronous, and is handled either by a default
handler or by a user-defined handler.
A delivered signal temporarily suspends the task the process was executing, saves its registers on
the stack, and starts running the special signal-handling procedure assigned to that signal.
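The generate/deliver/handle pattern can be sketched with Python's standard signal module (Unix signal names assumed; the events list simply records what the handler saw):

```python
# Signal sketch: register a user-defined handler, generate a signal,
# and observe that delivery interrupted the normal flow to run it.
import os, signal

events = []

def handler(signum, frame):              # User-defined signal handler.
    events.append(signum)

signal.signal(signal.SIGUSR1, handler)   # Register handler for SIGUSR1.
os.kill(os.getpid(), signal.SIGUSR1)     # Generate: send the signal to self.
# Delivery interrupts the main flow and runs handler() before we continue.
print(events == [signal.SIGUSR1])        # → True
```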
Network Management:
Network management is a set of processes and procedures that help organizations to optimize
their computer networks. Mainly, it ensures that users have the best possible experience while using
network applications and services.
Security Management:
The security mechanisms in an operating system ensure that authorized programs have access to
resources and unauthorized programs do not have access to restricted resources. Security management
refers to the various checks by which access to files, memory, the CPU, and other hardware
resources must be authorized by the operating system.
System programming can be defined as the act of building system software using system
programming languages. In the computer hierarchy, hardware comes at the lowest level, followed by
the operating system, then system programs, and finally application programs.
1. File Management
A file is a collection of specific information stored in the memory of a computer system. File
management is defined as the process of manipulating files in the computer system, its management
includes the process of creating, modifying and deleting files.
1. It helps to create new files in the computer system and place them at specific
locations.
2. It helps in easily and quickly locating these files in the computer system.
3. It makes the process of sharing files among different users very easy and user-
friendly.
2. Status Information
Some users ask for information such as the date, time, amount of available memory, or disk space.
Others require detailed performance, logging, and debugging information, which is more complex.
All this information is formatted and displayed on output devices or printed.
File Modification
These programs are used to modify the contents of files. For files stored on disks or other storage
devices, different types of editors are used. Special commands are provided for searching the contents
of files or performing transformations on files.
Program Loading and Execution –
When a program is ready, after assembly and compilation, it must be loaded into
memory for execution. A loader is the part of an operating system that is responsible for loading
programs and libraries. It is one of the essential stages of starting a program.
Communications –
Programs provide virtual connections among processes, users, and computer systems.
Users can send messages to another user's screen, send e-mail, browse web pages, log in
remotely, and transfer files from one user to another.
An operating system is a construct that allows user application programs to interact with the
system hardware. The operating system by itself does not perform useful work; rather, it provides an
environment in which different applications and programs can do useful work.
There are many problems that can occur while designing and implementing an operating system.
These are covered in operating system design and implementation.
There are basically two types of goals while designing an operating system. These are −
User Goals
The operating system should be convenient, easy to use, reliable, safe and fast according to the
users. However, these specifications are not very useful as there is no set method to achieve these goals.
System Goals
The operating system should be easy to design, implement and maintain. These are
specifications required by those who create, maintain and operate the operating system. But there is
no specific method to achieve these goals either.
Operating System Implementation
The operating system needs to be implemented after it is designed. Earlier, operating systems were
written in assembly language, but now higher-level languages are used. The first system not written in
assembly language was the Master Control Program (MCP) for Burroughs computers.
Advantages of Higher Level Language
There are multiple advantages to implementing an operating system using a higher-level
language: the code can be written faster, it is more compact, and it is easier to debug and understand.
Disadvantages of Higher Level Language
Using a high-level language for implementing an operating system leads to a loss in speed and an
increase in storage requirements. However, in modern systems only a small amount of the code needs
high performance, such as the CPU scheduler and memory manager, and the bottleneck routines in the
system can be replaced by assembly-language equivalents if required.
A process in memory is divided into the following sections −
1. Stack − The process stack contains temporary data such as method/function parameters, return
addresses and local variables.
2. Heap − This is memory dynamically allocated to a process during its run time.
3. Text − This contains the compiled program code, along with the current activity represented by the
value of the program counter and the contents of the processor's registers.
4. Data − This section contains the global and static variables.
When a process executes, it passes through different states. These stages may differ in different
operating systems, and the names of these states are also not standardized.
In general, a process can have one of the following five states at a time.
1. Start − This is the initial state when a process is first started/created.
2. Ready − The process is waiting to be assigned to a processor. Ready processes are waiting for the
processor to be allocated to them by the operating system so that they can run. A process may come
into this state after the Start state, or while running, when it is interrupted by the scheduler so the
CPU can be assigned to some other process.
3. Running − Once the process has been assigned to a processor by the OS scheduler, the process state is
set to running and the processor executes its instructions.
4. Waiting − The process moves into the waiting state if it needs to wait for a resource, such as user
input, or for a file to become available.
5. Terminated or Exit − Once the process finishes its execution, or is terminated by the operating
system, it is moved to the terminated state, where it waits to be removed from main memory.
1. Process State − The current state of the process, i.e., whether it is ready, running, waiting, etc.
2. Process Privileges − Required to allow or disallow access to system resources.
3. Process ID − Unique identification for each process in the operating system.
4. Pointer − A pointer to the parent process.
5. Program Counter − A pointer to the address of the next instruction to be executed for this process.
6. CPU Registers − The various CPU registers whose contents must be saved when the process leaves the
running state.
7. CPU Scheduling Information − Process priority and other scheduling information required to
schedule the process.
8. Memory Management Information − Information such as the page table, memory limits, and segment
table, depending on the memory system used by the operating system.
9. Accounting Information − The amount of CPU time used for process execution, time limits,
execution ID, etc.
10. I/O Status Information − A list of the I/O devices allocated to the process.
A Process Control Block (PCB) is a data structure maintained by the operating system for every
process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed
to keep track of a process, as listed in the table above −
1.15 Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of a multiprogramming operating system. Such operating
systems allow more than one process to be loaded into executable memory at a time, and the loaded
processes share the CPU using time multiplexing.
Categories of Scheduling
There are two categories of scheduling:
1. Non-preemptive: Here the CPU cannot be taken from a process until the process
completes execution. Switching occurs only when the running process
terminates or moves to a waiting state.
2. Preemptive: Here the OS allocates the CPU to a process for a fixed amount of
time, or until a higher-priority process becomes ready. The running process may
be switched from the running state to the ready state (or from the waiting state
to the ready state) so that the CPU can be given to another process.
In the simplest two-state process model, a process is in one of two states −
1. Running − When a new process is created, it enters the system in the running state.
2. Not Running − Processes that are not running are kept in a queue, waiting for their turn to execute.
Each entry in the queue is a pointer to a particular process, and the queue is implemented using a
linked list. The dispatcher works as follows: when a process is interrupted, it is transferred to the
waiting queue; if the process has completed or aborted, it is discarded. In either case, the dispatcher
then selects a process from the queue to execute.
Schedulers
Schedulers are special system software which handle process scheduling in various ways.
Their main task is to select the jobs to be submitted into the system and to decide which
process to run. Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
The three schedulers compare as follows −
2. Speed − The long-term scheduler is slower than the short-term scheduler; the short-term scheduler is
the fastest of the three; the medium-term scheduler's speed lies between the other two.
3. Degree of multiprogramming − The long-term scheduler controls the degree of multiprogramming;
the short-term scheduler provides less control over it; the medium-term scheduler reduces it.
5. Selection − The long-term scheduler selects processes from the job pool and loads them into memory
for execution; the short-term scheduler selects from among the processes that are ready to execute;
the medium-term scheduler can re-introduce a process into memory so that its execution can be
continued.
There are many operations that can be performed on processes. Some of these are process
creation, process pre-emption, process blocking, and process termination.
Process Creation
Processes need to be created in the system for different operations.
Process Blocking
The process is blocked if it is waiting for some event to occur. This event may be I/O as the I/O
events are executed in the main memory and don't require the processor. After the event is complete, the
process again goes to the ready state.
A diagram that demonstrates process blocking is as follows −
Process Termination
After the process has completed the execution of its last instruction, it is terminated. The
resources held by a process are released after it is terminated.
A child process can be terminated by its parent process if its task is no longer required. The child
process sends its status information to the parent process before it terminates. Also, when a parent
process is terminated, its child processes are terminated as well, since child processes cannot run
once their parent has been terminated.
Inter-process communication can be implemented by two methods −
Shared Memory
Message Passing
The key design issues in a message-passing system are −
Naming
Synchronization
Buffering
Client/server communication involves two components, namely a client and a server. There are
usually multiple clients in communication with a single server. The clients send requests to the server
and the server responds to the client requests.
There are three main methods to client/server communication. These are given as follows −
Sockets
Sockets facilitate communication between two processes on the same machine or different
machines.
Remote Procedure Calls (RPC)
These are interprocess communication techniques used for client-server based
applications. A remote procedure call is also known as a subroutine call or a function call.
Pipes
These are interprocess communication methods that contain two end points. Data is entered from
one end of the pipe by a process and consumed from the other end by the other process.
The two different types of pipes are ordinary pipes and named pipes.
Ordinary pipes only allow one way communication. For two-way communication, two pipes
are required. Ordinary pipes have a parent child relationship between the processes as the pipes can only
be accessed by processes that created or inherited them.
Named pipes are more powerful than ordinary pipes and allow two way communication. These
pipes exist even after the processes using them have terminated. They need to be explicitly deleted when
not required anymore.
1.19 Threads
Need of Thread:
o It takes far less time to create a new thread in an existing process than to create a
new process.
o Threads can share common data, so they do not need to use inter-process
communication.
o Context switching is faster when working with threads.
o It takes less time to terminate a thread than a process.
Types of Threads
In the operating system, there are two types of threads.
1. Kernel level thread.
2. User-level thread.
User-level thread
The operating system does not recognize user-level threads. User threads are easy to
implement, and they are implemented entirely by the user. The kernel knows nothing about
user-level threads.
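The data-sharing advantage of threads can be sketched with Python's threading module; the counter and worker names are illustrative, and the lock guards the shared update:

```python
# Threads of one process share its data: all four workers update the
# same counter with no inter-process communication needed.
import threading

counter = [0]                     # Shared data: visible to all threads.
lock = threading.Lock()

def worker():
    for _ in range(10000):
        with lock:                # Protect the shared counter.
            counter[0] += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()                     # Creating a thread is far cheaper than fork.
for t in threads:
    t.join()
print(counter[0])                 # → 40000
```

Doing the same with separate processes would require shared memory or message passing to combine the counts.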
IMPORTANT QUESTION
2 MARKS
1. Define Operating System.
2. Write the services of OS.
3. What are the levels of the Memory Hierarchy?
4. What are all the operations on process?
5. Define threads.
5 MARKS
1. Briefly explain the computer system operation.
2. Explain the I/O Structure and Storage Structure.
3. Explain system design and implementation.
4. Discuss about communication in client/server system.
5. Discuss the system components.
10 MARKS
1. Explain the types of Operating System.
2. Describe system calls and system programs.
3. Discuss about process state and PCB.
4. Explain process scheduling.
5. Explain IPC.
UNIT II
CPU SCHEDULING ALGORITHM AND PREVENTION
Scheduling of this kind is a fundamental operating-system function. Almost all computer resources are
scheduled before use.
2.1 CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to
be executed. The selection is carried out by the short-term scheduler (or CPU scheduler). The scheduler
selects from among the processes in memory that are ready to execute and allocates the CPU to one of them.
The selection of the process follows the scheduling algorithm in use.
Preemptive
The CPU can be taken away from the running process before it finishes, for example when a
higher-priority process becomes ready or the process's time slice expires.
Non preemptive
Once the CPU is allocated to a process, the process holds the CPU until it terminates or reaches a
waiting state.
Throughput
It is the total number of processes completed per unit time, or, put another way, the total amount of work done
in a unit of time. This may range from 10 per second to 1 per hour depending on the specific processes.
Turnaround Time
It is the amount of time taken to execute a particular process, i.e. The interval from time of submission of the
process to the time of completion of the process(Wall clock time).
Waiting Time
The sum of the periods a process has spent waiting in the ready queue to acquire control of the CPU.
Load Average
It is the average number of processes residing in the ready queue waiting for their turn to get into the CPU.
Response Time
Amount of time it takes from when a request was submitted until the first response is produced. Remember, it
is the time till the first response and not the completion of process execution (final response).
In general CPU utilization and Throughput are maximized and other factors are reduced for proper
optimization.
First Come First Serve (FCFS) scheduling simply schedules jobs according to their arrival
time. The job that enters the ready queue first gets the CPU first: the earlier the arrival time of a job, the
sooner it gets the CPU. FCFS can suffer from the convoy effect when the burst time of the first
process is the longest among all the jobs: every short job is held up behind it.
First Come First Serve works just like a FIFO (First In First Out) queue data structure, where the data element
added to the queue first is the one that leaves the queue first.
This is used in Batch Systems.
It's easy to understand and implement programmatically using a queue data structure, where a new process
enters through the tail of the queue, and the scheduler selects the process at the head of the queue.
A perfect real life example of FCFS scheduling is buying tickets at ticket counter.
Advantages
1. Suitable for batch system
2. FCFS is pretty simple and easy to implement.
3. Eventually, every process will get a chance to run, so starvation doesn't occur.
Disadvantages
1. The scheduling method is non-preemptive, so a process runs to completion once started.
2. Due to the non-preemptive nature of the algorithm, the convoy effect may occur: short processes get stuck
waiting behind one long process.
3. Although it is easy to implement, it is poor in performance, since the average waiting time is higher
compared to other scheduling algorithms.
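The FCFS policy above can be sketched as a small simulation. This is an illustrative sketch, not an OS
implementation; the process names, arrival times, and burst times are made up:

```python
def fcfs(processes):
    """Simulate FCFS: processes is a list of (name, arrival, burst).

    Returns (name, waiting, turnaround) per process in execution order.
    """
    time, schedule = 0, []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)   # CPU may sit idle until the job arrives
        time = start + burst         # non-preemptive: runs to completion
        schedule.append((name, start - arrival, time - arrival))
    return schedule

# Convoy effect: the long first job makes P2 and P3 wait a long time.
print(fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]))
# [('P1', 0, 24), ('P2', 23, 26), ('P3', 25, 28)]
```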
SHORTEST JOB FIRST (SJF)
In Shortest Job First scheduling, the process with the smallest CPU burst time is selected next.
Advantages
Short processes are executed first, followed by longer processes.
Throughput is increased, because more processes can be executed in less time.
Disadvantages:
The burst time of a process must be known to the CPU beforehand, which is not always possible.
Longer processes will have more waiting time, and may eventually suffer starvation.
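A non-preemptive SJF scheduler can be sketched as follows. This is an illustrative simulation with made-up
process names, arrival times, and burst times; it assumes burst times are known in advance, which, as noted
above, is not generally possible:

```python
def sjf(processes):
    """Non-preemptive SJF: among arrived processes, run the shortest burst.

    processes: list of (name, arrival, burst). Returns execution order.
    """
    pending = sorted(processes, key=lambda p: p[1])
    time, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                         # CPU idle until next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])  # smallest burst wins
        pending.remove(job)
        time += job[2]
        order.append(job[0])
    return order

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]))
# ['P1', 'P3', 'P2']  -- P3's short burst jumps ahead of P2
```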
ROUND-ROBIN
The round-robin (RR) scheduling algorithm is designed mainly for time-sharing systems. It is similar
to FCFS scheduling, but preemption is added to switch between processes. A small unit of time, termed a
time quantum or time slice, is defined.
A time quantum is usually from 10 to 100 milliseconds. The ready queue is treated as a circular queue:
the CPU scheduler goes around the ready queue, allocating the CPU to each process for an interval of up to
one time quantum.
A fixed time, called the quantum, is allotted to each process for execution.
Once a process has executed for the given time period, it is preempted and another process executes for its
time period.
Context switching is used to save states of preempted processes.
If the time quantum is very large, the RR scheduling algorithm behaves like FCFS; if the time quantum is very
small, RR is called processor sharing, which gives each process the illusion of having its own processor.
The central cost in RR scheduling is context switching: if the context-switch time is 10 percent of the time
quantum, then about 10 percent of CPU time is spent on context switching.
The ready queue is maintained as a circular queue, so when all processes have had a turn, then the
scheduler gives the first process another turn, and so on.
Advantages
1. It is practical to implement because it does not depend on knowing burst times in advance.
2. It doesn't suffer from the problem of starvation or the convoy effect.
3. All jobs get a fair allocation of the CPU.
Disadvantages
1. The higher the time quantum, the higher the response time in the system.
2. The lower the time quantum, the higher the context switching overhead in the system.
3. Deciding on a suitable time quantum is a genuinely difficult task.
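The circular-queue behaviour described above can be sketched with a deque. This is an illustrative
simulation with made-up process names and burst times; all processes are assumed to arrive at time 0:

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate RR: processes is a list of (name, remaining_burst).

    Returns a timeline of (name, time_slice_ends) entries.
    """
    queue = deque(processes)
    time, timeline = 0, []
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        time += slice_
        timeline.append((name, time))
        if remaining > slice_:
            # Preempted: unfinished work goes to the back of the queue.
            queue.append((name, remaining - slice_))
    return timeline

print(round_robin([("P1", 5), ("P2", 3)], quantum=2))
# [('P1', 2), ('P2', 4), ('P1', 6), ('P2', 7), ('P1', 8)]
```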
PRIORITY SCHEDULING
The scheduler considers the priority of processes. A priority is assigned to each process, and the CPU is
allocated to the highest-priority process. Equal-priority processes are scheduled in FCFS order. Priorities
are denoted by numbers, but there is no hard and fast rule about the direction of the numbering: some systems
use 0 as the lowest priority, others as the highest.
Priority scheduling suffers from a starvation problem: a low-priority process may be blocked
indefinitely, because every time a higher-priority process arrives it acquires the CPU while the low-priority
process keeps waiting in the queue. The aging technique gives us a solution to this starvation problem: the
priority of a process that has been waiting in the system for a long time is gradually increased.
Advantages
The priority of a process can be selected based on memory requirements, time requirements, or user
preference. For example, in a high-end game, the process that updates the screen can be given higher
priority so as to achieve better graphics performance.
Disadvantages:
A second scheduling algorithm is required to schedule processes that have the same priority.
In preemptive priority scheduling, a higher-priority process can preempt an already executing
lower-priority process. If a lower-priority process keeps waiting while higher-priority processes keep
arriving, starvation occurs.
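A non-preemptive variant can be sketched as follows. This is an illustrative sketch with made-up process
names; here a lower number means higher priority, and ties are broken in FCFS (submission) order, as
described above:

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling; lower number = higher priority.

    processes: list of (name, priority), in order of submission.
    Ties are broken FCFS via the original list index.
    """
    order, pending = [], list(processes)
    while pending:
        # min over (priority, index): index preserves FCFS order for ties.
        best = min(range(len(pending)), key=lambda i: (pending[i][1], i))
        order.append(pending.pop(best)[0])
    return order

print(priority_schedule([("P1", 3), ("P2", 1), ("P3", 3), ("P4", 2)]))
# ['P2', 'P4', 'P1', 'P3']  -- P1 precedes P3 because it was submitted first
```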
MULTILEVEL QUEUE SCHEDULING
This scheduling algorithm was created for situations in which processes are easily classified into
different groups, each with its own queue, for example:
1. System Processes
2. Interactive Processes
3. Interactive Editing Processes
4. Batch Processes
5. Student Processes
2.4 Semaphores
Semaphores are integer variables used to solve the critical-section problem by means of two atomic
operations, wait and signal, which are used for process synchronization.
The definitions of wait and signal are as follows
Wait
The wait operation waits while the value of its argument S is zero or negative; once S becomes positive,
it decrements S.
wait(S)
{
    while (S <= 0)
        ;   /* busy-wait until S becomes positive */
    S--;
}
Signal
The signal operation increments the value of its argument S.
signal(S)
{
S++;
}
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores. Details about these
are given as follows
Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain. They are used to
coordinate resource access, where the semaphore count is the number of available resources. When a
resource is added, the semaphore count is incremented; when a resource is removed, the count is
decremented.
Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation
succeeds only when the semaphore's value is 1 (setting it to 0), and the signal operation sets the value
back to 1. It is sometimes easier to implement binary semaphores than counting semaphores.
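The busy-wait definition above is conceptual; real systems provide blocking semaphores. As an illustrative
sketch (the worker function and counts are made up), Python's threading.Semaphore shows the same
wait/signal pattern, here as a counting semaphore that admits at most two threads at a time:

```python
import threading

# Counting semaphore initialized to 2: at most two threads may hold the
# "resource" simultaneously; others block in acquire() (the wait operation).
sem = threading.Semaphore(2)
results = []
results_lock = threading.Lock()

def worker(name):
    sem.acquire()              # wait(S): blocks while the count is 0
    with results_lock:
        results.append(name)   # stand-in for using the shared resource
    sem.release()              # signal(S): increments the count

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))         # all four workers eventually ran
```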
Advantages of Semaphores
Semaphores strictly enforce mutual exclusion, allowing only a limited number of processes into the
critical section, and blocking semaphores avoid the wasted CPU cycles of busy waiting.
Disadvantages of Semaphores
Wait and signal must be used in the correct order by every process; a missed or misplaced operation can
cause deadlock or violate mutual exclusion, and such errors are hard to detect.
Dining-Philosophers Problem :
The Dining Philosophers Problem states that K philosophers are seated around a circular table with one
chopstick between each pair of philosophers. A philosopher may eat only if he can pick up the two
chopsticks adjacent to him. A chopstick may be picked up by either of its adjacent philosophers, but not
both. This problem involves the allocation of limited resources to a group of processes in a deadlock-free
and starvation-free manner.
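One deadlock-free allocation strategy can be sketched with locks standing in for chopsticks: every
philosopher always picks up the lower-numbered chopstick first, which imposes a resource ordering and
breaks circular wait. This is an illustrative sketch, not the only known solution:

```python
import threading

N = 5                                        # philosophers and chopsticks
chopsticks = [threading.Lock() for _ in range(N)]
meals = []
meals_lock = threading.Lock()

def philosopher(i):
    # Resource ordering: acquire the lower-numbered chopstick first.
    # This breaks the circular-wait condition, so no deadlock can form.
    first, second = sorted((i, (i + 1) % N))
    with chopsticks[first]:
        with chopsticks[second]:
            with meals_lock:
                meals.append(i)              # stand-in for "eating"

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(meals))                         # every philosopher ate once
```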
Readers-Writers Problem:
In the readers-writers problem, many readers may access the shared data simultaneously, but a writer
requires exclusive access. Once a writer is ready, it performs its write; only one writer may write at a time.
Definition: A deadlock happens in an operating system when two or more processes each need, to complete
their execution, a resource that is held by another of the processes.
Under the standard mode of operation, any process may use a resource in only the below mentioned
sequence:
1. Request: If the request cannot be granted immediately (for example, because another process is
using the resource), then the requesting process must wait until it can obtain the resource.
2. Use: The process can run on the resource (like when the resource is a printer, its job/process
is to print on the printer).
3. Release: The process releases the resource (like, terminating or exiting any specific
process).
No Preemption
A resource cannot be forcibly preempted from a process; a process can only release a resource voluntarily.
For example, Process 2 cannot preempt Resource 1 from Process 1; Resource 1 is released only when Process 1
relinquishes it voluntarily after its execution is complete.
Circular Wait
A process is waiting for the resource held by the second process, which is waiting for the resource held by the third
process and so on, till the last process is waiting for a resource held by the first process. This forms a circular chain.
For example: Process 1 is allocated Resource2 and it is requesting Resource 1. Similarly, Process 2 is allocated
Resource 1 and it is requesting Resource 2. This forms a circular wait loop.
2.8 Deadlock Prevention
Eliminate Mutual Exclusion: It is not possible to violate mutual exclusion, because some resources, such
as the tape drive and printer, are inherently non-shareable.
Eliminate Hold and Wait: Allocate all required resources to a process before the start of its execution;
this eliminates the hold-and-wait condition, but leads to low device utilization. For example, if a process
requires a printer only at a later time and the printer is allocated before execution starts, the printer
remains blocked until the process completes. Alternatively, a process must release its current set of
resources before making a new request. This solution may lead to starvation.
Eliminate No Preemption : Preempt resources from the process when resources are required by other high-priority
processes.
Eliminate Circular Wait: Each resource is assigned a number, and a process may request resources only in
increasing order of numbering. For example, if process P1 has been allocated resource R5, a subsequent
request by P1 for R4 or R3 (numbered lower than R5) will not be granted; only requests for resources
numbered higher than R5 will be granted.
Detection and Recovery: Another approach to dealing with deadlocks is to detect and recover from them when
they occur. This can involve killing one or more of the processes involved in the deadlock or releasing some of the
resources they hold.
Banker’s Algorithm
Banker's Algorithm is a resource-allocation and deadlock-avoidance algorithm that tests every request
made by a process for resources. It checks whether granting the request leaves the system in a safe state:
if so, the request is allowed; if no safe state would result, the request is denied.
Timeouts: To avoid deadlocks caused by indefinite waiting, a timeout mechanism can be used to limit the amount
of time a process can wait for a resource. If the resource is unavailable within the timeout period, the
process can be forced to release its current resources and try again later.
Example:
Total resources in the system:
A B C D
6 5 7 6
Available system resources:
A B C D
3 1 1 2
Currently allocated resources per process:
   A B C D
P1 1 2 2 1
P2 1 0 3 3
P3 1 2 1 0
Maximum resources each process may request:
   A B C D
P1 3 3 2 2
P2 1 2 3 4
P3 1 3 5 0
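The safety check at the heart of the Banker's Algorithm can be sketched as follows, using the numbers from
the example above. Need = Maximum - Allocated; the algorithm repeatedly looks for a process whose need fits
within the currently available resources, lets it finish, and reclaims its allocation:

```python
def is_safe(available, alloc, maximum):
    """Banker's safety check. Returns (safe?, safe sequence)."""
    # Need = Maximum - Allocated, per process and resource type.
    need = {p: [m - a for m, a in zip(maximum[p], alloc[p])] for p in alloc}
    work, finished, sequence = list(available), set(), []
    while len(finished) < len(alloc):
        progressed = False
        for p in alloc:
            if p not in finished and all(n <= w for n, w in zip(need[p], work)):
                # p can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, alloc[p])]
                finished.add(p)
                sequence.append(p)
                progressed = True
        if not progressed:
            return False, []          # no process can proceed: unsafe state
    return True, sequence

alloc = {"P1": [1, 2, 2, 1], "P2": [1, 0, 3, 3], "P3": [1, 2, 1, 0]}
maximum = {"P1": [3, 3, 2, 2], "P2": [1, 2, 3, 4], "P3": [1, 3, 5, 0]}
print(is_safe([3, 1, 1, 2], alloc, maximum))
# (True, ['P1', 'P2', 'P3'])  -- the example state is safe
```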
Detection and Recovery: If deadlocks do occur, the operating system must detect and resolve them. Deadlock
detection algorithms, such as the Wait-For Graph, are used to identify deadlocks, and recovery algorithms, such
as the Rollback and Abort algorithm, are used to resolve them. The recovery algorithm releases the resources
held by one or more processes, allowing the system to continue to make progress.
Advantages of Deadlock Detection and Recovery:
1. Improved System Stability: Deadlocks can cause system-wide stalls, and detecting and resolving deadlocks
can help to improve the stability of the system.
2. Better Resource Utilization: By detecting and resolving deadlocks, the operating system can ensure that
resources are efficiently utilized and that the system remains responsive to user requests.
3. Better System Design: Deadlock detection and recovery algorithms can provide insight into the behavior of the
system and the relationships between processes and resources, helping to inform and improve the design of the
system.
Disadvantages of Deadlock Detection and Recovery:
1. Performance Overhead: Deadlock detection and recovery algorithms can introduce a significant overhead in
terms of performance, as the system must regularly check for deadlocks and take appropriate action to resolve
them.
2. Complexity: Deadlock detection and recovery algorithms can be complex to implement, especially if they use
advanced techniques such as the Resource Allocation Graph or Timestamping.
3. False Positives and Negatives: Deadlock detection algorithms are not perfect and may produce false positives
or negatives, indicating the presence of deadlocks when they do not exist or failing to detect deadlocks that do
exist.
4. Risk of Data Loss: In some cases, recovery algorithms may require rolling back the state of one or more
processes, leading to data loss or corruption.
IMPORTANT QUESTION
2 MARKS
1. Define deadlock.
2. What are the characteristics of deadlock?
3. Define semaphores.
4. What is turnaround time?
5. Difference between preemptive and non preemptive scheduling.
5 MARKS
1. Explain scheduling criteria.
2. Discuss the deadlock characteristics.
3. Briefly explain the CPU Scheduler.
10 MARKS
1. Discuss about the types of scheduling algorithms.
2. Briefly explain the semaphores.
3. Explain the classic problems of synchronization.
4. Explain deadlock prevention and avoidance.
5. Discuss about deadlock detection and recovery.