Operating System Notes As Per Syllabus
Objective
Course Content
Introduction to OS
Introduction to Linux
Shell Programming
Process management
Signals
Threads
Memory management
Virtual Memory
Deadlock
Semaphore
Inter process communication
(Syllabus link: https://www.cdac.in/index.aspx?id=DAC&courseid=0#)
Text Books
Operating System Concepts — Silberschatz, Galvin, Gagne (Wiley)
References:
Introduction to OS
▪ What is OS
▪ How is it different from other application software
▪ Why is it hardware dependent
▪ Different components of OS
▪ Basic computer organization required for OS
▪ Examples of well-known OS including mobile OS, embedded system OS, Real-Time OS, desktop OS, server machine OS etc.
▪ How are these different from each other and why
▪ Functions of OS
▪ User and Kernel space and mode
▪ Interrupts and system calls
What is an Operating System
An operating system is a program that manages the computer hardware. It also provides a basis for application programs and acts as an intermediary between the user and the computer hardware.
Different components
A computer system can be divided roughly into four components:
the hardware, the operating system, the application programs, and a user.
How is it different from other application software
Application software:
• Examples: Photoshop, VLC player etc.
• Developed using high-level languages such as C++, Java, Visual Basic.
OS:
• Examples: Microsoft Windows, Linux, macOS.
• Developed using C, C++ and assembly language.
Basic computer organization required for OS
Computer organization is concerned with the way the hardware components operate.
Input devices → processor (CPU) → Output devices
The central processing unit (CPU) is the component that processes data.
Types of OS
Multitasking OS
The processor (CPU) time being shared among multiple users is termed time sharing.
Mobile OS
Types of OS
Real-Time OS
A real-time operating system (RTOS) is an operating system with two key features: predictability and determinism. In an RTOS, repeated tasks are performed within a tight time boundary. Real-time systems are used when there are very strict time requirements, as in missile systems, air traffic control systems, robots, etc.
(Example: air bag deployment depends on the rapid response time of a real-time embedded system with a hard RTOS.)
Hard RTOS
These operating systems guarantee that critical tasks are completed within a range of time.
Soft RTOS
This operating system provides some relaxation in the time limit. For example: multimedia systems, digital audio systems etc.
Different types of OS
EMBEDDED OS
An embedded system can be thought of as computer hardware with software embedded in it. It can be an independent system or part of a larger system. An embedded system is a microcontroller- or microprocessor-based system designed to perform a specific task.
Examples: microwave ovens, washing machines and dishwashers, television sets, video game consoles, set-top boxes, ATMs, smart TVs etc.
DESKTOP OS
The desktop OS is the environment where the user controls a personal computer. Examples: Ubuntu, Windows 10.
SERVER OS
An operating system designed for usage on servers. It is utilized to give services to a large number of clients, and is a very advanced operating system that can serve several clients simultaneously.
Important functions of an Operating System
Process Management
Memory Management
File Management
Device Management
Protection and Security
Process Management
In a multiprogramming environment, the OS decides which process gets the processor when and for how much time. This function is called process scheduling.
An operating system does the following activities for processor management:
• Keeps track of the processor and the status of processes. The program responsible for this task is known as the traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates the processor when a process is no longer required.
• Provides mechanisms for process communication.
Memory Management
• The operating system handles the responsibility of storing any data, system programs, and user programs in memory. This function of the operating system is called memory management.
• Keeping track of which parts of memory are currently being used and by whom.
File Management
• A file system is normally organized into directories for easy navigation and usage.
• Allocates the resources.
Device Management
• Keeps track of all devices. The program responsible for this task is known as the I/O controller.
• Decides which process gets the device, when, and for how much time.
• De-allocates devices.
Protection and Security
• The operating system uses password protection and similar other techniques to protect user data. It also prevents unauthorized access to programs and data.
• Monitors overall system health to help improve performance. Records the response time between service requests and system responses to have a complete view of the system health.
User space and Kernel space
When the computer system runs user applications like creating a text document or using any application program, the system is in user space.
The system starts in kernel mode when it boots; after the operating system is loaded, it executes applications in user mode.
Dual mode of operation
User mode:
• The operating system puts the CPU in user mode when a user program is in execution.
• When the system runs a user application like creating a text document or using any application program, the system is in user mode.
• When the CPU is in user mode, programs don't have direct access to memory and hardware resources.
• The system will be in a safe state even if a program in user mode crashes.
Kernel mode:
• The operating system puts the CPU in kernel mode when it is executing in the kernel.
• At system boot time, the hardware starts in kernel mode. The operating system is then loaded and starts user applications in user mode.
• The CPU can execute certain instructions only when it is in kernel mode. These instructions are called privileged instructions.
• To provide protection to the hardware, we have privileged instructions which execute only in kernel mode.
System Calls
A system call is a method for a computer program to request a service from the kernel of the operating system on which it is running.
System Calls
System calls are required in the following situations:
• Creating and managing new processes.
• Creating a connection in the network, sending and receiving packets.
• Requesting access to a hardware device, like a mouse or a printer.
Introduction
to Linux
Linux Topics
Introduction to Linux
▪ Published under GNU General Public License
▪ Free and Open Source software: the users have the freedom to run, copy, distribute, study, change and improve the software.
FOSS: computer software that can be classified as both free software and open-source software.
• Free software
The freedom to run the software for any purpose Freedom to study
Freedom to change
Freedom to distribute the software
• Open-source software
Computer software with its source code made available with a license.
The copyright holder provides the rights to study, change, and distribute the
software to anyone and for any purpose.
GNU General Public License
• GNU Project
• A text-based interface that allows you to enter commands, execute them, and view the results.
• The Linux command line is provided by a program called the shell.
Shell
prompt
Running Commands
Command syntax:
– Example: ls --help
• file filename — display the file type of the file named filename. Example: file hello.txt
• Powering off or rebooting the system
• command --help — short explanation about how to use the command and a list of available options. Example: ls --help
• cp [options] file1 file2 dest — copy files; more than one file may be copied if dest is a directory. Example: cp -f hello.txt hello1.txt fold
• mv [options] file destination — move and/or rename files and directories. Example: mv old.txt new.txt (rename). mv [options] file1 file2 destination
groupadd, groupmod, groupdel — standard utilities for adding, modifying, and deleting groups.
Changing identities
– chmod 640 myfile
– chmod ugo+r myfile
Permission values (applied to a user category: u, g, o):
• 4 or r — read access is granted to the user category defined, to read a file or list a directory's contents
• 2 or w — write permission is granted to the user category, to write to a file or create and remove files from a directory
• 1 or x — execute permission is granted to the user category, to execute a program or change into a directory and do a long listing of the directory
•To find all available shells in your system type following command:
•$ cat /etc/shells
•echo $SHELL : Show which shell is used.
Software installation
Installing new software
▪ software comes in packages
▪Extra software may be found on your installation CDs or on the
Internet.
▪Package formats
–RPM packages
–DEB (.deb) packages
▪RPM
–The Red Hat Package Manager is a powerful package manager that can be used to install, update and remove packages
▪DEB (.deb) packages
–package format is the default on Debian GNU/Linux
Installing package using RPM

Process
• A process is a program in execution; unlike a program, it is an active entity.
• A single program can create many processes when run multiple times.
How does a process look like in memory?
• process memory is divided into four sections ─ stack, heap, text and data.
Stack
• This contains temporary data such as function parameters, return addresses and local variables.
Heap
• This contains dynamically allocated memory to a process during its run time.
Text
• The code segment, also known as the text section, comprises the compiled program code.
Data
• This section contains the global and static variables.
• Ready: Process is Ready to run and is waiting for CPU to execute it.
Processes that are ready for execution by the CPU are
maintained in a queue for ready processes
• When Linux starts, it runs a single program, the prime ancestor and process number 1, init.
Process Table and Process Control Block (PCB)
• The OS maintains a process table with one entry, the process control block (PCB), for every process.
• The PCB stores information about a process, such as its state, program counter, registers, and the memory it is loaded with.
• The PCB is removed when the process terminates.
Process representation
• The kernel stores the list of processes in a circular doubly linked list called the task list.
Context switch
• A context switch is the removal of the running process from the CPU and the selection of another process.
• In a uniprogramming system , time spent waiting for I/O is wasted and CPU is free during this
time.
• In multiprogramming systems, one process can use CPU while another is waiting for I/O.
• Schedulers select the jobs to be submitted into the system and decide which process to run.
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Process schedulers
Long-term scheduler (Job scheduler)
• Selects which processes should be brought into the ready queue.
• Selects a process from secondary memory and moves it to the ready queue in memory.
• Invoked infrequently (seconds, minutes).
• Controls the degree of multiprogramming.
Short-term scheduler (CPU scheduler)
• Selects which process should be executed next and allocates the CPU.
• Allocates the CPU to a process in memory.
• Invoked frequently.
Medium-term scheduler
I/O-bound process –
If a process doesn't carry out many calculations, but it does do a lot of input/output operations, it's called an I/O-
bound process.
CPU-bound process –
If a process does a lot operations using CPU, it's called a CPU-bound process
• The dispatcher is responsible for loading the process selected by the short-term scheduler onto the CPU. This involves:
1) Switching context.
• When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored into its process control block. After this, the state for the process to run next is loaded from its own PCB and used to set the PC, registers, etc.
Preemptive scheduling
The scheduling in which a running process can be interrupted if a high-priority process enters the queue is called preemptive scheduling.
non-preemptive scheduling
The scheduling in which a running process cannot be interrupted by any other process is
called non-preemptive scheduling. Any other process which enters the queue has to wait
• Priority Scheduling
• Throughput – # of processes that complete their execution per time unit. For long
processes, this rate may be one process per hour; for short transactions, it may be ten
processes per second
• Turnaround time – amount of time to execute a particular process. The interval from the
time of submission of a process to the time of completion is the turnaround time.
Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the
ready queue, executing on the CPU, and doing I/O.
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time it takes from when a request was submitted until the first
response is produced, not output (for time-sharing environment)
FCFS scheduling example (TAT = CT - AT, WT = TAT - BT):

PNO  AT  BT  CT  TAT  WT
1    0   4    4    4   0
2    1   3    7    6   3
3    2   1    8    6   5
4    3   2   10    7   5
5    4   5   15   11   6
Shortest-Job-Next (SJN) Scheduling
• Also called SJF.
• From the processes that have arrived, it chooses the shortest job.
• Both preemptive and non-preemptive variants exist.

Exercise (TAT = CT - AT, WT = TAT - BT):

PNO  AT  BT  CT?  TAT?  WT?
1    1   7
2    2   5
3    3   1
4    4   2
5    5   8
Non-preemptive:

PNO  AT  BT  CT  TAT  WT
1    1   7    8    7   0
2    2   5   16   14   9
3    3   1    9    6   5
4    4   2   11    7   5
5    5   8   24   19  11
Priority scheduling
• The CPU is allocated to the process with the highest priority.
Round Robin scheduling
• Once a process is executed for a given time period (the time quantum), it is preempted and added to the end of the ready queue.
Example of RR with Time Quantum = 2

PROCESS  AT  BT
P1        0   4
P2        1   5
P3        2   2
P4        3   1
P5        4   6
P6        6   3

Execution order: P1-P2-P3-P1-P4-P5-P2-P6-P5-P2-P6-P5
Time Quantum and Context Switch
Process creation
fork()
• Creates a new process by duplicating the calling process.
• The new process, the child, is an exact duplicate of the calling process.
• The calling process is the parent.
Return value:
• Negative value: creation of a child process was unsuccessful.
• Zero: returned to the newly created child process.
• Positive value: returned to the parent or caller (the process identifier (pid) of the child process in the parent).
9/28/2021
Process creation
exec() — #include <unistd.h>
• Replaces the current running process with a new process.
• An exec function replaces the current process with a new process specified by the path or file argument.
• fork creates a new process which is a copy of the one that calls it, while exec replaces the current process image with another (different) one.
wait() — #include <sys/wait.h>
pid_t wait(int *status_ptr);   e.g.  pid = wait(&status);
• The parent process waits for termination of a child process.
• The call returns status information and the pid of the terminated process: wait(&status)
– The status value is zero if the child process explicitly returns zero status.
– If it is not zero, it can be analyzed with the status analysis macros, e.g. WEXITSTATUS(status).
– The status_ptr pointer may also be NULL; wait(NULL) ignores the child's return status.
• If no parent waiting (did not invoke wait()) process is a zombie
• If parent terminated without invoking wait , process is an orphan
• A process whose parent process no longer exists, i.e. either finished or terminated without waiting for its child process to terminate, is called an orphan process.
Zombie and orphan process
Zombie
A zombie process is a process whose execution is completed but it still has an entry in the process table. Zombie processes
usually occur for child processes, as the parent process still needs to read its child’s exit status. Once this is done using the
wait system call, the zombie process is eliminated from the process table.
Orphan
A process whose parent process no longer exists, i.e. either finished or terminated without waiting for its child process to terminate, is called an orphan process.
End
Operating System:
Signals & Threads
Threads
signal
• When a signal is sent, the operating system interrupts the target process’ normal flow
of execution to deliver the signal. If the process has previously registered a signal
handler, that routine is executed. Otherwise, the default signal handler is executed.
Signal Names and Values
• kill -l
Sending Signals
When you press the Ctrl+C key, a SIGINT is sent to the process and, as per the defined default action, the process terminates.
Registering a handler with signal(): the first argument is the signal number; the second is a reference to a handler function whose first argument is an int and which returns void.
• The default action for a signal is the action that a process performs when it receives a signal.
⚫ A thread is a path of execution within a process; a process can contain multiple threads.
⚫ MS Word uses multiple threads: one thread to format the text, another thread to process inputs, etc.
• Threads are not independent of one another like processes are, and as a result threads
share with other threads their code section, data section, and OS resources (like open
files and signals). But, like process, a thread has its own program counter (PC), register
set, and stack space.
Advantages of Thread over Process
4. Resource sharing: Resources like code, data, and files can be shared
among all threads within a process.
Note: stack and registers can’t be shared among the threads. Each thread
has its own stack and registers.
Types of threads
⚫ User level thread
⚫ management done by a user-level threads library
⚫ Three primary thread libraries:
⚫ POSIX Pthreads (POSIX: Portable Operating System Interface)
⚫ Windows threads
⚫ Java threads
⚫ Kernel level thread
⚫ kernel threads are implemented by OS.
Multi threading models
Many operating systems support kernel thread and user thread
in a combined way
⚫ Many to many model.
⚫ Many to one model.
⚫ one to one model
Many to many
⚫ In this model, multiple user threads are multiplexed to the same or a smaller number of kernel-level threads.
⚫ The number of kernel-level threads is specific to the machine.
⚫ The advantage of this model is that if a user thread is blocked, we can schedule other user threads onto other kernel threads.
⚫ Thus, the system doesn't block if a particular thread is blocked.
Many to One Model
Multiple user threads are mapped to one kernel thread.
In this model, when a user thread makes a blocking system call, the entire process blocks.
As we have only one kernel thread, and only one user thread can access the kernel at a time, multiple threads are not able to access a multiprocessor at the same time.
One to One Model
⚫ In this model there is a one-to-one relationship between kernel and user threads.
⚫ In this model multiple threads can run on multiple processors.
⚫ The problem with this model is that creating a user thread requires creating the corresponding kernel thread.
Thread usage
⚫ Eg: pthread_create(&tid2, NULL, thread2, NULL);
Thread termination and waiting
9/29/2021
Operating System: Process Synchronisation
Semaphores
Process Synchronization
• The Critical section code can be accessed by only one process at a time
• In the Critical section, there are many variables and functions that are shareable among
different processes.
Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion: only one process can execute in its critical section at a time.
• Progress: if a process is not executing in its own critical section, then it should not stop any other process from entering its critical section.
• Bounded Waiting: each process must have a limited waiting time; it should not wait indefinitely to enter its critical section.
Semaphore types:
• Binary semaphores (the value alternates between S=1 and S=0).
• Counting semaphores.
Counting semaphore
Semaphore implementation
POSIX Semaphore
Semaphores in Linux
#include <semaphore.h>
POSIX semaphore: sem_init(), sem_wait(), sem_post(), sem_getvalue(), sem_destroy(); semaphore datatype sem_t
System V semaphore: semget(), semop(), semctl()
Semaphore implementation
sem_init(): initialize an unnamed semaphore
#include <semaphore.h>
int sem_init(sem_t *sem, int pshared, unsigned int value);
pshared:
• indicates whether this semaphore is to be shared between the threads of a process, or between processes
• if pshared is 0, the semaphore is shared between the threads of a process; if nonzero, it is shared between processes
value: specifies the initial value for the semaphore.
sem_init() returns 0 on success; on error, -1 is returned.
sem_wait(sem_t *sem): lock a semaphore
• If the semaphore's value is greater than zero, the value is decremented and the semaphore is locked. If the semaphore currently has the value zero, the call blocks until the value becomes greater than 0.
Mutexes are used to protect data or other resources from concurrent access
Lock a mutex.
• If the buffer is full, then the producer shouldn't be allowed to put any data into the buffer.
• If the buffer is empty, then the consumer shouldn't be allowed to take any data from the buffer.
• The producer and consumer should not access the buffer at the same time.
These problems can be solved with the help of semaphores.
Thread Implementation
To create a thread:
#include <pthread.h>
pthread_create(thread, attr, start_routine, arg)
• thread: the thread ID of the new thread is stored here.
To compile in Linux, link with the pthread library (e.g. gcc prog.c -o prog -pthread).
Operating System:
Memory
management
Memory management
Functionality of memory management
Keep track of every memory allocation
Track whether a memory location is allocated or not.
Track how much memory is allocated.
It takes the decision which process will get memory and when.
Updates the status of memory location when it is freed or allocated.
Protection : ensuring that user address space should not use kernel address space.
Memory management
• How a process in secondary memory is allocated to main memory.
• Address translation: the logical address generated by the CPU can be used to access secondary memory, but to access main memory a physical address is required.
Goals of memory management
• Space utilization.
• Fragmentation: there is a chance of losing memory due to fragmentation; keep fragmentation as small as possible.
• Run larger programs in smaller memory areas (e.g. run a 1000 KB program in a 500 KB area); done by using virtual memory.
How a process in secondary memory is allocated to main memory
Fixed size partitioning
• For memory allocation, memory is divided into fixed-size partitions; the partitions may be of different sizes, but each partition's size is fixed and cannot be changed.
• When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.
• The degree of multiprogramming is bound by the number of partitions (1 process = 1 partition).
Variable size partitioning
• A large block of memory is available; as each process arrives, memory is allocated to it.
• The memory is not divided into fixed-size partitions, so it does not suffer from internal fragmentation.
• Hole: a block of available memory; holes of various sizes are scattered throughout memory.
• When a process arrives, it is allocated memory from a hole large enough to accommodate it.
Dynamic Storage-Allocation
FIRST | BEST | WORST | algorithms
In both fixed size and variable partition schemes there are three algorithms which can be used for allocation: first fit, best fit, worst fit.
• First-fit:
•Allocate the first hole that is big enough
• Best-fit:
• Allocate the smallest hole that is big enough; must search entire list, unless ordered by size
•Produces the smallest leftover hole
• Worst-fit:
•Allocate the largest hole; must also search entire list
•Produces the largest leftover hole
First-fit and best-fit better than worst-fit in terms of speed and storage utilization
First fit
• First fit: searches memory starting from the beginning and selects the first hole that is big enough to hold the request.
Example — holes: 50, 25, 125, 300, 300, 50, 600; requests (in order): P1-300, P2-25, P3-125, P4-50.
Result: P1 → the first 300 hole, P2 → the 50 hole, P3 → the 125 hole, P4 → the second 300 hole.
Best fit
• Searches the entire list, then chooses the smallest hole that is big enough to allocate the process.
Compaction
• Compaction is one of the solutions to external fragmentation.
• It moves all the processes towards one end of the memory and all the available free space towards the other end of the memory, so that the free space becomes contiguous.
Address translation in contiguous memory allocation
• The CPU accesses main memory, but the address it generates is a logical address, which can be used to access secondary memory.
• Main memory is accessed using physical addresses, so the logical address has to be converted to a physical address.
• In a contiguous policy this is easy: a process is taken from secondary memory in a contiguous fashion and placed in main memory, so knowing the base address we can calculate the remaining addresses.
• In non-contiguous allocation it is more difficult, since the process is divided into different fragments placed across memory.
Logical vs. Physical Address Space
Numericals: with a relocation register (RR) and a limit register (LR), a logical address smaller than LR is valid and maps to physical address = logical address + RR.
Paging
• The CPU generates a logical address, which can be used to access secondary memory.
• The logical address is divided into a page number (p) and an instruction offset (d).
• Say the CPU generates an address for page number 2, instruction (offset) number 24; we can find it in main memory in different ways.
Method one
• The base address of the process (the address of the first page) is given.
• In this approach the first page contains a pointer to the second page, and the second page contains a pointer to the next, like a linked list, so there is no need to know all the page numbers.
• By looking at page 1 we can find where page 2 is stored and then find instruction 24.
• But with this mechanism access will be very slow.
Page table
Method 2: In this method we have a page table, a data structure (not hardware) which contains the base address of each page of a process in main memory.
• Each process has its own separate page table.
• The page table has as many entries as the process has pages.
• The page table is an index to the frame number where each page is stored in main memory.
• When the CPU generates a logical address, say page number 2 and instruction offset 35, it accesses the page table.
• From the page table, at index p = 2, we get the frame number (the base address where the page is stored in main memory); add it to d (the instruction offset) to get the physical address used to access main memory.
• How does the CPU know the address of the page table? From the PTBR, a register holding the address of the page table; it is stored in the PCB (Process Control Block).
Advantage
• The paging mechanism was actually introduced to reduce external fragmentation.
TLB (Translation Look-aside Buffer)
• When a logical address is generated by the CPU, its page number is presented to the TLB.
• If the page number is found (TLB hit), its frame number is immediately available and is used to access memory.
• The TLB contains only a few of the page-table entries.
Disadvantage
• If the page number is not in the TLB (TLB miss), the page table is accessed, the frame number is obtained, and memory is accessed.
• In addition, the page number and frame number are added to the TLB, so that they will be found quickly on the next reference.
• If the TLB is already full of entries, the operating system must select one for replacement.
• During a context switch the TLB is cleared and the new process' pages are kept in the TLB.
10/4/2021 KNOWLEDGE RESOURCE CENTRE
Virtual memory
When RAM is low, a system with virtual memory looks at RAM for areas that have not been used recently and copies them onto the hard disk. This frees up space in RAM to load the new application.
The area of the hard disk that stores the RAM image is called a page file. It holds pages of RAM on the hard disk, and the operating system moves data back and forth between the page file and RAM. On older Windows machines, page files had a .SWP extension.
Demand Paging
The process of loading a page into memory on demand (whenever a page fault occurs) is known as demand paging.
1. If the CPU tries to refer to a page that is currently not available in main memory, it generates an interrupt indicating a memory access fault (page fault).
2. The OS puts the interrupted process in a blocked state. For execution to proceed, the OS must bring the required page into memory.
3. The OS will search for the required page in the logical address space.
4. The required page will be brought from logical address space to physical address space.
5. Page replacement algorithms are used to decide which page to replace in physical address space.
6. A signal is sent to the CPU to continue program execution, and the process is placed back into the ready state.
Dirty Bit
In order to reduce the page fault service time, a special bit called the dirty bit is associated with each page. The dirty bit is set to 1 whenever the page is modified. When selecting a victim page using a page replacement algorithm, this value is tested; if it is set to 1, the page has been modified after being swapped in, so the page has to be written back to secondary memory.
Page replacement algorithms
1. Find the location of the page requested by the ongoing process on the disk.
2. Find a free frame; if there is one, use it.
3. If there is no free frame, use a page-replacement algorithm to select an existing (victim) frame to replace.
4. Write the victim frame to disk. Change all related page tables to indicate that this page is no longer in memory.
5. Move the required page and store it in the frame. Adjust all related page and frame tables to indicate the change.
FIFO page replacement
• This is the simplest page replacement algorithm. The operating system keeps track of all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.
• (Worked example in the slides: the same reference string run with 3 frames and with 4 frames.)
• A process that is spending more time paging than executing is said to be thrashing. In other
words it means, that the process doesn't have enough frames to hold all the pages for its
execution, so it is swapping pages in and out very frequently to keep executing. Sometimes, the
pages which will be required in the near future have to be swapped out.
DEADLOCK IN OS
DEADLOCK
Deadlock is a situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.
Deadlock handling
• Deadlock detection and recovery: allow the system to enter a deadlocked state, detect it, and recover.
• Ignore the problem altogether: pretend that deadlocks never occur in the system.
NECESSARY CONDITIONS FOR DEADLOCKS
▪ Mutual Exclusion
A resource can only be shared in a mutually exclusive manner: only one process can access the resource at a time.
▪ Hold and Wait
A process waits for some resources while holding another resource at the same time.
▪ No preemption
A resource cannot be forcibly taken away from a process. A resource can be released only voluntarily by the process holding it, after that process has finished its task.
▪ Circular Wait
All the processes must be waiting for the resources in a cyclic manner so that the last process
is waiting for the resource which is being held by the first process
Deadlock can happen only when mutual exclusion, hold and wait, no preemption, and circular wait all hold simultaneously.
To ensure that deadlocks never occur, the system can use either a
deadlock prevention or a deadlock-avoidance scheme.
DEADLOCK PREVENTION
▪ Deadlock can be prevented by eliminating any of the necessary conditions for deadlock.
Eliminate Mutual Exclusion
▪ If resources can be made sharable rather than mutually exclusive, deadlock can be prevented.
Eliminate No Preemption
▪ Preempt resources from a process when those resources are required by other, higher-priority processes.
Eliminate Hold and Wait:
▪ Processes must be prevented from holding one or more resources while simultaneously waiting for one or more others.
Eliminate Circular Wait
▪ Assign a number to each resource and allow processes to request resources in increasing order only. This ensures that no cycle of waiting processes can form.
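Resource ordering can be sketched with ordinary locks; the resource IDs and helper names below are hypothetical, chosen just to illustrate the idea. Because every task acquires locks in increasing-ID order, no cycle of waits can ever form.

```python
import threading

# Hypothetical resources, each tagged with a fixed global number.
resources = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(*resource_ids):
    """Acquire the requested resources in increasing-ID order,
    enforcing the global ordering that rules out circular wait."""
    acquired = []
    for rid in sorted(resource_ids):   # global ordering enforced here
        resources[rid].acquire()
        acquired.append(rid)
    return acquired

def release(held_ids):
    for rid in reversed(held_ids):
        resources[rid].release()

# Two tasks may both want resources 1 and 3, and may ask for them in
# different orders, but locks are always taken as 1 then 3, so the
# tasks can never deadlock with each other.
held = acquire_in_order(3, 1)
print(held)        # [1, 3]
release(held)
```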
Deadlock Avoidance
• A request for a resource is granted only if the resulting state of the system does not lead to deadlock.
• For every resource allocation, the system checks whether the resulting state is safe or unsafe.
• This is done using the Banker's algorithm.
BANKERS ALGORITHM
• The Banker's algorithm is a resource-allocation and deadlock-avoidance algorithm that tests every request a process makes for resources: it checks whether granting the request leaves the system in a safe state. If it does, the request is allowed; if no safe state would remain, the request is denied.
• The algorithm is named after the way a bank decides whether a loan can be sanctioned: the bank never allocates its money in such a way that it can no longer satisfy the needs of all its customers. The bank always tries to stay in a safe state.
BANKERS ALGORITHM
Available:
the number of currently available resources of each type.
Allocation:
the number of resources of each type currently allocated to each process.
Max:
the maximum demand of each process in the system.
Need (Remaining):
the remaining resource need of each process (Need = Max - Allocation).
BANKERS ALGORITHM: total resources A=10, B=5, C=7

Process   Allocation   Max Need    Need = Max - Allocation
          A  B  C      A  B  C     A  B  C
P1        0  1  0      7  5  3     7  4  3
P2        2  0  0      3  2  2     1  2  2
P3        3  0  2      9  0  2     6  0  0
P4        2  1  1      4  2  2     2  1  1
P5        0  0  2      5  3  3     5  3  1

Initially Available = (3, 3, 2). Running the processes in the order
P2 → P4 → P5 → P1 → P3, the available vector grows step by step:
(5,3,2) → (7,4,3) → (7,4,5) → (7,5,5) → (10,5,7),
so P2 → P4 → P5 → P1 → P3 is a safe sequence.
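The safety check of the Banker's algorithm can be sketched in Python; this is a minimal version for the worked example above (the function name is mine, and processes are numbered from 1 to match the table).

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return a safe sequence of process
    numbers, or None if the state is unsafe."""
    n = len(allocation)
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(len(work))):
                # Pretend P(i+1) runs to completion and returns its resources.
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finished[i] = True
                sequence.append(i + 1)
                progressed = True
        if not progressed:
            return None            # no runnable process: unsafe state
    return sequence

# Data from the worked example (resource types A, B, C; total 10, 5, 7):
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maxneed    = [[7,5,3], [3,2,2], [9,0,2], [4,2,2], [5,3,3]]
need = [[maxneed[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
available = [3, 3, 2]              # total (10,5,7) minus allocated totals
print(is_safe(available, allocation, need))   # [2, 4, 5, 1, 3]
```

The printed sequence [2, 4, 5, 1, 3] is exactly the safe sequence P2 → P4 → P5 → P1 → P3 from the table.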
DEADLOCK DETECTION AND RECOVERY IN OS
• To get rid of deadlocks, the OS periodically checks the system for deadlock.
• If it finds a deadlock, the OS recovers the system using a recovery technique.
• The OS can detect deadlocks with the help of the resource-allocation graph.
DEADLOCK DETECTION
▪ Deadlock detection can be done by checking for a cycle in the resource-allocation graph. When every resource has a single instance, the presence of a cycle in the graph is a sufficient (and necessary) condition for deadlock; with multiple instances per resource, a cycle is necessary but not sufficient.
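For single-instance resources, detection reduces to finding a cycle in the wait-for graph (the resource-allocation graph with the resource nodes collapsed away). A depth-first-search sketch, with hypothetical process names:

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as
    {process: [processes it is waiting on]}."""
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:     # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits on P2, P2 waits on P3, P3 waits on P1: deadlock.
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))   # True
# Break the cycle and the deadlock disappears.
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))       # False
```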
DEADLOCK RECOVERY
Recovery methods:
1. Killing processes: either kill all the processes involved in the deadlock, or kill them one by one, checking for deadlock after each kill and repeating until the system recovers.
2. Resource preemption: resources are preempted from the processes involved in the deadlock and allocated to other processes, giving the system a chance to recover from the deadlock.
STARVATION
Deadlock vs. starvation:
▪ In deadlock, a set of processes block each other; the resources are blocked by the deadlocked processes themselves.
▪ In starvation, a low-priority process waits for an indefinite time while resources are continuously utilized by high-priority processes.
▪ Starvation can be resolved by aging, in which the priority of a long-waiting process is gradually increased.
Approaches to Interprocess Communication
The main approaches to implementing interprocess communication are the following:
• Pipe
A pipe is a data channel that is unidirectional. Two pipes can be used to create
a two-way data channel between two processes. This uses standard input and
output methods. Pipes are used in all POSIX systems as well as Windows
operating systems.
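A minimal pipe sketch in Python, using `os.pipe` and `os.fork` (so it assumes a POSIX system; Windows would use a different API). The parent reads what the child writes through the unidirectional channel.

```python
import os

def pipe_demo():
    """Parent reads what the child writes through a unidirectional pipe."""
    read_fd, write_fd = os.pipe()
    pid = os.fork()                 # POSIX only
    if pid == 0:                    # child: uses only the write end
        os.close(read_fd)
        os.write(write_fd, b"hello from child")
        os._exit(0)
    # parent: uses only the read end
    os.close(write_fd)
    data = os.read(read_fd, 1024)
    os.close(read_fd)
    os.waitpid(pid, 0)
    return data.decode()

print(pipe_demo())                  # hello from child
```

Closing the unused ends in each process is the conventional discipline: it lets the reader see end-of-file once the writer finishes.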
• Shared Memory
Shared memory is the memory that can be simultaneously accessed by multiple
processes. This is done so that the processes can communicate with each other. All
POSIX systems, as well as Windows operating systems use shared memory.
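A small shared-memory sketch using Python's `multiprocessing.shared_memory` (Python 3.8+). For brevity the second handle is opened in the same process, but attaching by name works identically from another process.

```python
from multiprocessing import shared_memory

# Create a named shared-memory block and write into it.
shm = shared_memory.SharedMemory(create=True, size=32)
shm.buf[:5] = b"hello"

# A second handle attaches to the same block by name and reads the
# data directly, with no copy through the kernel.
other = shared_memory.SharedMemory(name=shm.name)
data = bytes(other.buf[:5])
print(data)                         # b'hello'

other.close()
shm.close()
shm.unlink()                        # free the block once everyone is done
```

Because both handles map the same physical memory, the reader sees the writer's bytes immediately; real programs add their own synchronization (e.g., a semaphore) on top.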
• Message Queue
Multiple processes can read and write data to the message queue without being
connected to each other. Messages are stored in the queue until their recipient
retrieves them. Message queues are quite useful for interprocess communication and
are used by most operating systems.
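The kernel facility on POSIX systems is the `mq_*` family of calls; the idea can be sketched in Python with `multiprocessing.Queue`, where the message sits in the queue until the receiver retrieves it.

```python
from multiprocessing import Process, Queue

def producer(q):
    q.put("report ready")           # message is stored in the queue

def message_queue_demo():
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()
    msg = q.get()                   # consumer retrieves it when it likes
    p.join()
    return msg

if __name__ == "__main__":
    print(message_queue_demo())     # report ready
```

The two processes never connect to each other directly; the queue decouples them in time, which is the defining property of message-queue IPC.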
Named Pipe or FIFO