Operating System Notes


DSOS (CS 201)

B.Tech 4th Sem (ECE)


System Software
System software is software that provides a platform for other software to execute. It coordinates and controls the functions and procedures of the computer hardware, thereby enabling interaction between hardware, software, and the user.
In other words, system software acts as a middleman that ensures communication between hardware, software, and the user.

System software is most commonly built by computer manufacturers.
Some examples of system software are as follows:
Operating system: enables communication between hardware, system programs, and other applications.
Utility software: ensures optimum functionality of devices and applications, e.g. antivirus and disk-formatting software.
Computer language translators: translate high-level languages into low-level machine code.
Device drivers: enable device communication with the OS and other programs.
Firmware: enables device control and identification.
Some important features of system software are:
• Written in a low-level language
• Operates in close proximity to the hardware
• Fast
• Difficult to modify
Language Processors
Compiler: software whose main job is to translate code written in one language into another language without changing the semantics or meaning of the program.
A compiler is also expected to make the target code efficient and optimized in terms of time and space.
A compiler performs the following operations while translating a program from one language to another:
• preprocessing
• lexical analysis
• parsing
• semantic analysis (syntax-directed translation)
• conversion of input programs to an intermediate
representation
• code optimization and code generation.
Some examples of compiler are gcc(C compiler), g++
(C++ Compiler ), javac (Java Compiler) etc.
Interpreter: a computer program that directly executes instructions written in a programming or scripting language without first translating them into another language or low-level machine code.
It does not require the program to be compiled into a machine-language program; instead, it translates high-level instructions into an intermediate form and executes that. This lets an interpreter start running a program faster than a compiler, although overall execution is usually slower.
• An interpreter translates and executes the program until the first error is met, and stops executing when an error occurs. This makes debugging easy, but it increases the possibility of latent bugs, because code paths that are never executed are never checked for errors.
• Some examples of programming/scripting languages that commonly use interpreters are PHP, Python, Ruby, etc.
Assembler: a program that converts assembly language into low-level machine code.
An assembler takes the basic commands and operations of an assembly language and converts them into binary code specific to a type of processor.
• Assemblers produce executable code similar to that produced by compilers.
• However, assemblers are much simpler, as they only convert low-level assembly code to machine code.
• Assembly languages are specific to a processor's instruction set, so assembly languages for different processors may differ.
Linker: a computer program that links one or more object files generated by a compiler and combines them into one executable program.
The linker creates a single executable file by resolving the symbolic references it finds as it reads each object file.
Loader: a program that loads executable files or machine code into system memory for execution.
It is part of the operating system (OS).
Loading a program involves calculating the size of the program (instructions and data) and creating memory space for it.
The loader initializes various registers to initiate execution and reads the contents of the executable file into memory.
Once the contents of the executable file are loaded, the operating system starts the program by passing control to the loaded program code.
All operating systems that support loading have a loader to load programs into main memory.
Operating Systems
• A program that acts as an intermediary between a
user of a computer and the computer hardware.
• A set of programs that coordinates all activities
among computer hardware resources.
• Operating system goals:
– Execute user programs and make solving user
problems easier.
– Make the computer system convenient to use.
– Use the computer hardware in an efficient
manner.
Main functions of an Operating System
• Start up the computer
• Administer security
• Control network
• Access the web
• Monitor performance and provide
housekeeping services
• Schedule jobs and configure devices
• Memory management
• Process management
• Provide user interface
Operating System Goals
Main goals are
– Efficiency
– Robustness
– Scalability
– Extensibility
– Portability
– Security
– Protection
– Interactivity
– Usability
Location of Operating System
• Resides on a ROM chip in handheld devices such as PDAs and mobile phones.
• Resides on the hard disk in most computers.
Types of Operating System
Real-Time OS: Installed in special purpose
embedded systems like robots, cars, and modems.
Single-user and single-task OS: Installed on single-
user devices like phones.
Single-user and multitask OS: Installed on
contemporary personal or desktop computers
such as MS-Windows 9X, MS-Windows NT
workstation, MS-Windows 2000 Professional, MS-
Windows XP Professional.
Multi-user OS: Installed in network environments
where many users have to share resources. Server
OSs are examples of multi-user operating systems.
Network OS: Used to share resources such as files,
printers in a network setup such as MS-Windows
NT Server, Windows 2000 Server, MS-Windows
2000 Advance Server, MS-Windows 2003 Server.
Internet/Web OS: Designed to run on the browser
that is online.
Mobile OS: Designed to run on mobile phones,
tablets and other mobile devices.
Examples of Operating System
Popular OSs for computers are:
Windows 10
Mac OS X
Ubuntu
Linux
Popular network/server OSs are:
Ubuntu Server
Windows Server
Red Hat Enterprise
Popular internet/web OSs are:
Chrome OS
Club Linux
Remix OS
Popular mobile OSs are:
iPhone OS
Android OS
Windows Phone OS
Process Management
Process
A program in execution is known as Process.
A process requires certain resources — such as
CPU time, memory, files, and I/O devices —to
complete a task.
These resources are allocated to the process
either when it is created or while it is executing.
A system consists of a collection of processes: operating-system processes execute system code, and user processes execute user code.
Traditionally, a process contains only a single thread of execution; however, on almost all modern operating systems processes may be multithreaded.
A thread is a path of execution within a
process. A process can contain multiple threads.
All processes may execute concurrently (in parallel).
An operating system is responsible for the
creation and deletion of both user and system
processes; the scheduling of processes; and the
provision of mechanisms for synchronization,
communication, and deadlock handling for
processes.
Difference between Program and
Process
A program is a passive entity: a list of instructions stored in a file on disk, often known as an executable file. A process, by contrast, is an active entity.
A program in execution in main memory is known as a process. A process contains a text section; a stack, which holds temporary data (such as function parameters, return addresses, and local variables); and a data section, which holds global variables.
A process in execution is controlled according to the value (memory address) in the program counter and the contents of the processor's registers.
The program counter points to the address of the next instruction to be executed.
A process may also include a heap, which is a memory area that is dynamically allocated during process run time.
Process Layout
Programs (executable files) are loaded into main memory as processes by double-clicking an icon representing the executable file or by entering the name of the executable file on the command line (as in prog.exe or a.out).

Layout of a process in main memory


Process State

Process state diagram

A process changes its state during its execution. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states:
New - When a process is created.
Running - Instructions are in execution.
Waiting - Process is waiting for I/O completion or
reception of any signal.
Ready - Process is waiting to be assigned to a
processor.
Terminated - Process has finished execution.
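The transitions in the state diagram above can be sketched as a small lookup table. This is only an illustrative sketch; real kernels use more states and more transitions.

```python
# Allowed process-state transitions, following the five-state diagram.
ALLOWED = {
    "new": {"ready"},                               # admitted by the OS
    "ready": {"running"},                           # dispatched by the scheduler
    "running": {"ready", "waiting", "terminated"},  # interrupt, I/O wait, exit
    "waiting": {"ready"},                           # I/O completion or signal
    "terminated": set(),
}

def move(state, new_state):
    """Perform one state transition, rejecting illegal ones."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = move(move("new", "ready"), "running")  # new -> ready -> running
```

Note that a waiting process cannot be dispatched directly: it must first become ready, exactly as the diagram shows.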
Process Control Block
A process in the operating system is represented by
a process control block (PCB) aka task control block.
It contains many pieces of information associated
with a specific process, including these:

Process control block (PCB)


Process state: The state may be new, ready, running, waiting, or halted.
Program counter: The counter indicates the address
of the next instruction to be executed for a process.
CPU registers: The registers vary in number and
type, depending on the computer architecture. They
include accumulators, index registers, stack
pointers, and general-purpose registers and other
condition code information.
Along with the program counter, the state of a process must be saved when an interrupt occurs, to allow the process to be correctly continued later.

CPU switching between processes


CPU scheduling information: Includes process priority, pointers to scheduling queues, and other scheduling parameters.
Memory-management information: Includes
information such as the value of the base and limit
registers, the page tables, or the segment tables,
depending on the memory system used by
the operating system.
Accounting information: Includes information such
as the amount of CPU and real time used, time
limits, account numbers, job or process numbers,
and so on.
I/O status information: Includes information such
as the list of I/O devices allocated to a process, a list
of open files, and so on.
In summary, the PCB acts as the repository for any
information that may vary from process to process.
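The PCB fields listed above can be sketched as a simple record. The field names below are illustrative, not taken from any particular kernel.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                      # unique process identifier
    state: str = "new"            # new, ready, running, waiting, terminated
    program_counter: int = 0      # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0             # CPU-scheduling information
    base: int = 0                 # memory-management info: base register
    limit: int = 0                # memory-management info: limit register
    cpu_time_used: float = 0.0    # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"               # updated as the process changes state
```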
Threads
A program performs a single thread of execution.
A thread is a path of execution within a process.
Many modern operating systems have been enhanced to allow a process to be multithreaded, which lets a process perform more than one task at a time.
In a multithreaded operating system, the PCB includes information for each thread. Other changes are also required to support multithreading.
Process Scheduling
With multiprogramming, multiple processes are kept in memory at all times, thereby maximizing CPU utilization.
This is achieved by time-sharing, in which the CPU switches between processes so frequently that users can interact with each program while it is running without noticing any disruption.
The switching is driven by a program known as the process scheduler, which selects an available process from the pool of ready processes to be submitted to the CPU for execution.
Scheduling Queues
Processes submitted to the system are put into a
job/process queue that consists of all processes
in the system.
Processes that are residing in main memory and
are ready and waiting to be executed by the CPU
are kept on a ready queue.
These queues are generally implemented using linked lists.
The system also includes other queues for the
processes that are waiting for some events to occur
such as interrupt or completion of an I/O request.
For example, a process may make an I/O request to a disk that is shared by many processes, and at that moment the disk may be busy serving the I/O request of another process. In that case, the process will have to wait for the disk to become free. Such a list of processes waiting for a particular I/O device is stored in a device queue; each device has its own device queue.
(Figure: the ready queue and various I/O device queues)
Queueing-diagram representation of process scheduling.

Each rectangular box in the diagram represents a queue. Two types of queues are present: the ready queue and a set of device queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.
A new process is initially put in the ready queue
where it waits there until it is selected for execution,
or is dispatched.
Once the process is executing on the CPU, one of the following events could occur:
• The process could issue an I/O request and
then be placed in an I/O queue.
• The process could create a new sub-process
and wait for its termination.
• The process could be put back in the ready
queue as a result of an interrupt.
For the first two events, the process eventually switches from the waiting state to the ready state and is then put back in the ready queue.
A process continues this cycle until it terminates, at which point it is removed from all queues and its PCB and resources are deallocated.
All such activities of a process and its selection and
removal from the various queues are done by the
process scheduler of the operating system.
Context Switch
A context switch is the switching of the CPU from one process to another, typically triggered by an interrupt.
It requires the kernel of the operating system to save the state of the current process in its PCB and restore the saved state of the next process scheduled to run.
Context-switching speed varies from machine to machine, depending on the memory speed, the number of register values that need to be copied, and the existence of special instructions. Typical speeds are a few milliseconds.
Process Creation
A process may create many new processes using a create-process system call during its execution.
The creating process is called the parent process, and the new processes created by the parent are called child processes.
Each new process may in turn create other new processes, forming a tree of processes.
Most operating systems, such as UNIX and Windows, identify processes by a unique process identifier (pid), which is typically an integer.
Summary
A program in execution is a process
Process changes its state during its execution.
The state of a process is defined by the values in
the PCB of that process.
A process may be in: new, ready, running, waiting,
or terminated states
A process is represented by its own process control
block (PCB).
A process in the waiting state is placed in a waiting queue.
Two major classes of queues in an operating
system: I/O request queues and the ready queue.
The ready queue contains processes that are ready to be executed by the CPU; the PCBs of such processes can be linked together to form the ready queue.
Selection of processes from the job pool to be brought into main memory is known as long-term (job) scheduling.
Selection of a process from the ready queue for execution on the CPU is known as short-term (CPU) scheduling.
Processes may create new child processes.
A parent process may wait for its child processes to terminate before continuing its own execution.
Parent and child process may execute concurrently.
Such concurrent execution helps in gaining speed in
computation, information sharing, and modularity.
Processes may be either independent processes or
cooperating processes.
Cooperating processes communicate with each other using inter-process communication (IPC).
Communication can be through either shared-memory or message-passing schemes.
The shared-memory method uses shared variables to support communication between processes. In a shared-memory system, the operating system provides only the shared memory; the communication between the processes is designed by the application programmers.
In the message-passing method, processes communicate by exchanging messages, which is handled by the operating system.
Both schemes can coexist within a single operating system.
Memory Management
Memory management is the functionality of an operating system that manages primary memory and moves processes back and forth between main memory and disk during execution.
Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free.
It determines how much memory is to be allocated to each process, decides which process gets memory and when, and keeps track of the status of freed or unallocated memory.
Process Address Space
The set of all logical addresses that a process
references in its code is called the process address
space.
The mapping from the logical addresses to physical
addresses at the time of memory allocation to the
program is done by operating system.
Three types of addresses are used for a program
before and after its memory is allocated −
Symbolic addresses – These are the addresses used in source code. Variable names, constants, and instruction labels form the elements of the symbolic address space.
Relative addresses – During compilation, a compiler
converts symbolic addresses into relative addresses.
Physical addresses – These are absolute memory
addresses generated by the loader when a program
is loaded into main memory.
Virtual and physical addresses are the same in compile-time and load-time address-binding schemes, but they differ in the execution-time address-binding scheme.
The set of all logical addresses generated by a
program is called logical address space.
The set of all physical addresses corresponding to
these logical addresses is called physical address
space.
The runtime mapping from virtual to physical
address is done by a hardware device known as
memory management unit (MMU).
MMU uses following mechanism to convert virtual
address to physical address.
• The value in the base register is added to every address generated by a user process. For example, if the base register value is 900 and the user process generates address 100, it is dynamically relocated to location 1000.
• The user process knows only about virtual addresses and never sees the physical addresses.
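The relocation rule above can be sketched in a few lines. The base value matches the example in the text; the limit value is an assumed example.

```python
def translate(logical_addr, base, limit):
    """Relocate a logical address with base/limit registers.
    An address outside [0, limit) is a protection fault."""
    if not 0 <= logical_addr < limit:
        raise MemoryError(f"protection fault at logical address {logical_addr}")
    return base + logical_addr

# The example from the text: base register 900, logical address 100.
physical = translate(100, base=900, limit=500)  # -> 1000
```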
Static v/s Dynamic Loading
Static loading means an object program is compiled and linked with all the other necessary object modules into a complete program, with no external program or module dependency. This complete program includes logical addresses.
Dynamic loading means an object program will be
compiled but not linked with all the object modules.
Rather, the modules that are required to be
dynamically linked will be linked only during the
execution time. In that case, only the references for
such modules are provided during execution.
In static loading, the program (and data) is loaded
into memory in order for execution to start.
Whereas, in dynamic loading, dynamic routines of
the library are stored on a disk in relocatable form
and are loaded into memory only when they are
required by the program.
Static v/s Dynamic Linking
In static linking, the linker combines all other
modules needed by a program into a single
executable program to avoid any runtime
dependency.
In dynamic linking, actual module or library with
the program is linked during runtime based on the
reference provided to the dynamic module at the
time of compilation and linking.
Eg. Dynamic Link Libraries (DLL) files in Windows
and Shared Objects in Unix.
Swapping
A mechanism in which
a process can be
swapped temporarily
out of the memory
(or move) to secondary
storage (disk) and make
that memory available
to other processes.
The system swaps back
the process from the
secondary storage to
memory again after some time.
CPU performance can be affected by swapping, but it helps in running multiple large processes simultaneously.
The total time taken by swapping includes the time to move the entire process from main memory to the secondary disk, the time to copy another process back from the secondary disk to main memory, and the time the process takes to regain main memory.
For example, assume a user process of size 2048 KB, and that the hard disk on which swapping takes place has a data transfer rate of around 1 MB per second. The actual transfer of that process to or from memory will take
2048 KB ÷ 1024 KB per second = 2 seconds = 2000 milliseconds
Additional time is also taken to regain main
memory.
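The transfer-time arithmetic above is easy to check:

```python
process_kb = 2048          # size of the user process, in KB
rate_kb_per_s = 1024       # ~1 MB per second disk transfer rate
one_way_ms = process_kb / rate_kb_per_s * 1000
print(one_way_ms)          # 2000.0 ms for one direction of the swap
```

A full swap (one process out, another in) therefore costs at least twice this, before counting the time to regain main memory.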
Memory Allocation
Main memory usually has two partitions −
Low memory − the operating system resides in this memory.
High memory − user processes are held in high memory.
The operating system uses the following memory allocation mechanisms for contiguous and non-contiguous memory management.
For contiguous memory management, the operating system uses −
Single-partition allocation - In this allocation,
relocation-register is used to protect user processes
from each other, and from changing operating-
system code and data. Relocation register contains
value of smallest physical address whereas limit
register contains range of logical addresses. Each
logical address must be less than the limit register.
(Figures: single-partition allocation; a base and a limit register of a process; hardware address protection with base and limit registers; relocation registers)
Multiple-partition allocation - In this allocation,
main memory is divided into a number of partitions
where each partition should contain only one
process. When a partition is free, a process is
selected from the input queue and is loaded into
the free partition. When the process terminates, the
partition becomes available for another process.

Multiple-partition allocation can be of two types:
• Fixed partition
• Variable-sized partition
Fixed Partition
Allows more than one process in main memory. The number of (non-overlapping) partitions in RAM is fixed, but the sizes of the partitions may or may not be the same. Partitions are made before execution, during system configuration.
Variable Partition

Main memory is not divided into fixed partitions; instead, a process is allocated a chunk of free memory that is big enough for it to fit. The space left over is considered free space, which can be used by other processes.
Fragmentation
When processes are
loaded and removed
from memory, the
free memory space
is broken into little
pieces that are known
as holes.
Eventually, some memory blocks become too small for any process to be allocated to them and remain unused. This problem is known as fragmentation.
Fragmentation is of two types −
External fragmentation
The total free memory space is enough to satisfy a request or to bring a process into main memory, but because the free memory spaces (holes) are not contiguous, they cannot be used.
Occurs in the variable-size partitioning method.
(Figure: external fragmentation)
External fragmentation problem can be solved using
the technique known as Compaction.

Compaction shuffles memory contents to place all free memory together in one large block. Dynamic relocation of processes is a must for compaction.
Internal fragmentation
The memory block assigned to a process is bigger than requested. The leftover portion of memory is unused: it cannot be used by another process because it is internal to that partition.
Occurs in the fixed-size partitioning method.
(Figure: internal fragmentation)
The internal fragmentation can be reduced by
partitioning memory into small sizes but large enough
for the process or by using technique such as Paging.
Paging
Paging is a memory management technique in
which process address space is broken into blocks of
the same size called pages (size is power of 2,
between 512 bytes and 8192 bytes). The size of the
process is measured in terms of the number of
pages.
Similarly, main memory is divided into small fixed-
sized blocks of (physical) memory called frames
similar to that of the size of pages. This is done in
order to have optimum utilization of the main
memory.
Address Translation
Page address is called logical address and is
represented by page number and the offset. Offset is
the value to be added to the base address of the
process.
Therefore,
Logical Address = Page number, page offset
Frame address is called physical address and
represented by a frame’s base address and the offset.
Therefore,
Physical Address = Frame’s base address + page offset
A data structure called the page map table (typically an array indexed by page number) keeps track of the mapping from each page of a process to a frame in physical memory.
Frames are allocated to the pages of a process and
an entry into the page table is made which is to be
used throughout execution of that process.
When any data within a page is accessed, it
translates the logical address into a physical
address.
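The translation rule above can be sketched as follows. The page size and page-table contents are assumed example values, not from the notes.

```python
PAGE_SIZE = 1024  # page size must be a power of two

def paged_translate(logical_addr, page_table):
    """Translate a logical address via a page table:
    physical address = frame base + page offset."""
    page_number = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    frame = page_table[page_number]        # page-map-table lookup
    return frame * PAGE_SIZE + offset

page_table = [5, 2]                        # page 0 -> frame 5, page 1 -> frame 2
physical = paged_translate(1030, page_table)  # page 1, offset 6 -> 2*1024 + 6 = 2054
```

Because the page size is a power of two, hardware performs the divide/modulo by simply splitting the address bits into page number and offset.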
Suppose a program of 8 KB is to be executed but the system memory can accommodate only 5 KB; paging can help here.
When a computer runs out of RAM, the operating system (OS) moves idle or unneeded pages of memory to secondary memory to free up RAM for other processes, and brings them back when they are needed by the program. This continues throughout the execution of the program.
Advantages of Paging
Reduces external fragmentation.
Simple to implement and an efficient memory
management technique.
Swapping becomes very easy, due to equal size of
the pages and frames.

Disadvantages of Paging
Suffers from internal fragmentation.
May not be good for a system having small RAM.
Segmentation
In segmentation, each job or process is divided into several segments of different sizes, one for each module, containing pieces that perform related functions.
Each segment is actually a different logical address
space of the program.
When a process is to be executed, its corresponding
segments are loaded into the memory in a non-
contiguous manner.
Segmentation memory management works very
similar to paging but here segments are of variable-
length where as in paging pages are of fixed size.
A program segment contains the program's main
function, utility functions, data structures, and so on.

User’s View of a Program Logical View of Segmentation


The operating system maintains a segment map
table for every process and a list of free memory
blocks along with segment numbers, their size and
corresponding memory locations in main memory.
For each segment, the table stores the starting
address of the segment and the length of the
segment. Any reference to a memory location
includes a value that identifies a segment number and
an offset.
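The segment-table lookup described above can be sketched as follows. The table entries are illustrative (base, length) pairs, not values from the notes.

```python
def segmented_translate(segment, offset, segment_table):
    """Translate a (segment, offset) pair via a segment table of
    (base, length) entries; an offset beyond the length is a fault."""
    base, length = segment_table[segment]
    if offset >= length:
        raise MemoryError("offset beyond segment length")
    return base + offset

# Assumed example: segment 0 at base 1400, length 1000;
#                  segment 1 at base 6300, length 400.
segment_table = {0: (1400, 1000), 1: (6300, 400)}
physical = segmented_translate(1, 53, segment_table)  # 6300 + 53 = 6353
```

Unlike paging, the offset here must be checked against the segment length, because segments have variable sizes.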
Virtual Memory
Virtual memory is a memory management technique that uses a section of the hard disk to emulate the computer's RAM, thereby letting the system address more memory than is physically installed.
The advantages of virtual memory are
• It extends the use of physical memory by using
disk. Thus, even programs larger than the
physical memory can be loaded and executed.
• It allows a program to be executed while only partially loaded into main memory. The parts of the program that are not required are loaded only on demand.
• It provides memory protection, since each virtual address is translated into its corresponding physical address.
Situations in which an entire program need not be fully loaded into main memory include the following.
When many user processes are running, loading programs partially rather than swapping entire programs between disk and main memory requires less I/O.
Some options and features of a program may be
rarely used.
Some tables implemented using an array are
assigned a fixed amount of address space however
only a small amount of the table may be actually
used.
Executing a program that is only partially resident in memory brings many benefits.
Execution of program will not be limited to the
amount of physical memory available. Thus, many
user programs can be executed at the same time
thus maximizing CPU utilization and throughput as
each user program would take less amount of
physical memory.
Error handling routines written by programmer are
used only when an error occurs in the data or
computation.
Modern microprocessors
have a memory
management unit (MMU),
in-built into the hardware.
MMU translates virtual
addresses into physical
addresses as shown beside.
Virtual memory is generally
implemented by a
technique known as
Demand paging. It is also implemented in a
segmentation system. Demand segmentation can
also be used to provide virtual memory.
Demand Paging
Demand paging is a technique that uses paging and is
used to implement virtual memory concept where
pages of processes in the secondary memory are
brought into the main memory only when demanded
by CPU and not in advance.
Pages of processes currently in the main memory are
selected based on some page replacement algorithm,
freed and moved onto the secondary memory thus
making space for new pages of new or existing
processes to be brought in from the secondary disk to
the main memory.
During program execution, if a referenced page is not found in main memory, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system, which demands the page back from disk into memory.
(Figure: demand paging)
Advantages of Demand Paging
More efficient use of memory.
Large virtual memory, therefore increases the
degree of multiprogramming.

Disadvantages of Demand Paging


The number of tables per process and the amount of processor overhead for handling page faults are greater than with simple paged management techniques.
Page Replacement Algorithm
Page replacement algorithms are used by the operating system to select which pages of a process to swap out of main memory and write to disk when their frames are required by another process.
Paging activity happens whenever a page fault occurs. If a free frame exists in memory, it can simply be allocated; otherwise a page has to be selected, freed, and swapped out of main memory.
If a page that was selected for replacement and paged out is referenced again, it has to be read back from disk into memory, which requires waiting for I/O completion.
A page replacement algorithm selects which pages should be replaced so as to minimize the total number of page misses, while balancing this against the cost in primary storage and processor time of the algorithm itself. An algorithm that takes less time is obviously a better choice.

There are many different page replacement algorithms. To compare them, we use a string of memory references known as a reference string and compute the number of page faults. Let us look at some examples.
Reference String
The string of memory references is called reference
string.
Reference strings can be generated artificially, or by recording the address of each memory reference, which produces a large amount of data. Instead of the entire address, we consider only the page number; a page may contain many memory addresses related to instructions, variables, etc. of a program.
If we have a reference to a page p, then any immediately following references to page p will never cause a page fault, because p will already be in memory after the first reference. They can fault only if p has been swapped out of memory in the meantime, for instance when free frames are severely scarce or when the degree of multiprogramming is increased. Excessive swapping of pages between memory and disk, during which the CPU does nothing but service page faults, is known as thrashing.
For example, consider the following sequence of
memory addresses generated by CPU such as below:
123, 215, 600, 1234, 76, 96
Assume the page size is 100 and addresses are sequential inside each page, so each page holds 100 memory addresses. There will be 13 pages (pages 0 through 12), since the highest memory address referenced by the CPU is 1234. The reference string is therefore
1, 2, 6, 12, 0, 0
The page-to-address mapping below clarifies the problem. Each page covers 100 consecutive addresses:
Page 0: addresses 0–99      Page 1: 100–199     Page 2: 200–299     Page 3: 300–399
Page 4: 400–499             Page 5: 500–599     Page 6: 600–699     Page 7: 700–799
Page 8: 800–899             Page 9: 900–999     Page 10: 1000–1099  Page 11: 1100–1199
Page 12: 1200–1299
Memory address 123 → page 1
Memory address 215 → page 2
Memory address 600 → page 6
Memory address 1234 → page 12
Memory address 76 → page 0
Memory address 96 → page 0
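The derivation of the reference string reduces to integer division of each address by the page size:

```python
addresses = [123, 215, 600, 1234, 76, 96]  # addresses generated by the CPU
PAGE_SIZE = 100
reference_string = [addr // PAGE_SIZE for addr in addresses]
print(reference_string)  # [1, 2, 6, 12, 0, 0]
```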
First In First Out (FIFO) algorithm
Oldest page in the main memory will be selected for
replacement.
Very easy to implement, keep a list, replace pages
from the tail and add new pages at the head.
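A minimal FIFO page-fault simulator following the description above. The reference string used below is the classic textbook example, not one from these notes.

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement: evict the oldest page."""
    frames = deque()
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()       # the oldest page leaves first
            frames.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(ref, 3))  # 15 page faults with 3 frames
```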
Optimal Page algorithm
Replace the page that will not be used for the
longest period of time. It uses the time when a page
is to be used.
Has the lowest page-fault rate of all algorithms. Also
known as OPT or MIN page replacement algorithm.
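A sketch of OPT on the same classic reference string. OPT needs to know the entire future of the string, which is why it is unrealizable in practice and serves only as a yardstick.

```python
def opt_faults(reference_string, num_frames):
    """Count page faults under optimal (OPT/MIN) replacement."""
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = reference_string[i + 1:]
        # Evict the page whose next use is farthest away (or never comes).
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else len(future))
        frames[frames.index(victim)] = page
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(ref, 3))  # 9 page faults with 3 frames
```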
Least Recently Used (LRU) algorithm
Page which has not been used for the longest period
of time in memory will be selected for replacement.
Easy to implement, keep a list, replace pages by
looking back into time.
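An LRU sketch on the same string; the list is kept ordered from least to most recently used.

```python
def lru_faults(reference_string, num_frames):
    """Count page faults under LRU replacement."""
    frames = []                        # least recently used page at index 0
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)        # refresh: re-append as most recent
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)          # evict the least recently used page
        frames.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(ref, 3))  # 12 page faults with 3 frames
```

Note how LRU (12 faults) falls between FIFO (15) and the unattainable OPT (9) on this string.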
Page Buffering algorithm
A pool of free frames are kept in reserve to start a
process quickly.
Whenever a page fault occurs, a currently allocated page is selected from memory to be replaced.
A free frame from the pool is allocated to the new page, and the necessary update is made in the page table of that process.
The content of the selected victim page is then written to disk, and its frame is added back to the free-frame pool.
Least frequently Used(LFU) algorithm
The page with the smallest count of reference is
selected for replacement.
The algorithm suffers from the situation in which a
page is used heavily initially for a process, but then
is never used again.
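An LFU sketch following the description above; ties between equally counted pages are broken arbitrarily here, and the example reference string is made up for illustration.

```python
from collections import Counter

def lfu_faults(reference_string, num_frames):
    """Count page faults under LFU: evict the page with the smallest count."""
    frames = []
    counts = Counter()                 # lifetime reference count per page
    faults = 0
    for page in reference_string:
        counts[page] += 1
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            victim = min(frames, key=lambda p: counts[p])
            frames.remove(victim)      # drop the least frequently used page
        frames.append(page)
    return faults

# Page 1 is used heavily early on, so it survives even though it may
# never be needed again -- the weakness described above.
print(lfu_faults([1, 1, 1, 2, 3, 2, 4], 3))  # 4 faults; page 3 is evicted for 4
```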
Most frequently Used(MFU) algorithm
The page that was used most frequently (i.e. with the greatest reference count) is selected for replacement, on the assumption that the selected page will not be needed again immediately.
References
• Process Management – Operating Systems I, lecture by Dr. Sura Z. Alrashid
• https://www.tutorialspoint.com/operating_system/os_memory_management.htm
• Memory Management – Operating System Concepts, Silberschatz, Galvin and Gagne, ©2005
• https://www.tutorialspoint.com/operating_system/os_virtual_memory.htm
Thank You
