Operating System Notes
Batch Operating System
The system puts all of the jobs in a queue on a first come, first served basis and then executes them one by one. The users collect their respective outputs when all the jobs have been executed.
The purpose of this operating system was mainly to transfer control from one job to another as soon as a job was completed. It contained a small set of programs called the resident monitor that always resided in one part of the main memory; the remaining part was used for servicing jobs.
Advantages of Batch OS
o The use of a resident monitor improves computer efficiency, as it eliminates CPU idle time between two jobs.
Disadvantages of Batch OS
1. Starvation
For Example:
There are five jobs J1, J2, J3, J4, and J5, present in the batch.
If the execution time of J1 is very high, then the other four
jobs will never be executed, or they will have to wait for a very
long time. Hence the other processes get starved.
2. Not Interactive
Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various system resources are used efficiently, but they do not provide any user interaction with the computer system.
What is Spooling
1. Spooling helps make sure that the CPU is not idle at any time; we can say that spooling is a combination of buffering and queuing.
2. After the CPU generates some output, the output is first saved in the main memory. It is then transferred from the main memory to the secondary memory, and from there it is sent to the respective output devices.
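The idea can be sketched with a tiny FIFO spool in C (all names and sizes here are hypothetical, not part of any real spooler API): a process deposits a job and continues immediately, while the device drains jobs in FCFS order.

    #include <stdio.h>
    #include <string.h>

    #define SPOOL_CAP 8                       /* hypothetical spool size */

    /* A simple FIFO spool: processes enqueue jobs and continue;
       the device dequeues and services them in FCFS order. */
    static char spool[SPOOL_CAP][32];
    static int head = 0, tail = 0, count = 0;

    int spool_put(const char *job) {          /* called by a process */
        if (count == SPOOL_CAP) return -1;    /* spool full */
        strncpy(spool[tail], job, 31);
        spool[tail][31] = '\0';
        tail = (tail + 1) % SPOOL_CAP;
        count++;
        return 0;                             /* process resumes at once */
    }

    const char *spool_get(void) {             /* called by the device */
        if (count == 0) return NULL;
        const char *job = spool[head];
        head = (head + 1) % SPOOL_CAP;
        count--;
        return job;
    }

    int main(void) {
        spool_put("job-from-P1");
        spool_put("job-from-P2");
        const char *j;
        while ((j = spool_get()) != NULL)     /* device drains FCFS */
            printf("printing %s\n", j);
        return 0;
    }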
Example of Spooling
Advantages of Spooling
Disadvantages of Spooling
Difference between Process and Program
o Definition: A process is an instance of a program in execution; a program is a collection of instructions designed to accomplish a certain task.
o Nature: A process is an active entity; a program is a passive entity.
o Resources: A process has a high resource requirement; it needs the CPU, memory address space, disk and input/output during its lifetime. A program doesn't have any resource requirements; it only needs memory space to store its instructions.
o Creation: Creating a new process needs duplication of the parent process; creating a program needs no such duplication.
o Required resources: A process holds resources such as the CPU, memory address space, disk and input/output; a program is stored on disk in a file and doesn't need any additional resources.
o Cache data: A process may use the cache to store and retrieve data, as the OS uses a paging scheme and cache replacement policies such as FCFS, LRU, RR and LIFO; a program merely holds the instructions that will use the cache for data once it runs.
Conclusion
Process and program are related terms, but they are not the same. A program is simply a file that contains an ordered, sequential set of instructions and is kept on disk; it is like an earlier stage of the process. A process is the active entity produced by executing that program.
Process States
State Diagram
The process, from its creation to
completion, passes through
various states. The minimum
number of states is five.
The names of the states are not standardized; in general, a process may be in one of the following states during execution.
1. New
A program which is going to be
picked up by the OS into the main
memory is called a new process.
2. Ready
Whenever a process is created, it directly enters the ready state, where it waits for the CPU to be assigned. The OS picks new processes from the secondary memory and puts all of them in the main memory.
The processes which are ready for
the execution and reside in the
main memory are called ready
state processes. There can be
many processes present in the
ready state.
3. Running
One of the processes from the
ready state will be chosen by the
OS depending upon the
scheduling algorithm. Hence, if
we have only one CPU in our
system, the number of running
processes for a particular time will
always be one. If we have n
processors in the system then we
can have n processes running
simultaneously.
4. Block or wait
From the Running state, a
process can make the transition
to the block or wait state
depending upon the scheduling
algorithm or the intrinsic behavior
of the process.
When a process waits for a certain resource to be assigned or for input from the user, the OS moves the process to the block or wait state and assigns the CPU to other processes.
5. Completion or termination
When a process finishes its execution, it moves into the termination state. The whole context of the process (its Process Control Block) is deleted, and the process is terminated by the operating system.
6. Suspend ready
A process in the ready state which is moved from the main memory to the secondary memory due to a lack of resources (mainly primary memory) is said to be in the suspend ready state.
If the main memory is full and a higher priority process arrives for execution, the OS has to make room for it in the main memory by swapping a lower priority process out to the secondary memory. Suspend ready processes remain in the secondary memory until the main memory becomes available.
7. Suspend wait
Instead of removing a process from the ready queue, it is better to remove a blocked process that is waiting for some resource in the main memory. Since it is already waiting for a resource to become available, it is better for it to wait in the secondary memory and make room for a higher priority process. These processes complete their execution once the main memory becomes available and their wait is over.
PROCESS CREATION BY COPY
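On POSIX systems this "creation by copy" is exactly what fork() does: the child starts as a duplicate of the parent. A minimal sketch:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();            /* duplicate the calling process */
        if (pid < 0) {
            perror("fork");            /* creation failed */
            return 1;
        } else if (pid == 0) {
            /* child: a near-identical copy of the parent's address space */
            printf("child  pid=%d\n", getpid());
        } else {
            /* parent: fork() returns the child's pid here */
            waitpid(pid, NULL, 0);     /* wait for the child to terminate */
            printf("parent pid=%d created child %d\n", getpid(), pid);
        }
        return 0;
    }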
THREAD
1. New
2. Active
3. Blocked / Waiting
4. Timed Waiting
5. Terminated
When the main thread invokes the join() method, it enters the waiting state and waits for the child threads to complete their tasks. When the child threads complete their job, a notification is sent to the main thread, which moves it from the waiting state back to the active state.
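The paragraph above describes Java's join(); the same wait-for-completion pattern with POSIX threads uses pthread_join(). A minimal C sketch (not the text's original Java example):

    #include <pthread.h>
    #include <stdio.h>

    void *child_task(void *arg) {
        printf("child thread running\n");
        return NULL;
    }

    int main(void) {
        pthread_t child;
        pthread_create(&child, NULL, child_task, NULL);
        /* The main thread now blocks (waiting state) until the child
           finishes; pthread_join returns when the child terminates,
           moving main back to the active state. */
        pthread_join(child, NULL);
        printf("main thread resumes\n");
        return 0;
    }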
The many to one model maps many user level threads to one kernel thread. This type of relationship facilitates an effective context-switching environment and is easily implemented, even on a simple kernel with no thread support. In this model, all user-level threads are associated with a single kernel-level thread.
One to one multithreading model
The one to one model maps each user level thread to a separate kernel thread, so blocking one thread does not block the others.
Process vs Thread
o A process is an instance of a program that is being executed or processed; a thread is a segment of a process, a lightweight process that is managed by the scheduler independently.
CPU Scheduling
In uniprogramming systems like MS-DOS, when a process waits for an I/O operation to complete, the CPU remains idle. This is an overhead, since it wastes time and causes the problem of starvation. In multiprogramming systems, however, the CPU doesn't remain idle while a process waits; it starts executing other processes, and the operating system has to decide which process the CPU will be given.
In multiprogramming systems, the operating system schedules the processes on the CPU so as to maximize its utilization, and this procedure is called CPU scheduling. The operating system uses various scheduling algorithms to schedule the processes.
It is the task of the short term scheduler to schedule the CPU for the processes present in the job pool. Whenever the running process requests some I/O operation, the short term scheduler saves the current context of the process (in its Process Control Block, or PCB) and changes its state from running to waiting. While the process is in the waiting state, the short term scheduler picks another process from the ready queue and assigns the CPU to it. This procedure is called context switching.
Why do we need Scheduling?
In multiprogramming, if the long term scheduler picks mostly I/O bound processes, then most of the time the CPU remains idle. The task of the operating system is to optimize the utilization of resources.
If most of the running processes change their state from running to waiting, there may always be a possibility of deadlock in the system. Hence, to reduce this overhead, the OS needs to schedule the jobs so as to get optimal utilization of the CPU and to avoid the possibility of deadlock.
What is Preemptive Scheduling?
Preemptive scheduling is a method that may be used when a process switches from the running state to the ready state or from the waiting state to the ready state. The resources are assigned to a process for a limited time and then taken away. If the process still has remaining CPU burst time, it is placed back in the ready queue, where it stays until it gets its next chance to execute.
When a high-priority process arrives in the ready queue, it doesn't have to wait for the running process to finish its burst time. Instead, the running process is interrupted in the middle of its execution and placed in the ready queue while the high-priority process uses the CPU. As a result, each process gets some CPU time. This adds the overhead of switching processes between the running and ready states, but it makes scheduling more flexible. SJF and priority scheduling may be implemented either preemptively or non-preemptively.
For example:
Consider preemptive scheduling with four processes P0, P1, P2 and P3, whose arrival times and burst times are given below.
Process | Arrival Time | Burst Time
P0      | 3            | 2
P1      | 2            | 4
P2      | 0            | 6
P3      | 1            | 4
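As a sketch of how one preemptive policy plays out on this table, the following C program simulates Shortest Remaining Time First (assuming the two columns are arrival time and burst time) and prints each process's completion, turnaround and waiting time:

    #include <stdio.h>

    #define N 4

    int main(void) {
        int arrival[N] = {3, 2, 0, 1};     /* P0..P3, from the table above */
        int burst[N]   = {2, 4, 6, 4};
        int remaining[N], completion[N];
        int done = 0, t = 0;

        for (int i = 0; i < N; i++) remaining[i] = burst[i];

        while (done < N) {
            /* pick the arrived process with the shortest remaining time */
            int pick = -1;
            for (int i = 0; i < N; i++)
                if (arrival[i] <= t && remaining[i] > 0 &&
                    (pick == -1 || remaining[i] < remaining[pick]))
                    pick = i;
            if (pick == -1) { t++; continue; }   /* CPU idle */
            remaining[pick]--;                   /* run for one time unit */
            t++;
            if (remaining[pick] == 0) {          /* process finished */
                completion[pick] = t;
                done++;
            }
        }
        for (int i = 0; i < N; i++)
            printf("P%d: completion=%d turnaround=%d waiting=%d\n", i,
                   completion[i], completion[i] - arrival[i],
                   completion[i] - arrival[i] - burst[i]);
        return 0;
    }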
Preemptive vs Non-preemptive Scheduling
o Resource holding: In preemptive scheduling, resources are assigned to a process for a limited time and then taken away. In non-preemptive scheduling, once resources are assigned to a process, they are held until it completes its burst period or switches to the waiting state.
o Interruption: A preemptively scheduled process may be paused in the middle of its execution. In non-preemptive scheduling, once a process starts executing, it must complete before any other process runs, and it may not be interrupted in the middle.
o Starvation: In preemptive scheduling, a low-priority process can starve if high-priority processes keep arriving in the ready queue. In non-preemptive scheduling, when a process with a long burst time is using the CPU, a process with a shorter burst time can starve.
o Flexibility: Preemptive scheduling is flexible; non-preemptive scheduling is rigid.
o Cost: Preemptive scheduling has costs associated with it; non-preemptive scheduling does not.
o Overhead: Preemptive scheduling has overheads associated with process scheduling; non-preemptive scheduling has no such overhead.
o Kernel design: Preemptive scheduling affects the design of the operating system kernel; non-preemptive scheduling does not.
o Examples: Round Robin and Shortest Remaining Time First are preemptive; FCFS and SJF are examples of non-preemptive scheduling.
Process Schedulers
The operating system uses various schedulers for process scheduling, described below.
1. Long term scheduler
Long term scheduler is also
known as job scheduler. It
chooses the processes from the
pool (secondary memory) and
keeps them in the ready queue
maintained in the primary
memory.
Long Term scheduler mainly
controls the degree of
Multiprogramming. The purpose
of long term scheduler is to
choose a perfect mix of IO bound
and CPU bound processes among
the jobs present in the pool.
If the job scheduler chooses more
IO bound processes then all of the
jobs may reside in the blocked
state all the time and the CPU will
remain idle most of the time. This
will reduce the degree of
Multiprogramming. Therefore, the
Job of long term scheduler is very
critical and may affect the system
for a very long time.
2. Short term scheduler
The short term scheduler is also known as the CPU scheduler. It selects one of the jobs from the ready queue and dispatches it to the CPU for execution.
A scheduling algorithm is used to select which job is going to be dispatched. The job of the short term scheduler can be very critical in the sense that if it selects a job whose CPU burst time is very high, then all the jobs after that will have to wait in the ready queue for a very long time.
This problem is called starvation, and it may arise if the short term scheduler makes a mistake while selecting the job.
3. Medium term scheduler
The medium term scheduler takes care of the swapped-out processes. If a running process needs some I/O time to complete, its state has to be changed from running to waiting.
The medium term scheduler is used for this purpose. It removes the process from the running state to make room for other processes. Such processes are called swapped-out processes, and this procedure is called swapping. The medium term scheduler is responsible for suspending and resuming processes.
It reduces the degree of multiprogramming. Swapping is necessary to maintain a perfect mix of processes in the ready queue.
What is Dispatcher in OS
A dispatcher is a special program that comes into play after the scheduler. When the short term scheduler selects a process from the ready queue, the dispatcher performs the task of allocating the selected process to the CPU. When a running process goes to the waiting state (for an I/O operation, etc.), the CPU is allocated to some other process. This switching of the CPU from one process to another is called context switching.
FCFS Scheduling
The first come first serve (FCFS) scheduling algorithm simply schedules jobs according to their arrival time: the job that arrives in the ready queue first gets the CPU first. The lower the arrival time of a job, the sooner it gets the CPU.
FCFS scheduling may cause the problem of starvation if the burst time of the first process is the longest among all the jobs.
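A small sketch of the FCFS computation, using hypothetical jobs already ordered by arrival time:

    #include <stdio.h>

    #define N 3

    int main(void) {
        /* hypothetical jobs, already ordered by arrival time */
        int arrival[N] = {0, 1, 2};
        int burst[N]   = {5, 3, 8};
        int t = 0;

        for (int i = 0; i < N; i++) {
            if (t < arrival[i]) t = arrival[i];   /* CPU idle until arrival */
            int start = t;
            t += burst[i];                        /* run to completion */
            printf("J%d: start=%d completion=%d waiting=%d\n",
                   i + 1, start, t, start - arrival[i]);
        }
        return 0;
    }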
Advantages of FCFS
o Simple
o Easy
o First come, first served
Disadvantages of FCFS
1. The scheduling method is non-preemptive: a process runs to completion.
2. Due to the non-preemptive nature of the algorithm, the problem of starvation may occur.
3. Although it is easy to implement, it is poor in performance, since the average waiting time is higher compared to other scheduling algorithms.
Multilevel Queue Scheduling
Multilevel queue scheduling classifies processes according to their type. For example, a multilevel queue scheduling algorithm commonly divides processes into interactive processes (foreground) and batch processes (background). These two types of processes have different response-time requirements and therefore different scheduling needs; interactive processes also have higher priority than batch processes.
In this scheduling, the ready queue is divided into several queues called subqueues, each a distinct operational queue. This method of separating the ready queue into several distinct queues is called a multilevel queue.
The processes are permanently
assigned to subqueues, generally
based on some property of the
process such as memory size,
priority or process type.
Each subqueue has its own scheduling algorithm. For example, interactive processes in the foreground may use round robin scheduling, while batch jobs in the background may use the FCFS method.
In addition, there is a scheduling algorithm that works globally between the different subqueues; usually this is fixed-priority preemptive scheduling. For example, the foreground queue may have absolute priority over the background queue.
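A sketch of the two-queue case just described, with hypothetical burst times: the foreground queue is served round robin and has absolute priority, and the background FCFS job runs only when the foreground queue is empty.

    #include <stdio.h>

    #define QUANTUM 2

    int main(void) {
        int fg[] = {3, 5};            /* interactive jobs: round robin */
        int bg[] = {4};               /* batch jobs: FCFS */
        int nfg = 2, nbg = 1, i = 0;

        for (;;) {
            int fg_left = 0;
            for (int k = 0; k < nfg; k++) fg_left += fg[k];
            if (fg_left > 0) {
                /* foreground queue has absolute priority: round robin */
                if (fg[i] > 0) {
                    int slice = fg[i] < QUANTUM ? fg[i] : QUANTUM;
                    fg[i] -= slice;
                    printf("run foreground job %d for %d units\n", i, slice);
                }
                i = (i + 1) % nfg;
            } else if (nbg > 0 && bg[0] > 0) {
                /* background runs only when the foreground queue is empty */
                printf("run background job 0 for %d units (FCFS)\n", bg[0]);
                bg[0] = 0;
            } else {
                break;                /* everything finished */
            }
        }
        return 0;
    }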
Introduction to Deadlock
Every process needs some resources to complete its execution, and resources are granted in a sequential order:
1. The process requests a resource.
2. The OS grants the resource if it is available; otherwise, it lets the process wait.
3. The process uses the resource and releases it on completion.
A deadlock is a situation where each process waits for a resource that is assigned to some other process. None of the processes gets executed, since the resource each one needs is held by another process that is itself waiting for yet another resource to be released.
Let us assume that there are three processes P1, P2 and P3, and three different resources R1, R2 and R3. R1 is assigned to P1, R2 to P2 and R3 to P3.
After some time, P1 demands R2, which is being used by P2, so P1 halts its execution since it can't complete without R2. P2 demands R3, which is being used by P3, so P2 also stops since it can't continue without R3. P3 in turn demands R1, which is being used by P1, so P3 also stops its execution.
In this scenario, a cycle is formed among the three processes. None of the processes is progressing; they are all waiting. The computer becomes unresponsive since all the processes are blocked.
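This circular wait is easy to reproduce with two POSIX threads that take two locks in opposite order; the following sketch (with a sleep() only to force the interleaving) will genuinely hang on the pthread_join calls:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

    void *p1(void *arg) {
        pthread_mutex_lock(&r1);       /* P1 holds R1 */
        sleep(1);                      /* give P2 time to take R2 */
        printf("P1 waiting for R2\n");
        pthread_mutex_lock(&r2);       /* blocks: R2 is held by P2 */
        pthread_mutex_unlock(&r2);
        pthread_mutex_unlock(&r1);
        return NULL;
    }

    void *p2(void *arg) {
        pthread_mutex_lock(&r2);       /* P2 holds R2 */
        sleep(1);
        printf("P2 waiting for R1\n");
        pthread_mutex_lock(&r1);       /* blocks: R1 held by P1 -> cycle */
        pthread_mutex_unlock(&r1);
        pthread_mutex_unlock(&r2);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, p1, NULL);
        pthread_create(&t2, NULL, p2, NULL);
        pthread_join(t1, NULL);        /* never returns: deadlock */
        pthread_join(t2, NULL);
        return 0;
    }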
Difference between Starvation and Deadlock
1. Deadlock is a situation where every involved process is blocked and no process proceeds. Starvation is a situation where low priority processes are blocked while high priority processes proceed.
2. Deadlock is an infinite wait. Starvation is a long wait, but not an infinite one.
3. Every deadlock is also a starvation, but every starvation need not be a deadlock.
4. In deadlock, the requested resource is held by another blocked process. In starvation, the requested resource is continuously used by higher priority processes.
5. Deadlock happens when mutual exclusion, hold and wait, no preemption and circular wait occur simultaneously. Starvation occurs due to uncontrolled priority and resource management.
Necessary conditions for Deadlock
1. Mutual Exclusion
2. Hold and Wait
3. No preemption
4. Circular Wait
Deadlock Prevention
If we picture deadlock as a table standing on four legs, then the four legs correspond to the four conditions which, when they occur simultaneously, cause deadlock. If we break one of the legs, the table falls. The same is true of deadlock: if we can violate one of the four necessary conditions and keep them from occurring together, we can prevent the deadlock.
Let's see how we can prevent each of the conditions.
1. Mutual Exclusion
Mutual exclusion, from the resource point of view, means that a resource can never be used by more than one process simultaneously. That is fair enough, but it is the main reason behind deadlock: if a resource could be used by more than one process at the same time, no process would ever have to wait for it.
So if we can stop resources from behaving in a mutually exclusive manner, deadlock can be prevented.
Spooling
For a device like a printer, spooling can work. There is a memory associated with the printer which stores the jobs from each of the processes. The printer then collects the jobs and prints each one of them in FCFS order. Using this mechanism, a process doesn't have to wait for the printer: it can continue whatever it was doing and collect the output later, once it is produced.
Although spooling can be an effective approach to violating mutual exclusion, it suffers from two kinds of problems:
1. It cannot be applied to every resource.
2. After some point in time, a race condition may arise between the processes for space in the spool.
We also cannot force a resource to be used by more than one process at the same time, since that would not be fair and serious performance problems could arise. Therefore, we cannot practically violate mutual exclusion.
2. Hold and Wait
The hold and wait condition arises when a process holds one resource while waiting for some other resource to complete its task. Deadlock occurs because several processes can each hold one resource while waiting for others, in a cyclic order.
To prevent this, we need a mechanism by which a process either doesn't hold any resource or doesn't wait: a process must be assigned all the necessary resources before its execution starts, and it must not wait for any resource once execution has begun.
Current allocation matrix (processes A-D, four resource classes):
A | 3 0 2 2
B | 0 0 1 1
C | 1 1 1 0
D | 2 1 4 0
Request matrix:
A | 1 1 0 0
B | 0 1 1 2
C | 1 2 1 0
D | 2 1 1 2
E = (7 6 8 4)   (existing resources)
P = (6 2 8 3)   (possessed resources: the column sums of the allocation matrix)
A = (1 4 0 1)   (available resources: E - P)
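These matrices and vectors belong to the classic deadlock detection algorithm: repeatedly look for an unfinished process whose entire request can be satisfied from A, pretend it runs to completion and reclaim its allocation, and flag whatever remains as deadlocked. A sketch in C using the numbers above:

    #include <stdio.h>

    #define NP 4   /* processes A..D */
    #define NR 4   /* resource classes */

    int main(void) {
        int alloc[NP][NR] = {{3,0,2,2},{0,0,1,1},{1,1,1,0},{2,1,4,0}};
        int req[NP][NR]   = {{1,1,0,0},{0,1,1,2},{1,2,1,0},{2,1,1,2}};
        int avail[NR]     = {1, 4, 0, 1};     /* A = E - P */
        int finished[NP]  = {0}, progress = 1;

        while (progress) {
            progress = 0;
            for (int p = 0; p < NP; p++) {
                if (finished[p]) continue;
                int ok = 1;                    /* can p's request be met? */
                for (int r = 0; r < NR; r++)
                    if (req[p][r] > avail[r]) ok = 0;
                if (ok) {                      /* pretend p runs and exits */
                    for (int r = 0; r < NR; r++)
                        avail[r] += alloc[p][r];
                    finished[p] = 1;
                    progress = 1;
                }
            }
        }
        int deadlocked = 0;
        for (int p = 0; p < NP; p++)
            if (!finished[p]) {
                printf("process %c deadlocked\n", 'A' + p);
                deadlocked = 1;
            }
        if (!deadlocked) printf("no deadlock: every process can finish\n");
        return 0;
    }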
Example
Let's consider 3 processes P1, P2 and P3, and two types of resources R1 and R2, with one instance of each resource.
Suppose R1 is being used by P1, P2 is holding R2 and waiting for R1, and P3 is waiting for both R1 and R2.
This graph is deadlock free, since no cycle is formed in it.
Kill a process
Killing a process can solve our problem, but the bigger concern is deciding which process to kill. Generally, the operating system kills the process which has done the least amount of work so far.
Kill all processes
This is not an advisable approach, but it can be used if the problem becomes very serious. Killing all processes leads to inefficiency in the system, because all the processes will then have to execute again from the start.
Banker's Algorithm in Operating System
What is Memory?
Computer memory can be defined as a collection of data represented in binary format. On the basis of its various functions, memory can be classified into various categories; we will discuss each of them later in detail.
A computer device that is capable of storing information or data, temporarily or permanently, is called a storage device.
How Data is being stored in a computer system?
In order to understand memory management, we first have to make clear how data is stored in a computer system.
The machine understands only binary language, that is, 0s and 1s, so the computer converts every piece of data into binary first and then stores it in memory.
That means if we have a program line written as int a = 10, the computer converts it into binary and then stores it in the memory blocks.
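A tiny illustration of that conversion, printing the 32 bits a typical machine stores for int a = 10:

    #include <stdio.h>

    int main(void) {
        int a = 10;
        /* print the 32 bits of 'a' from most to least significant */
        for (int i = 31; i >= 0; i--)
            putchar((a >> i) & 1 ? '1' : '0');
        putchar('\n');   /* prints 00000000000000000000000000001010 */
        return 0;
    }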
Memory Management
In this article, we will understand memory
management in detail.
o Different modules can be given different degrees of protection.
o There are mechanisms by which modules can be shared among processes.
o Simple to implement.
o Easy to manage and design.
o In a single contiguous memory management scheme, once a process is loaded, it is given the full processor's time, and no other process will interrupt it.
o Fixed Partitioning
o Dynamic Partitioning
Fixed Partitioning
The main memory is divided into several fixed-
sized partitions in a fixed partition memory
management scheme or static partitioning. These
partitions can be of the same size or different
sizes. Each partition can hold a single process.
The number of partitions determines the degree
of multiprogramming, i.e., the maximum number
of processes in memory. These partitions are
made at the time of system generation and
remain fixed after that.
Advantages of Fixed Partitioning
o Simple to implement.
o Easy to manage and design.
Disadvantages of Fixed Partitioning
1. Internal Fragmentation
If a process is smaller than the partition it is loaded into, the leftover space inside that partition is wasted; this wasted space is called internal fragmentation.
2. External Fragmentation
The total unused space across the various partitions cannot be used to load a process: even though enough space is available in total, it is not available in contiguous form.
For example, the remaining 1 MB space of each partition cannot be used as a unit to store a 4 MB process. Despite the fact that sufficient space is available to load the process, the process will not be loaded.
3. Limitation on the size of the process
If the process size is larger than the size of the largest partition, then that process cannot be loaded into the memory. Therefore, a limitation is imposed on the process size: it cannot be larger than the largest partition.
4. Degree of multiprogramming is less
By degree of multiprogramming, we simply mean the maximum number of processes that can be loaded into the memory at the same time. In fixed partitioning, the degree of multiprogramming is fixed and very low, because the partition sizes cannot be varied according to the sizes of the processes.
Dynamic Partitioning
Dynamic partitioning was designed to overcome the problems of the fixed partitioning scheme. In a dynamic partitioning scheme, each process occupies only as much memory as it requires when loaded for processing. Requesting processes are allocated memory until the entire physical memory is exhausted or the remaining space is insufficient to hold the requesting process. In this scheme, the partitions are of variable size, and the number of partitions is not defined at system generation time.
Advantages of Dynamic Partitioning
1. No Internal Fragmentation
Given that the partitions in dynamic partitioning are created according to the needs of the process, there will not be any internal fragmentation, because no unused space remains inside a partition.
2. No Limitation on the size of the process
In fixed partitioning, a process larger than the largest partition could not be executed due to the lack of sufficient contiguous memory. In dynamic partitioning, the process size need not be restricted, since the partition size is decided according to the process size.
3. Degree of multiprogramming is dynamic
Due to the absence of internal fragmentation, there is no unused space inside the partitions, and hence more processes can be loaded into the memory at the same time.
Disadvantages of dynamic partitioning
External Fragmentation
Absence of internal fragmentation doesn't mean
that there will not be external fragmentation.
Let's consider three processes P1 (1 MB), P2 (3 MB) and P3 (1 MB), loaded into their respective partitions of the main memory.
After some time, P1 and P3 complete and their assigned space is freed. Now there are two unused 1 MB partitions in the main memory, but they cannot be used to load a 2 MB process, since they are not contiguously located.
The rule says that the process must be
contiguously present in the main memory to get
executed. We need to change this rule to avoid
external fragmentation.
Complex Memory Allocation
In fixed partitioning, the list of partitions is made once and never changes, but in dynamic partitioning, allocation and deallocation are very complex, since the partition size varies every time a partition is assigned to a new process, and the OS has to keep track of all the partitions.
Because allocation and deallocation are done very frequently in dynamic memory allocation and the partition size changes each time, it is very difficult for the OS to manage everything.
Example: Page Table Size
Let's consider,
o Logical Address = 24 bits
o Logical Address space = 2 ^ 24 bytes
o Let's say, Page size = 4 KB = 2 ^ 12 bytes
o Page offset = 12 bits
o Number of bits for the page number = Logical Address - Page Offset = 24 - 12 = 12 bits
o Number of pages = 2 ^ 12 = 4 K pages
o Let's say, Page table entry = 1 byte
o Therefore, size of the page table = 4 K entries x 1 byte = 4 KB
Here we are lucky enough to get a page table size equal to the frame size, so the page table can simply be stored in one of the frames of the main memory. The CPU maintains a register which contains the base address of that frame; every page number from the logical address is added to that base address so that we can access the page table entry for the word being asked for.
However, in some cases the page table size and the frame size might not be the same. In those cases, the page table is treated as a collection of frames and is stored across different frames.
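A sketch of the address split for the 24-bit example above: the high 12 bits select a page table entry and the low 12 bits are the offset (the page table contents here are made up for illustration):

    #include <stdio.h>

    #define OFFSET_BITS 12                     /* page size 4 KB = 2^12 */
    #define PAGES (1 << 12)                    /* 2^12 pages */

    int main(void) {
        static unsigned frame_of[PAGES];       /* hypothetical page table */
        frame_of[5] = 42;                      /* pretend page 5 -> frame 42 */

        unsigned logical = (5u << OFFSET_BITS) | 0x0ABu; /* page 5, offset 0xAB */
        unsigned page    = logical >> OFFSET_BITS;
        unsigned offset  = logical & ((1u << OFFSET_BITS) - 1);
        unsigned physical = (frame_of[page] << OFFSET_BITS) | offset;

        printf("page=%u offset=0x%X frame=%u physical=0x%X\n",
               page, offset, frame_of[page], physical);
        return 0;
    }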
What is Segmentation?
Segmentation is a technique that eliminates the requirement of contiguous allocation of main memory. In it, the main memory is divided into variable-size blocks of physical memory called segments, based on the way the programmer structures their programs. With segmented memory allocation, each job is divided into several segments of different sizes, one for each module; functions, subroutines, stacks, arrays, etc. are examples of such modules.
Segmentation
In Operating Systems, Segmentation is a
memory management technique in which the
memory is divided into the variable size parts.
Each part is known as a segment which can be
allocated to a process.
The details about each segment are stored in a table called the segment table, which is itself stored in one (or more) of the segments. The segment table mainly contains two pieces of information about each segment:
o Base: the base address of the segment.
o Limit: the length of the segment.
Why Segmentation is required?
Till now, we have been using paging as our main memory management technique. Paging is closer to the operating system than to the user. It divides all processes into pages, regardless of the fact that a process may have related parts, such as the pieces of one function, that should be loaded onto the same page.
The operating system doesn't care about the user's view of the process: it may split the same function across different pages, and those pages may or may not be loaded into memory at the same time. This decreases the efficiency of the system.
It is better to have segmentation, which divides the process into segments. Each segment contains the same type of content; for example, the main function can be included in one segment and the library functions in another.
Translation of Logical address into physical address
by segment table
CPU generates a logical address which contains
two parts:
o Segment Number
o Offset
For example:
Suppose a 16 bit address is used, with 4 bits for the segment number and 12 bits for the segment offset. Then the maximum segment size is 4096 bytes, and the maximum number of segments that can be referred to is 16.
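A sketch of that 16-bit example in C: the top 4 bits index a segment table, and the 12-bit offset is checked against the segment's limit before being added to its base (the table entries are hypothetical):

    #include <stdio.h>

    struct seg { unsigned base, limit; };

    int main(void) {
        struct seg table[16] = { {0, 0} };
        table[2].base  = 0x4000;                /* hypothetical segment 2 */
        table[2].limit = 0x0800;                /* 2 KB long */

        unsigned logical = (2u << 12) | 0x01F0; /* segment 2, offset 0x1F0 */
        unsigned s       = logical >> 12;       /* 4-bit segment number */
        unsigned offset  = logical & 0x0FFF;    /* 12-bit offset */

        if (offset >= table[s].limit) {
            printf("trap: offset beyond segment limit\n");
            return 1;
        }
        printf("physical = 0x%X\n", table[s].base + offset);
        return 0;
    }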
When a program is loaded into memory, the
segmentation system tries to locate space that is
large enough to hold the first segment of the
process, space information is obtained from the
free list maintained by memory manager. Then it
tries to locate space for other segments. Once
adequate space is located for all the segments, it
loads them into their respective areas.
The operating system also generates a segment
map table for each program.
Advantages of Segmentation
o It is easier to relocate a segment than the entire address space.
o The segment table is of lesser size compared to the page table in paging.
Disadvantages
o It can have external fragmentation.
Paging VS Segmentation
1. Both paging and segmentation use non-contiguous memory allocation.
2. Paging divides the program into fixed size pages; segmentation divides it into variable size segments.
3. In paging, the OS is responsible for the division; in segmentation, the compiler is responsible.
4. Paging is faster than segmentation.
5. Paging is closer to the operating system; segmentation is closer to the user.
6. Paging suffers from internal fragmentation; segmentation suffers from external fragmentation.
7. Paging has no external fragmentation; segmentation has no internal fragmentation.
8. In paging, the logical address is divided into a page number and a page offset; in segmentation, it is divided into a segment number and a segment offset.
Let us consider,
Physical Address Space = 64 KB = 2 ^ 16 bytes
Word size = 8 bytes = 2 ^ 3 bytes
Hence,
Physical Address Space (in words) = (2 ^ 16) / (2 ^ 3) = 2 ^ 13 words
Therefore,
Physical Address = 13 bits
In general,
If Physical Address Space = N words, then Physical Address = Log2 N bits
If Logical Address Space = L words, then Logical Address = Log2 L bits
What is a Word?
The word is the smallest unit of memory that the machine addresses as a whole; it is a collection of bytes. Every operating system defines its own word size, based on the n-bit address that is input to the decoder and the 2 ^ n memory locations that the decoder produces.
Virtual Memory
Virtual memory is a storage scheme that provides the user with the illusion of having a very big main memory. This is done by treating a part of secondary memory as if it were main memory.
In this scheme, the user can load processes bigger than the available main memory, under the illusion that enough memory is available to load them. Instead of loading one big process into main memory, the operating system loads different parts of more than one process.
By doing this, the degree of multiprogramming is increased, and therefore CPU utilization also increases.
How Virtual Memory Works?
Virtual memory has become quite common in modern systems. In this scheme, whenever some pages need to be loaded into main memory for execution and memory is not available for that many pages, then instead of refusing to load the pages, the OS searches for the areas of RAM that have been least recently used or are not currently referenced, and copies them into secondary memory to make room for the new pages in main memory.
Since this entire procedure happens automatically, it makes the computer feel like it has unlimited RAM.
Demand Paging
Demand paging is a popular method of virtual memory management. In demand paging, the pages of a process which are least used get stored in secondary memory. A page is copied to main memory only when it is demanded, that is, when a page fault occurs for it. Various page replacement algorithms are used to determine which pages will be replaced; we will discuss each of them later in detail.
Snapshot of a virtual memory management system
Let us assume two processes, P1 and P2, each containing 4 pages of 1 KB. The main memory contains 8 frames of 1 KB each. The OS resides in the first two partitions, the 1st page of P1 is stored in the third partition, and the other frames are filled with the other pages of the two processes.
The page tables of both processes are 1 KB each, so each fits in one frame. The page tables of both processes contain various information.
The CPU contains a register which holds the base address of the page table: 5 in the case of P1 and 7 in the case of P2. This page table base address is added to the page number from the logical address when accessing the actual corresponding page table entry.
Advantages of Virtual Memory
o The degree of Multiprogramming will be
increased.
o Users can run large applications with less real RAM.
o There is no need to buy more RAM.
Disadvantages of Virtual Memory
o The system becomes slower, since swapping takes time.
o It takes more time to switch between applications.
o The user will have less hard disk space available for their own use.
Page Fault Handling in Operating System
In this article, you will learn about page fault
handling in the operating system and its steps.
What is Page Fault in Operating System?
A page fault occurs when a program accesses a page that belongs to its address space but is not currently present in main memory. A memory reference therefore has two possible outcomes:
1. Page Hit: the referenced page is already in the main memory, and the access proceeds normally.
2. Page Miss: the referenced page is not in the main memory; a page fault is raised, and the OS must load the page from the secondary memory.
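A sketch of the two outcomes in code, using a trivial FIFO replacement policy and hypothetical helpers:

    #include <stdio.h>

    #define PAGES  8
    #define FRAMES 4

    int frame_of[PAGES];                 /* -1 means "not in memory" */
    int next_victim = 0;                 /* trivial FIFO replacement */

    int access(int page) {
        if (frame_of[page] != -1) {      /* page hit */
            printf("hit:  page %d in frame %d\n", page, frame_of[page]);
            return frame_of[page];
        }
        /* page miss / page fault: evict the FIFO victim, load the page */
        for (int p = 0; p < PAGES; p++)
            if (frame_of[p] == next_victim) frame_of[p] = -1;
        frame_of[page] = next_victim;
        printf("FAULT: page %d loaded into frame %d\n", page, next_victim);
        next_victim = (next_victim + 1) % FRAMES;
        return frame_of[page];
    }

    int main(void) {
        for (int p = 0; p < PAGES; p++) frame_of[p] = -1;
        int trace[] = {1, 2, 1, 3, 4, 5, 1};
        for (int i = 0; i < 7; i++) access(trace[i]);
        return 0;
    }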