OS Unit4 Goswami
Uploaded by Prayagi Ankit

UNIT-4 Memory Management

Memory management is the functionality of an operating system that handles or manages primary memory. Memory management keeps track of each memory location, whether it is allocated to some process or free. It decides how much memory is to be allocated to each process and which process will get memory at what time. It tracks whenever some memory gets freed or unallocated and updates the status accordingly.

Memory management provides protection by using two registers, a base register and a limit register. The base register holds the smallest legal physical memory address and the limit register specifies the size of the range. For example, if the base register holds 300000 and the limit register is 120000, then the program can legally access all addresses from 300000 through 419999.
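As a quick sketch (not part of the original notes), the base/limit protection check can be written out directly; the values below take the limit as 120000 so the legal range matches 300000 through 419999.

```python
def is_legal(address, base=300000, limit=120000):
    """Hardware-style check: legal iff base <= address < base + limit."""
    return base <= address < base + limit

print(is_legal(300000))  # True: first legal address
print(is_legal(419999))  # True: last legal address
print(is_legal(420000))  # False: one past the range
```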

Binding of instructions and data to memory addresses can be done in the following ways:

1 UNIT-4 (MEMORY MANAGEMENT) BY Sanjay Goswmami, UCER


 Compile time -- When it is known at compile time where the process will reside, compile-time binding is used to generate the absolute code.
 Load time -- When it is not known at compile time where the process will reside in memory, the compiler generates relocatable code.
 Execution time -- If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time.

Dynamic Loading

In dynamic loading, a routine of a program is not loaded until it is called by the program. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and executed; other routines, methods, or modules are loaded on request. Dynamic loading makes better use of memory space, and unused routines are never loaded.
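A loose analogy (an illustration, not OS-level code): Python's importlib loads a module only when it is first requested, much as dynamic loading defers a routine until it is called. The choice of the fractions module here is arbitrary.

```python
import importlib
import sys

module_name = "fractions"                    # stands in for an on-demand routine
print(module_name in sys.modules)            # typically False: not yet loaded
mod = importlib.import_module(module_name)   # "loaded" only when requested
print(mod.Fraction(1, 2) + mod.Fraction(1, 3))
```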

Dynamic Linking

Linking is the process of collecting and combining various modules of code and data into an executable file that can be loaded into memory and executed. The operating system can link system-level libraries to a program. When the libraries are combined at load time, the linking is called static linking; when the linking is done at the time of execution, it is called dynamic linking.

In static linking, libraries are linked at compile time, so the program code size becomes bigger, whereas in dynamic linking libraries are linked at execution time, so the program code size remains smaller.
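For instance, on a POSIX system the C math library can be linked into a running Python process at execution time via ctypes. The library name resolution ("m" resolving to, e.g., libm.so.6 on Linux) is platform-specific, so treat this as a sketch rather than portable code.

```python
import ctypes
import ctypes.util

path = ctypes.util.find_library("m")  # resolve the library name at run time
libm = ctypes.CDLL(path)              # the dynamic link happens here, at execution time

libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
print(libm.cos(0.0))  # 1.0
```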

Logical versus Physical Address Space

An address generated by the CPU is a logical address, whereas an address actually available on the memory unit is a physical address. A logical address is also known as a virtual address.

Virtual and physical addresses are the same in compile-time and load-time
address-binding schemes. Virtual and physical addresses differ in
execution-time address-binding scheme.



The set of all logical addresses generated by a program is referred to as a
logical address space. The set of all physical addresses corresponding to
these logical addresses is referred to as a physical address space.

The run-time mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device. The MMU uses the following mechanism to convert a virtual address to a physical address.

 The value in the base (relocation) register is added to every address generated by a user process, which is treated as an offset at the time it is sent to memory. For example, if the base register value is 10000, then an attempt by the user to access location 100 will be dynamically relocated to location 10100.
 The user program deals with virtual addresses; it never sees the real physical addresses.
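The relocation step above can be sketched as a toy MMU; the limit value here is an assumed illustration, not from the notes.

```python
def mmu_translate(virtual_addr, base=10000, limit=64000):
    """Relocate a virtual address; trap if it falls outside the process's range."""
    if not 0 <= virtual_addr < limit:
        raise MemoryError("trap: illegal address")
    return base + virtual_addr  # dynamic relocation by the base register

print(mmu_translate(100))  # 10100, matching the example above
```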

Swapping

Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution.

The backing store is usually a hard disk drive or other secondary storage that is fast in access and large enough to accommodate copies of all memory images for all users. It must be capable of providing direct access to these memory images.

The major time-consuming part of swapping is transfer time. Total transfer time is directly proportional to the amount of memory swapped. Let us assume that the user process is of size 100KB and the backing store is a standard hard disk with a transfer rate of 1MB per second. The actual transfer of the 100KB process to or from memory will take

100KB / 1000KB per second = 1/10 second = 100 milliseconds
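The arithmetic above is just size divided by rate:

```python
process_size_kb = 100
transfer_rate_kb_per_s = 1000  # 1 MB/s expressed in KB/s

transfer_s = process_size_kb / transfer_rate_kb_per_s
print(transfer_s * 1000, "milliseconds")  # 100.0 milliseconds
```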



Memory Allocation

Main memory usually has two partitions:

 Low memory -- The operating system resides in this memory.
 High memory -- User processes are held in high memory.

The operating system uses the following memory allocation mechanisms.

1. Single-partition allocation: In this type of allocation, the relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data. The relocation register contains the value of the smallest physical address, whereas the limit register contains the range of logical addresses. Each logical address must be less than the limit register.

2. Multiple-partition allocation: In this type of allocation, main memory is divided into a number of fixed-sized partitions, where each partition should contain only one process. When a partition is free, a process is selected from the input queue and is loaded into the free partition. When the process terminates, the partition becomes available for another process.

Classification of Memory Partition

The main memory is a combination of two main portions: one reserved for the operating system (kernel) and the other for user programs. The second portion, i.e. user space, is partitioned to implement contiguous or non-contiguous memory allocation by dividing it into fixed-size or variable-size partitions for multiprogramming.
1. Uniprogramming – no partition of user space (single partition)
2. Multiprogramming
a. Contiguous
i. Fixed-size partitioning
ii. Variable or dynamic partitioning
b. Non-contiguous
i. Paging (fixed size)
ii. Segmentation (variable size)

1. Contiguous Memory Allocation- Contiguous memory allocation is a method in which a single contiguous section of memory is allocated to a process or file needing it. All the memory allocated to a process resides in one place, which means that the free/unused memory partitions are not distributed randomly.

2. Non-Contiguous Memory Allocation- Non-contiguous memory allocation is a method that dynamically allocates memory space present at different locations to a process as per its requirements. As the available memory space is in a distributed pattern, the free memory space is scattered here and there. This technique helps to reduce the wastage of memory caused by internal and external fragmentation.

Contiguous Memory Allocation

Fixed (or static) Partitioning in Operating System

This is the oldest and simplest technique that allows more than one process to be loaded into main memory. In this partitioning method the number of (non-overlapping) partitions in RAM is fixed, but the partitions may or may not be of the same size. This method provides contiguous allocation. The partitions are made before execution or during system configuration.

As illustrated in the given figure, let the fixed-size partitions be 4MB, 8MB, 8MB, and 16MB. The first process consumes only 1MB of the 4MB partition, so the internal fragmentation in the first block is (4-1) = 3MB. The sum of internal fragmentation in every block = (4-1)+(8-7)+(8-7)+(16-14) = 3+1+1+2 = 7MB. Suppose a process P5 of size 7MB arrives. This process cannot be accommodated in spite of the available free space, because spanning partitions is not allowed in contiguous allocation. Hence, the 7MB becomes part of external fragmentation.

6 UNIT-4 (MEMORY MANAGEMENT) BY Sanjay Goswmami, UCER


Figure: Fixed-size partitions
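The internal-fragmentation sum from the figure can be reproduced directly; the partition and process sizes below are taken from the worked numbers above.

```python
partitions = [4, 8, 8, 16]  # MB, fixed partition sizes from the figure
processes = [1, 7, 7, 14]   # MB, size of the process placed in each partition

internal = sum(part - proc for part, proc in zip(partitions, processes))
print("Internal fragmentation:", internal, "MB")  # (4-1)+(8-7)+(8-7)+(16-14) = 7 MB
```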

Advantages

 Easy to implement- The algorithms needed to implement fixed partitioning are easy to implement.
 Little OS overhead- Fixed partitioning requires little extra or indirect computational power.

Disadvantages

 Internal fragmentation- Main memory use is inefficient. Any program, no matter how small, occupies an entire partition. This causes internal fragmentation.
 External fragmentation- The total unused space of various partitions cannot be used to load a process, even though the space is available, because it is not contiguous.
 Limitation on process size- A process larger than the largest partition cannot be loaded.
 Limitation on degree of multiprogramming- Partitions in main memory are made before execution or during system configuration. The number of processes cannot exceed the number of partitions in fixed partitioning.

Variable (or dynamic) Partitioning in Operating System

Variable partitioning is used to overcome the problems faced by fixed partitioning. In contrast with fixed partitioning, partitions are not made before execution or during system configuration. Initially RAM is empty, and partitions are made at run time according to each process's need. The size of a partition equals the size of the incoming process. Because the partition size varies according to the need of the process, internal fragmentation is avoided and RAM is used efficiently. The number of partitions in main memory is not fixed and depends on the number of incoming processes and main memory's size.

Advantages–
 No internal fragmentation- In variable partitioning, space in main memory is allocated strictly according to the need of the process.
 No restriction on degree of multiprogramming- More processes can be accommodated due to the absence of internal fragmentation.
 No limitation on process size- The process size is not restricted, since the partition size is decided according to the process size.

Disadvantages–
 Difficult implementation- Implementing variable partitioning is difficult, as it involves allocating memory at run time rather than during system configuration.
 External fragmentation- There will be external fragmentation. For example, suppose processes P1 (2MB) and P3 (1MB) complete their execution, leaving two holes of 2MB and 1MB. Now suppose a process P5 of size 3MB arrives. The empty space cannot be allocated to it, because no spanning is allowed in contiguous allocation: a process must be contiguously present in main memory to get executed. Hence P5 cannot be accommodated in spite of the required total space being available, and this results in external fragmentation.



Fragmentation

As processes are loaded and removed from memory, the free memory space is broken into little pieces. It sometimes happens that processes cannot be allocated to memory blocks because the blocks are too small, and the blocks remain unused. This problem is known as fragmentation. Fragmentation is of two types:

1. External fragmentation: Total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
2. Internal fragmentation: The memory block assigned to a process is bigger than requested. Some portion of memory is left unused, as it cannot be used by another process.

Three programs named A, B, C have been loaded into the first three partitions while the 4th partition is still free. Program A matches the size of its partition, so there is no wastage in that partition, but Program B and Program C are smaller than their partition size, so in partition 2 and partition 3 there is remaining free space. This wastage of free space is called internal fragmentation. The above example uses equal-sized fixed partitions, but this can even happen where partitions of various fixed sizes are available. Usually the memory or disk space is divided into blocks whose sizes are powers of 2, such as 2, 4, 8, or 16 bytes.



Consider the figure above, where memory allocation is done dynamically. In dynamic memory allocation, the allocator allocates only the exact size needed by each program. First, memory is completely free. Then programs A, B, C, D and E of different sizes are loaded one after the other and placed in memory contiguously in that order. Later, Program A and Program C close and are unloaded from memory. Now there are three free areas in memory, but they are not adjacent. A large program F is to be loaded, but no single free block is big enough for it. The sum of all the free spaces is definitely enough for Program F, but due to the lack of adjacency that space is unusable for it. This is called external fragmentation.

External fragmentation can be reduced by compaction: shuffling memory contents to place all free memory together in one large block. To make compaction feasible, relocation should be dynamic.

Placement Algorithms

First fit - The first fit algorithm scans the list of holes and, as soon as it finds the first hole big enough to store the process, stops scanning and loads the process into that hole. This produces two partitions: one holds the process, and the other remains a hole.

Advantage
Fastest algorithm, because it searches as little as possible.



Disadvantage
The remaining unused memory areas left after allocation become wasted if they are too small. Thus a request for a larger memory block may not be satisfiable.

Best fit - The best fit algorithm tries to find the smallest hole in the list that can accommodate the size requirement of the process.

Advantage
Memory utilization is much better than first fit, as it searches for the smallest free partition available.
Disadvantage
Although best fit minimizes wasted space, it consumes a lot of processor time searching for the block closest to the required size. Best fit may also perform worse than other algorithms in some cases.

Worst fit - The worst fit algorithm scans the entire list every time and tries
to find out the biggest hole in the list which can fulfill the requirement of the
process.

Advantage
It reduces the rate of production of small gaps.

Disadvantage
If a process requiring larger memory arrives at a later stage, it cannot be accommodated, as the largest hole has already been split and occupied.

Next fit – Next fit is a modified version of first fit. It begins as first fit to find
a free partition. When called next time it starts searching from where it left
off, not from the beginning.

Next fit doesn't scan the whole list from the beginning; it starts scanning from the node after the last allocation. The idea behind next fit is that, since the list has already been scanned once, the probability of finding a hole is larger in the remaining part of the list.
Experiments have shown that next fit is not better than first fit, so it is not used in most cases these days.

Example: Given memory partitions of 100k, 500k, 200k, 300k, and 600k
(in order). How would each of first fit, best fit, worst fit algorithms place
processes of 212k, 417k, 112k, and 426k (in order)? Which algorithm
makes the most efficient use of memory?
First Fit
212k is put in the 500k partition.
417k is put in the 600k partition.
112k is put in the 288k partition (new partition: 500k – 212k = 288k).
426k must wait.
Best Fit
212k is put in the 300k partition.
417k is put in the 500k partition.
112k is put in the 200k partition.
426k is put in the 600k partition.
Worst Fit
212k is put in the 600k partition.
417k is put in the 500k partition.
112k is put in the 388k partition (new partition: 600k – 212k = 388k).
426k must wait.
In this example, best fit utilizes space most efficiently, so it is the best choice here.
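A small simulation reproduces the placements above. The function below is a sketch (hole bookkeeping is simplified to remaining sizes); for each request it returns the original partition the request landed in, or None if it must wait.

```python
def allocate(blocks, requests, strategy):
    """Contiguous allocation: each request carves space out of one free hole."""
    holes = [[size, idx] for idx, size in enumerate(blocks)]  # [remaining, block id]
    result = []
    for req in requests:
        fits = [h for h in holes if h[0] >= req]
        if not fits:
            result.append(None)                    # request must wait
            continue
        if strategy == "first":
            hole = fits[0]                         # first hole big enough
        elif strategy == "best":
            hole = min(fits, key=lambda h: h[0])   # smallest hole big enough
        else:                                      # "worst"
            hole = max(fits, key=lambda h: h[0])   # biggest hole
        result.append(blocks[hole[1]])             # report original partition size
        hole[0] -= req                             # shrink the chosen hole
    return result

blocks = [100, 500, 200, 300, 600]
reqs = [212, 417, 112, 426]
print(allocate(blocks, reqs, "first"))  # [500, 600, 500, None]
print(allocate(blocks, reqs, "best"))   # [300, 500, 200, 600]
print(allocate(blocks, reqs, "worst"))  # [600, 500, 600, None]
```

Note that under first fit the 112k request lands in the 288k remainder of the 500k partition, and under worst fit in the 388k remainder of the 600k partition, matching the traces above.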

Example - Consider requests from processes in the given order 300K, 25K, 125K, and 50K. Let there be two blocks of memory available: one of size 150K followed by one of size 350K.
Which of the following partition allocation schemes can satisfy the above requests?
A) Best fit but not first fit.
B) First fit but not best fit.
C) Both first fit and best fit.
D) Neither first fit nor best fit.
Solution:
Best fit:
300K is allocated from the block of size 350K; 50K is left in the block.
25K is allocated from the remaining 50K block; 25K is left in the block.
125K is allocated from the 150K block; 25K is left in this block also.
50K can't be allocated, even though 25K + 25K space is available.
First fit:
The 300K request is allocated from the 350K block; 50K is left out.
25K is allocated from the 150K block; 125K is left out.
Then 125K and 50K are allocated to the remaining left-out partitions.
So first fit can handle all the requests, and option B is the correct choice.

Non-Contiguous Memory Allocation

Paging

Paging is a memory management scheme that permits the physical memory space of a process to be non-contiguous. External fragmentation is avoided by using the paging technique. In paging, physical memory is broken into blocks of the same size called frames or page frames, and logical memory space is broken into blocks of the same size called pages. When a process is to be executed, its pages are loaded into any available memory frames.

The size of a page is typically a power of 2. Choosing a power of 2 as the page size makes the translation of a logical address into a page number and page offset particularly easy. If the size of the logical address space is 2^m and the page size is 2^n (bytes or words), then the high-order (m - n) bits of the logical address designate the page number and the remaining low-order n bits the page offset.

The logical address space of a process can be non-contiguous, and a process is allocated physical memory wherever free memory frames are available. The operating system keeps track of all free frames; it needs n free frames to run a program of size n pages.



Address generated by CPU is divided into

 Page number (p) -- page number is used as an index into a page


table which contains base address of each page in physical memory.
 Page offset (d) -- page offset is combined with base address to
define the physical memory address.
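The split into (p, d) and the table lookup can be sketched as follows; the page-table contents here are made-up values for illustration.

```python
PAGE_SIZE = 1024                   # 2^10 bytes, so the offset is 10 bits
page_table = {0: 5, 1: 2, 2: 7}    # page number -> frame number (hypothetical)

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)     # high bits: page number, low bits: offset
    return page_table[p] * PAGE_SIZE + d  # frame base address + offset

print(translate(1030))  # page 1, offset 6 -> frame 2 -> 2*1024 + 6 = 2054
```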

The following figure shows the paging architecture.



Example of Paging



If the page fault rate is p (as a fraction), the time taken to get a page from secondary memory and restart the access is S (the page fault service time), and the memory access time is ma, then the effective access time is:

EAT = p × S + (1 - p) × ma

Let memory access time = 200 nanoseconds and average page fault service time = 8 milliseconds. Then

EAT = (1 - p) × 200 + p × (8 milliseconds)
    = (1 - p) × 200 + p × 8,000,000
    = 200 + p × 7,999,800 nanoseconds

Thus the EAT is directly proportional to the page fault rate.
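The formula can be checked numerically (times in nanoseconds, values from the example above):

```python
def eat_ns(p, ma=200, service=8_000_000):
    """Effective access time for page fault rate p (a fraction)."""
    return (1 - p) * ma + p * service

print(eat_ns(0))                # no faults: just the memory access time
print(round(eat_ns(0.001), 1))  # even a 0.1% fault rate dominates the EAT
```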

Segmentation

Segmentation is a technique to break memory into logical pieces, where each piece represents a group of related information: for example, a code segment and data segment for each process, a data segment for the operating system, and so on. Segmentation can be implemented with or without paging. Unlike pages, segments have varying sizes, which eliminates internal fragmentation; external fragmentation still exists, but to a lesser extent.



Address generated by CPU is divided into

 Segment number (s) -- the segment number is used as an index into a segment table, which contains the base address of each segment in physical memory and the limit of the segment.
 Segment offset (o) -- the segment offset is first checked against the limit and then combined with the base address to define the physical memory address.

Virtual memory

Virtual memory provides an illusion to users that they have a large main memory available, instead of the actually small main memory. It is a technique that allows the execution of processes that are not completely in memory. The main advantage of this scheme is that programs can be larger than physical memory. Virtual memory is the separation of user logical memory from physical memory.

This separation allows an extremely large virtual memory to be provided for


programmers when only a smaller physical memory is available. Virtual
memory is commonly implemented by demand paging. It can also be
implemented in a segmentation system. Demand segmentation can also be
used to provide virtual memory.



Demand Paging

A demand paging system is quite similar to a paging system with swapping. When we want to execute a process, we swap it into memory; but rather than swapping the entire process in, only those pages that will be needed are brought into memory. This avoids reading in pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.



 Hardware support is required to distinguish between the pages that are in memory and those that are on disk, using the valid-invalid bit scheme. Whether a page is valid or invalid can be checked by inspecting the bit. In demand paging, only the pages that are currently required are brought into main memory.
 Assume that a process has 5 pages: A, B, C, D, E, and that A and B are in memory. With the help of the valid-invalid bit, the system can know, when required, that pages C, D and E are not in memory.
 A 1 in the valid-invalid bit signifies that the page is in memory, i.e. valid, and 0 signifies that the page is either invalid or hasn't been brought into memory just yet.
 Access to a page marked invalid causes a page-fault trap. This trap is the result of the operating system's failure to bring the desired page into memory. A page fault is handled as follows.



Step 1: Check an internal table for this process to determine whether the reference was a valid or an invalid memory access.

Step 2: If the reference was invalid, terminate the process. If it was valid but the page has not yet been brought in, page it in.

Step 3: Find a free frame.

Step 4: Schedule a disk operation to read the desired page into the newly allocated frame.

Step 5: When the disk read is complete, modify the internal table kept with the process and the page table to indicate that the page is now in memory.

Step 6: Restart the instruction that was interrupted by the illegal address trap. The process can now access the page as though it had always been in memory.

Advantages

 Large virtual memory.


 More efficient use of memory.
 Unconstrained multiprogramming. There is no limit on degree of
multiprogramming.

Disadvantages

 The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paged management techniques.
Thrashing

In a virtual memory system, the problem of many page faults occurring in a short time, called thrashing, can drastically cut the performance of a system. Programs that frequently access many widely separated locations in memory are more likely to cause thrashing. To resolve thrashing due to excessive paging, a user can do any of the following:

 Run fewer programs simultaneously.
 Try changing how a large program works, to maximize the ability of the operating system to guess which pages won't be needed.
 Increase the amount of RAM in the computer.
 Replace memory-heavy programs with equivalents that use less memory.
 Assign working priorities to programs, i.e. low, normal, high.

Example: A paging system has a physical memory of 2^24 bytes, the logical address space has 256 pages, and the page size is 2^10 bytes. How many bits are in a logical address?

Solution: Logical (virtual) address = p + d

p: 256 pages = 2^8, so p = 8 bits

d: page size 2^10 bytes, so d = 10 bits

Bits in the logical address = 8 + 10 = 18

Page Replacement Algorithm

Page replacement algorithms are the techniques by which the operating system decides which page in memory to swap out (replace) and write to disk when a new page must be swapped in. A page replacement algorithm looks at the limited information about page accesses provided by the hardware and tries to select which pages to replace so as to minimize the total number of page misses, while balancing this against the costs of primary storage and the processor time of the algorithm itself. There are many different page replacement algorithms.

Reference string: The string of memory references is called a reference string.

First In First Out (FIFO) algorithm

 Oldest page in main memory is the one which will be selected for
replacement.
 Easy to implement, keep a list, replace pages from the tail and add
new pages at the head.

Belady's anomaly is the name given to the phenomenon where increasing


the number of page frames results in an increase in the number of
page faults for a given memory access pattern. This phenomenon is
commonly experienced when using the First in First Out (FIFO) page
replacement algorithm.

Optimal Page algorithm

 An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. Such an algorithm exists and has been called OPT or MIN.
 Replace the page that will not be used for the longest period of time; this requires knowing when each page will next be used.

Least Recently Used (LRU) algorithm

 Page which has not been used for the longest time in main memory is
the one which will be selected for replacement.
 Easy to implement, keep a list, replace pages by looking back into
time.



Example: Let the reference string be 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 and the number of frames be 3. Then

Page faults = 9 (FIFO)

(1 2 3) (4 2 3) (4 1 3)(4 1 2)(5 1 2)(5 3 2)(5 3 4)

Page faults = 10 (LRU)

(1 2 3)(4 2 3)(4 1 3)(4 1 2)(5 1 2)(3 1 2)(3 4 2)(3 4 5)

Page faults = 7 (optimal)

(1 2 3)(1 2 4)(1 2 5)(3 2 5)(3 4 5)

If No. of frames=4

Page faults= 10 (FIFO)

Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

(1 2 3 4)(5 2 3 4)(5 1 3 4)(5 1 2 4)(5 1 2 3)(4 1 2 3)(4 5 2 3)

Now it is belady’s anomaly example.

Example: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults using the optimal page replacement algorithm.



Example: On a system using simple segmentation compute the physical
address for each of the logical addresses 0,99; 2,78; 1,265; 3,222; 0,111,
according to the following segment table. If the address generates a
segment fault, indicate so.

Segment Base Length


0 330 124
1 876 211
2 111 99
3 498 302

Logical address: 0, 99

Segment no- 0 offset – 99

99 < length : 99 < 124

Physical address= base + offset (word no)

= 330 + 99 = 429

Logical address: 2,78

Segment no- 2 offset – 78

78 < length : 78 < 99

Physical address= base + offset (word no)

= 111 + 78 = 189

Logical address: 1, 265

Segment no- 1 offset – 265

265 > length (265 > 211), so no physical address is generated.

A segment fault is generated.
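The same segment-table walk can be scripted; the last two addresses (3,222 and 0,111), which the notes leave unworked, follow the same rule.

```python
segment_table = {0: (330, 124), 1: (876, 211), 2: (111, 99), 3: (498, 302)}

def seg_translate(segment, offset):
    base, length = segment_table[segment]
    if offset >= length:           # offset must be strictly less than the length
        return "segment fault"
    return base + offset

for s, o in [(0, 99), (2, 78), (1, 265), (3, 222), (0, 111)]:
    print((s, o), "->", seg_translate(s, o))
```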



Segmented Paging

Pure segmentation is not very popular and is not used in many operating systems. However, segmentation can be combined with paging to get the best features of both techniques.

In segmented paging, the address space is divided into variable-size segments, which are further divided into fixed-size pages.

1. Pages are smaller than segments.


2. Each Segment has a page table which means every program has
multiple page tables.
3. The logical address is represented as Segment Number (base
address), Page number and page offset.

Segment Number- It points to the appropriate Segment Number.

Page Number- It Points to the exact page within the segment

Page Offset- Used as an offset within the page frame

Virtual address format:

Segment No. | Page No. | Offset / Word No.

Each Page table contains the various information about every page of the
segment. The Segment Table contains the information about every
segment. Each segment table entry points to a page table entry and every
page table entry is mapped to one of the page within a segment.



Example: A virtual address space has 8 segments, and each segment can be up to 2^29 bytes long. Each segment is paged with 256-byte pages.

How many bits are in:

Segment no.? 3 (8 = 2^3)

Page no.? 21 (2^29 / 2^8 = 2^21)

Offset? 8 (256 = 2^8)

The virtual address? 32 (3 + 21 + 8)



Translation / Mapping of logical address to physical address

The CPU generates a logical address that is divided into two parts: segment number and segment offset. The segment offset must be less than the segment limit. The offset is further divided into a page number and a page offset. To locate the entry for that page, the page number is added to the page table base address.

The frame number found there, combined with the page offset, addresses main memory to get the desired word in the page of the given segment of the process.



Advantages

1. It reduces memory usage.


2. Page table size is limited by the segment size.
3. Segment table has only one entry corresponding to one actual
segment.
4. External Fragmentation is not there.
5. It simplifies memory allocation.

Disadvantages

1. Internal fragmentation will be there.
2. The complexity level is much higher compared to plain paging.
3. Page tables need to be stored contiguously in memory.

Various Page Table Structures

Hierarchical Paging or Multilevel paging

 There might be a case where the page table is too big to fit in a contiguous space, so we may have a hierarchy with several levels.
 In this type of paging, the logical address space is broken up into multiple page tables.
 Hierarchical paging is one of the simplest techniques; for this purpose, a two-level or three-level page table can be used.

Two Level Page Table

Consider a system having a 32-bit logical address space and a page size of 1 KB. The logical address is divided into:

 A page number consisting of 22 bits.
 A page offset consisting of 10 bits.

Because the page table itself is paged, the page number is further divided into:

 A 12-bit page number (P1).
 A 10-bit page offset (P2).

Thus the logical address is as follows: P1 (12 bits) | P2 (10 bits) | offset (10 bits).

In the above diagram, P1 is an index into the outer page table, and P2 indicates the displacement within the page of the inner page table.

Because address translation works from the outer page table inward, this scheme is known as a forward-mapped page table.

The figure below shows the address translation scheme for a two-level page table.
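The field extraction for this 32-bit / 1 KB-page layout can be sketched with shifts and masks:

```python
def split_two_level(addr):
    """32-bit logical address -> (p1, p2, d) with 12/10/10-bit fields."""
    d = addr & 0x3FF            # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF   # next 10 bits: index into the inner page table
    p1 = addr >> 20             # top 12 bits: index into the outer page table
    return p1, p2, d

addr = (5 << 20) | (7 << 10) | 9   # build an address from known field values
print(split_two_level(addr))       # (5, 7, 9)
```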

Three Level Page Table

For a system with a 64-bit logical address space, a two-level paging scheme is not appropriate. Suppose the page size in this case is 4KB. If we use the two-level scheme, the addresses will look like this:


Thus, in order to avoid such a large table, the solution is to divide the outer page table as well, which results in a three-level page table:

Hashed Page Tables

This approach is used to handle address spaces larger than 32 bits.

 The virtual page number is hashed into a page table.
 The page table contains a chain of elements that hash to the same location.

Each element mainly consists of :

1. The virtual page number


2. The value of the mapped page frame.
3. A pointer to the next element in the linked list.

The figure below shows the address translation scheme of the hashed page table.

The virtual page numbers in the chain are compared while searching for a match; if a match is found, the corresponding physical frame is extracted.

For 64-bit address spaces, a variation of this scheme commonly uses clustered page tables.

Clustered Page Tables

 These are similar to hashed page tables, but each entry refers to several pages (e.g., 16) rather than one.
 They are mainly used for sparse address spaces, where memory references are non-contiguous and scattered.

Inverted Page Tables

The inverted page table combines a page table and a frame table into a single data structure.

 There is one entry for each real page (frame) of memory.
 Each entry consists of the virtual address of the page stored in that real memory location, along with information about the process that owns the page.
 This technique decreases the memory needed to store the page tables, but it increases the time needed to search the table whenever a page reference occurs.

The figure below shows the address translation scheme of the inverted page table.

Here we need to keep track of the process id in each entry, because many processes may have the same logical addresses. Also, many entries can map to the same index in the table after going through the hash function, so chaining is used to handle this.
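A minimal sketch of the inverted table: the list index is the frame number, and lookup is a linear search over (process id, virtual page) pairs. The process ids and contents below are hypothetical.

```python
# index = frame number; entry = (process id, virtual page number)
inverted_table = [("p1", 0), ("p2", 3), ("p1", 2), ("p2", 1)]

def lookup(pid, page):
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page):   # linear search over all frames
            return frame
    raise KeyError("page fault")   # no frame holds this page

print(lookup("p1", 2))  # frame 2
print(lookup("p2", 3))  # frame 1
```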
