
CS4TH3 Operating Systems

Memory Management

Basic Hardware
Program must be
→ brought (from disk) into memory and placed within a process for it to be run.
• Main memory and registers are the only storage the CPU can access directly.
• Registers are accessible within one CPU clock cycle.
• Completing a memory access can take many cycles of the CPU clock.
• A cache sits between main memory and the CPU registers to ensure fast memory access.
• Protection of memory is required to ensure correct operation.
• The base register holds the smallest legal physical address and the limit register specifies the size of the range of
addresses. A pair of base and limit registers defines the logical (virtual) address space, as shown in the figures below:

Fig: A base and a limit-register define a logical-address space

Fig: Hardware address protection with base and limit-registers

Protection of memory space is accomplished by having the CPU hardware compare every address generated
in user mode with the registers. The base and limit registers can be loaded only by the operating system, which
uses a special privileged instruction that can be executed only in kernel mode. Only the operating system can
load the base and limit registers.
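
As a rough illustration (not part of the original text), the hardware check in the figures above can be sketched in C; the register values below are example numbers only.

```c
#include <stdint.h>
#include <stdlib.h>

/* Example values; in a real system these registers are loaded by the OS in kernel mode. */
static const uint32_t base_register  = 300040;   /* smallest legal physical address */
static const uint32_t limit_register = 120900;   /* size of the legal address range  */

/* Check performed by the hardware on every user-mode memory reference:
   an address is legal only if base <= address < base + limit. */
void check_address(uint32_t address)
{
    if (address < base_register || address >= base_register + limit_register)
        abort();    /* trap to the operating system: addressing error */
}
```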

Address Binding
Address binding of instructions and data to memory addresses can happen at 3 different stages (Figure 3.3). It is
a mapping from one address space to another.

1) Compile Time
• If the memory location is known a priori, absolute code can be generated.
• The code must be recompiled if the starting location changes.

2) Load Time
• Relocatable code must be generated if the memory location is not known at compile time. In this case, final
binding is delayed until load time.

3) Execution Time
• Binding is delayed until run time if the process can be moved during its execution from one memory
segment to another.

Fig: Multistep processing of a user-program


Logical versus Physical Address Space


An address uniquely identifies a location in memory. There are two types of addresses: logical addresses and
physical addresses.

Logical Address

• It is generated by the CPU while a program is running.


• The logical address is a virtual address and can be viewed by the user.
• The user can’t view the physical address directly. The logical address is used as a reference to access
the physical address.

Physical Address

• It identifies the physical location of the required data in memory.

• The user never deals with the physical address directly but can access it through the corresponding logical
address.

The user program generates the logical address and thinks that the program is running at this logical address,
but the program needs physical memory for its execution; therefore, the logical address must be mapped to the
physical address by the MMU before it is used.

The MMU is the hardware device that maps virtual addresses to physical addresses (Figure 3.4).

Fig: Dynamic relocation using a relocation-register
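
A minimal C sketch of the dynamic relocation shown in the figure; the relocation-register value 14000 is only an example.

```c
#include <stdint.h>

static const uint32_t relocation_register = 14000;   /* example value set by the OS */

/* The MMU adds the value of the relocation register to every logical
   address generated by a user process before it reaches memory. */
uint32_t mmu_translate(uint32_t logical_address)
{
    return logical_address + relocation_register;
}

/* e.g. logical address 346 is mapped to physical address 14346 */
```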


Comparison Chart:

• Basic: A logical address is generated by the CPU; a physical address is a location in a memory unit.
• Address Space: The Logical Address Space is the set of all logical addresses generated by the CPU in
  reference to a program; the Physical Address Space is the set of all physical addresses mapped to the
  corresponding logical addresses.
• Visibility: The user can view the logical address of a program; the user can never view the physical
  address of a program.
• Generation: The logical address is generated by the CPU; the physical address is computed by the MMU.
• Access: The user can use the logical address to access the physical address; the user can access the
  physical address only indirectly, never directly.

Dynamic Loading and Dynamic Linking

Dynamic Loading

Programs are loaded into main memory for execution. Sometimes the complete program is loaded into
memory, but sometimes a certain part or routine of the program is loaded into main memory only when
it is called by the program; this mechanism is called Dynamic Loading.

The advantage is that a routine is loaded only when it is needed. This method is useful when large amounts of code
are needed to handle infrequently occurring cases, such as error routines.

It does not require special support from the OS. It is the responsibility of the users to design their programs to take
advantage of this method.
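
On POSIX systems, one common way to realize this kind of on-demand loading is the dlopen/dlsym interface; the library name and routine name below are hypothetical and used only for illustration.

```c
#include <dlfcn.h>   /* POSIX dynamic loading: dlopen, dlsym, dlclose (link with -ldl on Linux) */
#include <stdio.h>

int main(void)
{
    /* Load the module containing an infrequently used routine only when it is needed.
       "liberror.so" and "report_error" are hypothetical names. */
    void *handle = dlopen("liberror.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "load failed: %s\n", dlerror());
        return 1;
    }

    void (*report_error)(void) = (void (*)(void))dlsym(handle, "report_error");
    if (report_error)
        report_error();    /* the routine is now in memory and callable */

    dlclose(handle);       /* release it when no longer needed */
    return 0;
}
```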


Dynamic linking

Establishing the connections between all the modules (or all the functions) of a program so that program
execution can proceed is called linking.

Linking takes the object code generated by the assembler and combines it to generate the executable
module.

Differences between Linking and Loading

1. The key difference between linking and loading is that linking generates the executable file of a
program, whereas loading brings the executable file produced by linking into main memory for
execution.

2. Linking takes the object modules of a program generated by the assembler, whereas loading
takes the executable module generated by linking.

3. Linking combines all the object modules of a program to generate the executable module; it also links the
library functions used in the object modules to the built-in libraries of the high-level programming language. On the
other hand, loading allocates space for the executable module in main memory.

Shared Libraries

• Shared libraries are libraries that are linked dynamically.

• Shared libraries allow common OS and library code to be bundled into a single module and used by any application on
the system without loading multiple copies into memory; all applications on the system can use it without
consuming additional memory.
• A shared library is a file that is intended to be shared by executable files and by other shared object files.
• They are loaded into memory at load time or run time, rather than being copied by the linker during the
creation of the executable file.
• At run time, a shared library is simply a file opened (mapped) by the process.

Swapping

Swapping is one of several methods of memory management. In swapping, an idle or blocked process
in main memory is swapped out to the backing store (disk), and a process on the disk that is ready for execution
is swapped into main memory for execution.

In simple terms, swapping is the exchange of data between the hard disk and RAM.

Working:

• A process must be in main memory before it starts execution. So, a process that is ready for
execution is brought into main memory. Now, suppose a running process gets blocked.
• The memory manager temporarily swaps that blocked process out to the disk. This makes space
for another process in main memory.
• The memory manager then swaps the process that is ready for execution into main memory from the disk.

• The swapped-out process is also brought back into main memory when it again becomes ready for
execution.
• Ideally, the memory manager swaps the processes so fast that main memory always has processes
ready for execution.

Swapping also works together with priority-based pre-emptive scheduling.

Whenever a process with higher priority arrives, the memory manager swaps out the process with the lowest
priority to the disk and swaps the process with the highest priority into main memory for execution. When
the highest-priority process finishes, the lower-priority process is swapped back into memory and continues to
execute. This variant of swapping is termed roll-out, roll-in (or swap-out, swap-in).

Fig: Swapping of two processes using a disk as a backing-store

Swapping depends upon address-binding:


1) If binding is done at load time, then the process cannot easily be moved to a different location.
2) If binding is done at execution time, then a process can be swapped into a different memory space, because
the physical addresses are computed during execution time.

Note: Major part of swap-time is transfer-time; i.e. total transfer-time is directly proportional to the amount
of memory swapped.

Advantages of Swapping

• It offers a higher degree of multiprogramming.


• It allows dynamic relocation. For example, if address binding at execution time is used, then
processes can be swapped into different locations. With compile-time or load-time binding,
however, processes must be swapped back into the same location.
• It helps achieve better utilization of memory.
• It wastes little CPU time, so it can easily be applied to a priority-based
scheduling method to improve its performance.


Disadvantages of Swapping

• In the case of heavy swapping activity, if the computer system loses power, the user might lose all the
information related to the program.
• If the swapping algorithm is not good, the overall method can increase the number of page faults and
degrade overall processing performance.
• Inefficiency may arise when a resource or a variable is commonly used by the processes
that are participating in swapping.

Contiguous Memory Allocation


In contiguous memory allocation, both the operating system and the user processes must reside in main memory.
Main memory is divided into two portions: one portion for the operating system and the other for the user programs.

In contiguous memory allocation, all the available memory space remains together in one place; freely
available memory partitions are not scattered here and there across the whole memory space.

In non-contiguous memory allocation, the available free memory space is scattered here and there and
is not all in one place, so allocation is more time-consuming.

Memory Protection

Memory-protection means
→ protecting OS from user-process and
→ protecting user-processes from one another.
• Memory-protection is done using
Relocation-register: contains the value of the smallest physical-address.
Limit-register: contains the range of logical-addresses.

Memory Allocation

1) Fixed-sized Partitioning
• The memory is divided into fixed-sized partitions.
• Each partition may contain exactly one process.
• The degree of multiprogramming is bound by the number of partitions.
• When a partition is free, a process is
→ selected from the input queue and
→ loaded into the free partition.
• When the process terminates, the partition becomes available for another process.

Advantage: Management or bookkeeping is easy.

Disadvantage: Internal fragmentation

2) Variable-sized Partitioning

The OS keeps a table indicating


→ which parts of memory are available and
→ which parts are occupied.
A hole is a block of available memory.
Normally, memory contains a set of holes of various sizes.
Initially, all memory is
→ available for user-processes and
→ considered one large hole.
When a process arrives, it is allocated memory from a hole large enough to hold it.
If we find such a hole, we
→ allocate only as much memory as is needed and
→ keep the remaining memory available to satisfy future requests.

Advantage: There is no internal fragmentation.

Disadvantage: Management is more difficult, as memory becomes badly fragmented (external fragmentation) after some time.

Three strategies used to select a free hole from the set of available holes.

1) First Fit
• Allocate the first hole that is big enough.
• Searching can start either
→ at the beginning of the set of holes or
→ at the location where the previous first-fit search ended.
Advantage: Fastest algorithm because it searches as little as possible.

Disadvantage: The unused memory areas left after allocation are wasted if they are too small. Thus,
a later request for a larger amount of memory may not be satisfiable.

2) Best Fit

• Allocate the smallest hole that is big enough.


• We must search the entire list, unless the list is ordered by size.
• This strategy produces the smallest leftover hole.
Advantage: Memory utilization is much better than first fit, as it allocates the smallest free partition that is large
enough.

Disadvantage: It is slower and may even tend to fill up memory with tiny useless holes.


3) Worst Fit

• Allocate the largest hole.


• Again, we must search the entire list, unless it is sorted by size.
• This strategy produces the largest leftover hole.
Advantage: Reduces the rate of production of small gaps.

Disadvantage: If a process requiring larger memory arrives at a later stage then it cannot be accommodated as
the largest hole is already split and occupied.
First-fit and best fit are better than worst fit in terms of decreasing time and storage utilization.
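
A minimal sketch of the first-fit and best-fit searches in C; the hole list and the structure name are hypothetical.

```c
#include <stddef.h>

/* Hypothetical free-hole list used to illustrate the placement strategies. */
struct hole { size_t start; size_t size; };

/* First fit: return the index of the first hole that is big enough, or -1 if none. */
int first_fit(const struct hole *holes, int n, size_t request)
{
    for (int i = 0; i < n; i++)
        if (holes[i].size >= request)
            return i;
    return -1;
}

/* Best fit: return the index of the smallest hole that is big enough, or -1 if none. */
int best_fit(const struct hole *holes, int n, size_t request)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i].size >= request && (best == -1 || holes[i].size < holes[best].size))
            best = i;
    return best;
}

/* Worst fit is the same loop as best fit with '<' replaced by '>'. */
```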

Fragmentation
Fragmentation is a phenomenon in which storage space is used inefficiently, reducing capacity or performance,
and often both. Whenever a process is loaded into or removed from a physical memory block, it creates a small
hole in the memory space, which is called a fragment.

Both internal and external fragmentation affect the data-access speed of the system. There are two types of memory
fragmentation:

1) Internal fragmentation: When a process is assigned to a memory block that is larger than the memory the process
requested, free space is left inside the assigned memory block. The difference between the assigned
and requested memory is called internal fragmentation.
2) External fragmentation: As processes are loaded into and removed from memory, free spaces are created.
Free memory that exists only in small, scattered pieces and cannot be used contiguously is called external fragmentation.

INTERNAL FRAGMENTATION vs EXTERNAL FRAGMENTATION

1. In internal fragmentation, fixed-sized memory blocks are assigned to processes. In external fragmentation,
variable-sized memory blocks are assigned to processes.

2. Internal fragmentation happens when the memory block assigned to a process is bigger than the memory the
process needs. External fragmentation happens as processes are loaded into and removed from memory, leaving
scattered free holes.

3. The solution to internal fragmentation is best-fit allocation. The solutions to external fragmentation are
compaction, paging and segmentation.

4. Internal fragmentation occurs when memory is divided into fixed-sized partitions. External fragmentation occurs
when memory is divided into variable-sized partitions based on the sizes of processes.

5. The difference between the memory allocated and the space actually required is called internal fragmentation.
The unused spaces formed between non-contiguous memory fragments, which are too small to serve a new process,
are called external fragmentation.


The diagram above illustrates internal fragmentation: the difference between the memory
allocated and the space actually required is internal fragmentation.

External fragmentation diagram:

In the diagram above, we can see that there is enough total free space (55 KB) to run process-07 (which requires 50 KB), but
the free memory (fragments) is not contiguous. Here, we use compaction, paging or segmentation to use the free
space to run the process.

Paging
Paging is a memory management scheme. Paging allows a process to be stored in memory in a non-
contiguous manner. Storing a process in a non-contiguous manner solves the problem
of external fragmentation.

Characteristics:

• Paging is a fixed-size partitioning scheme.
• In paging, secondary memory and main memory are divided into equal fixed-size partitions.
• The fixed-size blocks into which logical (secondary) memory is divided are called pages.
• The fixed-size blocks into which main memory is divided are called frames.


• Each process is divided into parts, where the size of each part is the same as the page size.
• The size of the last part may be less than the page size.
• The pages of a process are stored in the frames of main memory depending upon their availability.
Example-

• Consider a process is divided into 4 pages P0, P1, P2 and P3.


• Depending upon the availability, these pages may be stored in the main memory frames in a non-
contiguous fashion as shown-

Translating Logical Address into Physical Address-

• The CPU always generates a logical address.
• A physical address is needed to access main memory.
The following steps are followed to translate a logical address into a physical address-


Step-01:
The CPU generates a logical address consisting of two parts-
1. Page Number
2. Page Offset

• Page Number specifies the specific page of the process from which the CPU wants to read the data.
• Page Offset specifies the specific word on that page that the CPU wants to read.
Step-02:
For the page number generated by the CPU,
• Page Table provides the corresponding frame number (base address of the frame) where that page is
stored in the main memory.

Step-03:

• The frame number combined with the page offset forms the required physical address.

• Frame number specifies the specific frame where the required page is stored.
• Page Offset specifies the specific word that has to be read from that page.
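
A minimal C sketch of these three steps; the page size and the page-table contents are hypothetical example values.

```c
#include <stdint.h>

#define PAGE_SIZE 4096u                               /* assumed page size: 4 KB */

/* Hypothetical page table of a process with 4 pages: page number -> frame number. */
static const uint32_t page_table[] = { 5, 2, 7, 1 };  /* pages P0..P3 */

/* Step 01: split the logical address; Step 02: look up the frame number;
   Step 03: combine frame number and offset into the physical address. */
uint32_t translate(uint32_t logical_address)
{
    uint32_t page_number  = logical_address / PAGE_SIZE;
    uint32_t page_offset  = logical_address % PAGE_SIZE;
    uint32_t frame_number = page_table[page_number];
    return frame_number * PAGE_SIZE + page_offset;
}
```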


Diagram-

The following diagram illustrates the above steps of translating logical address into physical address-

The advantages of paging are-


• It allows the parts of a single process to be stored in a non-contiguous fashion.
• It solves the problem of external fragmentation.
• The memory management algorithm is easy to use.
• Swapping is easy between equal-sized pages and page frames.

The disadvantages of paging are-

• The memory management algorithm is more complex.
• Page tables consume additional memory.
• Multi-level paging may lead to memory reference overhead.

Protection
• Memory-protection is achieved by protection-bits for each frame.
• The protection-bits are kept in the page-table.
• One protection-bit can define a page to be read-write or read-only.
• Every reference to memory goes through the page-table to find the correct frame-number.
• Firstly, the physical-address is computed. At the same time, the protection-bit is checked to verify that no
writes are being made to a read-only page.
• An attempt to write to a read-only page causes a hardware-trap to the OS (or memory-protection violation).

Valid Invalid Bit


• This bit is attached to each entry in the page-table.
1) Valid bit: The page is in the process’ logical-address space.
2) Invalid bit: The page is not in the process’ logical-address space.
• Illegal addresses are trapped by use of the valid-invalid bit.
• The OS sets this bit for each page to allow or disallow access to the page.

Hardware Support for Paging


A translation lookaside buffer (TLB) is a memory cache that stores recent translations of virtual
addresses to physical addresses for faster retrieval. It can be used to reduce the time taken to access the page
table again and again.

Note: In other words, the TLB is faster and smaller than main memory but cheaper and bigger
than a register.

Working:
When a logical address is generated by the CPU, its page number is presented to the TLB.
If the page number is found (TLB hit), its frame number is
→ immediately available and
→ used to access memory.
If the page number is not in the TLB (TLB miss), a memory reference to the page table must be made.
The obtained frame number can then be used to access memory (Figure 3.19).
In addition, we add the page number and frame number to the TLB, so that they will be
found quickly on the next reference.
• If the TLB is already full of entries, the OS must select one for replacement.
• The percentage of times that a particular page number is found in the TLB is called the hit ratio.
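
A minimal C sketch of the TLB lookup described above, assuming a small fully associative TLB; the entry count and structure names are hypothetical.

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 16                 /* assumed: small, fully associative TLB */

struct tlb_entry { bool valid; uint32_t page, frame; };
static struct tlb_entry tlb[TLB_ENTRIES];

/* On a hit the frame number is immediately available; on a miss the caller
   must make a memory reference to the page table and then refill the TLB. */
bool tlb_lookup(uint32_t page_number, uint32_t *frame_number)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].page == page_number) {
            *frame_number = tlb[i].frame;   /* TLB hit */
            return true;
        }
    }
    return false;                           /* TLB miss */
}
```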


Some TLBs have wired-down entries that cannot be removed.

• Some TLBs store an ASID (address-space identifier) in each entry of the TLB, which
→ uniquely identifies each process and
→ provides address-space protection for that process.

Advantage:
• Search operation is fast.
• TLB reduces the effective access time.
• Only one memory access is required when TLB hit occurs.
Disadvantage:
• TLB can hold the data of only one process at a time.
• When context switches occur frequently, the performance of TLB degrades due to low hit ratio.
• As it is a special hardware, it involves additional cost.

Structure of the Page Table


• Page table is a data structure.
• It maps the page number referenced by the CPU to the frame number where that page is stored.
Characteristics-
• Page table is stored in the main memory.
• Number of entries in a page table = Number of pages in which the process is divided.
• Page Table Base Register (PTBR) contains the base address of page table.
• Each process has its own independent page table.

Three types
1) Hierarchical Paging
2) Hashed Page-tables
3) Inverted Page-tables


Hierarchical Paging

• Problem: Most computers support a large logical-address space (2^32 to 2^64). In these systems, the page-
table itself becomes excessively large.
Solution: Divide the page-table into smaller pieces.

Two Level Paging Algorithm


• The page-table itself is also paged (Figure 3.22).
• This is also known as a forward-mapped page-table because address translation works from the outer page-
table inwards.
• For example (Figure 3.23):
Consider the system with a 32-bit logical-address space and a page-size of 4 KB.
A logical-address is divided into
→ 20-bit page-number and
→ 12-bit page-offset.
Since the page-table is paged, the page-number is further divided into
→ 10-bit page-number and
→ 10-bit page-offset.
Thus, a logical-address is as follows:

Figure: A two-level page-table scheme


Figure: Address translation for a two-level 32-bit paging architecture
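
A minimal C sketch of how the 32-bit logical address of the example is split (10-bit outer index p1, 10-bit inner index p2, 12-bit offset d).

```c
#include <stdint.h>

/* Split a 32-bit logical address for the two-level scheme shown above. */
void split_address(uint32_t logical, uint32_t *p1, uint32_t *p2, uint32_t *d)
{
    *p1 = (logical >> 22) & 0x3FF;   /* index into the outer page table */
    *p2 = (logical >> 12) & 0x3FF;   /* index into the inner page table */
    *d  =  logical        & 0xFFF;   /* offset within the 4 KB page     */
}
```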

Hashed Page Tables


This approach is used for handling address spaces larger than 32 bits.
• The hash-value is the virtual page-number.
• Each entry in the hash-table contains a linked-list of elements that hash to the same location (to
handle collisions).
• Each element consists of 3 fields:
1) Virtual page-number
2) Value of the mapped page-frame and
3) Pointer to the next element in the linked-list.
• The algorithm works as follows (Figure 3.24):
1) The virtual page-number is hashed into the hash-table.
2) The virtual page-number is compared with the first element in the linked-list.
3) If there is a match, the corresponding page-frame (field 2) is used to form the desired
physical-address.
4) If there is no match, subsequent entries in the linked-list are searched for a matching virtual
page-number.
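
A minimal C sketch of this hashed lookup; the number of buckets and the structure names are hypothetical.

```c
#include <stdint.h>
#include <stdbool.h>

#define HASH_BUCKETS 1024              /* assumed hash-table size */

/* Each element: virtual page number, mapped page frame, pointer to the next element. */
struct hpt_entry {
    uint32_t vpn;
    uint32_t frame;
    struct hpt_entry *next;
};
static struct hpt_entry *hash_table[HASH_BUCKETS];

/* Hash the virtual page number into the table, then walk the linked list for a match. */
bool hpt_lookup(uint32_t vpn, uint32_t *frame)
{
    for (struct hpt_entry *e = hash_table[vpn % HASH_BUCKETS]; e; e = e->next) {
        if (e->vpn == vpn) {           /* match: use the mapped page frame */
            *frame = e->frame;
            return true;
        }
    }
    return false;                      /* no match: the page is not mapped */
}
```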

Clustered Page Tables


• These are similar to hashed page-tables except that each entry in the hash-table refers to several
pages rather than a single page.
• Advantages:
1) Favourable for 64-bit address spaces.
2) Useful for address spaces, where memory-references are non-contiguous and scattered
throughout the address space.

Figure: Hashed page-table



Inverted Page Tables

• Has one entry for each real page of memory.


• Each entry consists of
→ virtual-address of the page stored in that real memory-location and
→ information about the process that owns the page.

Figure Inverted page-table

Each virtual-address consists of a triplet


<process-id, page-number, offset>.
• Each inverted page-table entry is a pair <process-id, page-number>
• The algorithm works as follows:
1) When a memory-reference occurs, part of the virtual-address, consisting of <process-id,
page-number>, is presented to the memory subsystem.
2) The inverted page-table is then searched for a match.
3) If a match is found at entry i, then the physical-address <i, offset> is generated.
4) If no match is found, then an illegal address access has been attempted.
• Advantage:
1) Decreases memory needed to store each page-table
• Disadvantages:
1) Increases amount of time needed to search table when a page reference occurs.
2) Difficulty implementing shared-memory.
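
A minimal C sketch of this linear search; the number of frames and the structure name are hypothetical.

```c
#include <stdint.h>

#define NUM_FRAMES 4096                /* assumed: one entry per physical frame */

/* Inverted page table: entry i describes the page currently held in frame i. */
struct ipt_entry { int pid; uint32_t page; };
static struct ipt_entry ipt[NUM_FRAMES];

/* Search for <pid, page>; return the frame number i on a match,
   or -1 if no match (an illegal address access has been attempted). */
int ipt_search(int pid, uint32_t page)
{
    for (int i = 0; i < NUM_FRAMES; i++)
        if (ipt[i].pid == pid && ipt[i].page == page)
            return i;
    return -1;
}
```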

Segmentation

Segmentation is a memory management technique which supports user's view of memory.


Types of Segmentation

1. Virtual memory segmentation
Each process is divided into several segments, and it is not essential that all of them be resident at any
one point in time.
2. Simple segmentation
Each process is divided into many segments, and all segments are loaded into memory at run time,
but not necessarily contiguously.

Basic Method

A logical-address space is a collection of segments.


• Each segment has a name and a length.
• The addresses specify both
→ segment-name and
→ offset within the segment.
• Normally, the user-program is compiled, and the compiler automatically constructs segments
reflecting the input program.
For ex:
• The code
• The heap, from which memory is allocated .
• The standard C library
• Global variables
• The stacks used by each thread

Fig: Programmer’s view of a program

Hardware Support

• The segment-table maps two-dimensional user-defined addresses into one-dimensional physical-addresses.


• In the segment-table, each entry has following 2 fields:

1) Segment-base contains starting physical-address where the segment resides in memory.


2) Segment-limit specifies the length of the segment (Figure 3.27).
• A logical-address consists of 2 parts:
1) Segment-number (s) is used as an index into the segment-table.
2) Offset (d) must be between 0 and the segment-limit.
• If the offset is not between 0 and the segment-limit, then we trap to the OS (logical-addressing attempt beyond
the end of the segment).
• If the offset is legal, then it is added to the segment-base to produce the physical-memory address.

Figure 3.27 Segmentation hardware
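
A minimal C sketch of the check and translation performed by the segmentation hardware; the segment-table contents are example numbers.

```c
#include <stdint.h>
#include <stdlib.h>

/* Segment-table entry: base (starting physical address) and limit (segment length). */
struct segment { uint32_t base, limit; };
static const struct segment segment_table[] = {
    { 1400, 1000 },   /* segment 0 (example values) */
    { 6300,  400 },   /* segment 1 */
    { 4300,  400 },   /* segment 2 */
};

/* A logical address is <segment number s, offset d>: trap to the OS if the offset
   is not within the segment limit, otherwise add it to the segment base. */
uint32_t seg_translate(uint32_t s, uint32_t d)
{
    if (d >= segment_table[s].limit)
        abort();                       /* trap: addressing beyond end of segment */
    return segment_table[s].base + d;
}
```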

Advantages of Segmentation
• It allows the program to be divided into modules, which provides a better view of the program's structure.
• Segment table consumes less space as compared to Page Table in paging.
• It solves the problem of internal fragmentation.

Disadvantage of Segmentation
• There is an overhead of maintaining a segment table for each process.
• The time taken to fetch the instruction increases since now two memory accesses are required.
• Segments of unequal size are not suited for swapping.
• It suffers from external fragmentation as the free space gets broken down into smaller pieces with the
processes being loaded and removed from the main memory.


PAGING vs SEGMENTATION

• Basic: A page is of fixed block size. A segment is of variable size.
• Fragmentation: Paging may lead to internal fragmentation. Segmentation may lead to external
  fragmentation.
• Address: In paging, the user-specified address is divided by the CPU into a page number and an
  offset. In segmentation, the user specifies each address by two quantities: a segment number and an
  offset (within the segment limit).
• Size: The hardware decides the page size. The segment size is specified by the user.
• Table: Paging involves a page table that contains the base address (frame number) of each page.
  Segmentation involves a segment table that contains the segment base address and the segment limit
  (segment length).