OS Unit-3 (BCA)

Memory management is required in operating systems to efficiently allocate memory to processes and minimize fragmentation. There are two types of addresses: logical addresses generated by the CPU and physical addresses seen by the memory unit. Memory can be allocated contiguously using fixed or variable partitions, with placement strategies such as first fit, best fit, or worst fit. Fragmentation, both internal and external, occurs when unused memory blocks are too small to allocate to processes. Paging and swapping are used to virtualize memory addresses and utilize physical memory more efficiently.


The term memory can be defined as a collection of data in a specific format. It is used to store instructions and processed data. Memory comprises a large array of words or bytes, each with its own address. The primary purpose of a computer system is to execute programs, and these programs, along with the data they access, must be in main memory during execution. The CPU fetches instructions from memory according to the value of the program counter.
To achieve a degree of multiprogramming and proper utilization of memory, memory
management is important. Many memory management methods exist, reflecting various
approaches, and the effectiveness of each algorithm depends on the situation. 
Here, we will cover the following memory management topics:
 What is Main Memory
 What is Memory Management
 Why memory Management is required
 Logical address space and Physical address space
 Static and dynamic loading
 Static and dynamic linking
 Swapping
 Contiguous Memory allocation
 Memory Allocation
 First Fit
 Best Fit
 Worst Fit
 Fragmentation
 Internal Fragmentation
 External Fragmentation
 Paging

Before we start with memory management, let us first look at what main memory is.

What is Main Memory:

The main memory is central to the operation of a modern computer. Main memory is a large array of words or bytes, ranging in size from hundreds of thousands to billions. It is a repository of rapidly available information shared by the CPU and I/O devices. Main memory is the place where programs and data are kept while the processor is actively using them. Because main memory is directly associated with the processor, moving instructions and data into and out of the processor is extremely fast. Main memory is also known as RAM (Random Access Memory). This memory is volatile: RAM loses its data when power is interrupted.
Figure 1: Memory hierarchy

What is Memory Management :

In a multiprogramming computer, the operating system resides in a part of memory and the
rest is used by multiple processes. The task of subdividing the memory among different
processes is called memory management. Memory management is a method in the operating
system to manage operations between main memory and disk during process execution. The
main aim of memory management is to achieve efficient utilization of memory.  

Why Memory Management is required:

 To allocate and de-allocate memory before and after process execution.
 To keep track of the memory space used by processes.
 To minimize fragmentation issues.
 To ensure proper utilization of main memory.
 To maintain data integrity while a process is executing.
Now let us discuss the concepts of logical address space and physical address space:

Logical and Physical Address Space:

Logical Address space: An address generated by the CPU is known as a "Logical Address". It is also known as a virtual address. The logical address space can be defined as the size of the process. A logical address can be changed.
Physical Address space: An address seen by the memory unit (i.e., the one loaded into the memory address register of the memory) is commonly known as a "Physical Address". A physical address is also known as a real address. The set of all physical addresses corresponding to the logical addresses is known as the physical address space. A physical address is computed by the MMU: the run-time mapping from virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU). The physical address always remains constant.

Static and Dynamic Loading:

Loading a process into main memory is done by a loader. There are two different types of loading:
 Static loading: The entire program is loaded into memory at a fixed address before execution begins. This requires more memory space, because the entire program and all data of a process must be in physical memory for the process to execute; the size of a process is therefore limited to the size of physical memory.
 Dynamic loading: To gain better memory utilization, dynamic loading is used. In dynamic loading, a routine is not loaded until it is called. All routines reside on disk in a relocatable load format. One advantage of dynamic loading is that a routine that is never used is never loaded. This is useful when large amounts of code are needed to handle infrequently occurring cases.

 Static and Dynamic linking:

To perform the linking task, a linker is used. A linker is a program that takes one or more object files generated by a compiler and combines them into a single executable file.
 Static linking: In static linking, the linker combines all necessary program modules into a single executable program, so there is no runtime dependency. Some operating systems support only static linking, in which system language libraries are treated like any other object module.
 Dynamic linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic linking, a "stub" is included for each appropriate library-routine reference. A stub is a small piece of code. When the stub is executed, it checks whether the needed routine is already in memory. If it is not, the program loads the routine into memory.
 

Swapping :

When a process is executed, it must reside in memory. Swapping is the process of temporarily moving a process from main memory to secondary storage (which is slow compared to main memory) and bringing it back later for continued execution. Swapping allows more processes to be run than can fit into memory at one time. The main cost of swapping is transfer time, and the total transfer time is directly proportional to the amount of memory swapped. Swapping is also known as roll-out, roll-in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process and then load and execute the higher-priority process. After the higher-priority work finishes, the lower-priority process is swapped back into memory and continues execution.

 Contiguous Memory Allocation :


The main memory must accommodate both the operating system and the various user processes. Therefore, the allocation of memory becomes an important task in the operating system. The memory is usually divided into two partitions: one for the resident operating system and one for the user processes. We normally need several user processes to reside in memory simultaneously, so we must consider how to allocate available memory to the processes waiting in the input queue to be brought into memory. In contiguous memory allocation, each process is contained in a single contiguous segment of memory.
Memory allocation:
To gain proper memory utilization, memory must be allocated in an efficient manner. One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions, where each partition contains exactly one process. Thus, the degree of multiprogramming is bounded by the number of partitions.
Fixed partition allocation: In this method, a process is selected from the input queue and loaded into a free partition. When the process terminates, the partition becomes available for other processes.
Variable (multiple) partition allocation: In this method, the operating system maintains a table that indicates which parts of memory are available and which are occupied by processes. Initially, all memory is available for user processes and is considered one large block of available memory, known as a "hole". When a process arrives and needs memory, we search for a hole that is large enough to store the process. If one is found, we allocate only as much memory as is needed, keeping the rest available to satisfy future requests. Allocating memory this way raises the dynamic storage-allocation problem: how to satisfy a request of size n from a list of free holes. There are several solutions to this problem:
First fit:
In first fit, the first free hole that is large enough to satisfy the process's request is allocated.

Here, in this diagram, the 40 KB memory block is the first available free hole that can store process A (size 25 KB), because the first two blocks do not have sufficient memory space.
Best fit:
In best fit, we allocate the smallest hole that is big enough for the process's requirements. For this, we must search the entire list, unless the list is ordered by size.
Here, in this example, we first traverse the complete list and find that the last hole, 25 KB, is the best suitable hole for process A (size 25 KB).
With this method, memory utilization is maximized compared to the other allocation techniques.
Worst fit:
In worst fit, we allocate the largest available hole to the process. This method produces the largest leftover hole.

Here, in this example, process A (size 25 KB) is allocated to the largest available memory block, which is 60 KB. Inefficient memory utilization is the major issue with worst fit.
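The three placement strategies above can be sketched as a short simulation. This is an illustrative sketch, not OS code; the hole sizes below are hypothetical, chosen so that each strategy picks a different hole for a 25 KB request.

```python
def first_fit(holes, size):
    # Return the index of the first hole large enough, or None.
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    # Smallest hole that still fits the request.
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    # Largest available hole, leaving the biggest leftover.
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [10, 15, 40, 60, 25]     # free hole sizes in KB (illustrative)
print(first_fit(holes, 25))      # -> 2 (the 40 KB hole, first that fits)
print(best_fit(holes, 25))       # -> 4 (the 25 KB hole, an exact fit)
print(worst_fit(holes, 25))      # -> 3 (the 60 KB hole, the largest)
```

Note how best fit leaves no leftover here, while worst fit deliberately leaves the largest one.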

Fragmentation:

Fragmentation occurs when processes are loaded into and removed from memory, leaving behind small free holes. These holes cannot be assigned to new processes because they are not combined or do not fulfill a process's memory requirement. To achieve a good degree of multiprogramming, we must reduce this waste of memory. In an operating system there are two types of fragmentation:
Internal fragmentation:
Internal fragmentation occurs when a memory block allocated to a process is larger than the requested size. The unused space left over inside the block creates the internal fragmentation problem.
Example: Suppose fixed partitioning is used for memory allocation, with blocks of 3 MB, 6 MB, and 7 MB in memory. Now a new process p4 of size 2 MB arrives and demands a block of memory. It gets the 3 MB memory block, but 1 MB of that block is wasted, and it cannot be allocated to any other process either. This is called internal fragmentation.
External fragmentation:
In external fragmentation, we have free memory blocks, but we cannot assign them to a process because the blocks are not contiguous.
Example: Suppose (continuing the above example) three processes p1, p2, and p3 arrive with sizes 2 MB, 4 MB, and 7 MB respectively, and are allocated the memory blocks of size 3 MB, 6 MB, and 7 MB. After allocation, p1 and p2 leave 1 MB and 2 MB unused. Suppose a new process p4 arrives and demands a 3 MB block of memory. That much memory is available in total, but we cannot assign it because the free space is not contiguous. This is called external fragmentation.
Both the first-fit and best-fit strategies for memory allocation are affected by external fragmentation. To overcome the external fragmentation problem, compaction is used: all free memory space is combined into one large block, so that this space can be used by other processes effectively.
Another possible solution to external fragmentation is to allow the logical address space of a process to be noncontiguous, thus permitting a process to be allocated physical memory wherever it is available.
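The two fragmentation examples above can be checked with a few lines of arithmetic. This is only an illustration of the definitions; the block and process sizes are the ones from the examples in the text.

```python
# Internal fragmentation: fixed blocks of 3, 6, 7 MB; process p4 needs 2 MB.
blocks = [3, 6, 7]                               # fixed partition sizes in MB
p4 = 2                                           # process size in MB
allocated = min(b for b in blocks if b >= p4)    # p4 gets the 3 MB block
print(allocated - p4)                            # -> 1 (MB wasted inside the block)

# External fragmentation: after p1 (2 MB) and p2 (4 MB) are placed,
# 1 MB and 2 MB are left over in separate, non-adjacent blocks.
holes = [1, 2]
request = 3
print(sum(holes) >= request)                 # -> True: enough memory in total...
print(any(h >= request for h in holes))      # -> False: ...but no single hole fits
```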

 Paging:

Paging is a memory management scheme that eliminates the need for contiguous allocation
of physical memory. This scheme permits the physical address space of a process to be non-
contiguous.
 Logical Address or Virtual Address (represented in bits): An address generated by the
CPU
 Logical Address Space or Virtual Address Space (represented in words or bytes): The set
of all logical addresses generated by a program
 Physical Address (represented in bits): An address actually available on a memory unit
 Physical Address Space (represented in words or bytes): The set of all physical addresses
corresponding to the logical addresses
Example:
 If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
 If Logical Address Space = 128 M words = 2^7 * 2^20 words, then Logical Address = log2(2^27) = 27 bits
 If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)
 If Physical Address Space = 16 M words = 2^4 * 2^20 words, then Physical Address = log2(2^24) = 24 bits
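These bit-size calculations can be verified directly. The snippet below simply re-derives the four examples above; nothing beyond powers of two and log2 is assumed.

```python
import math

# 31-bit logical address -> 2^31 addressable words = 2 G words
print(2 ** 31 // 2 ** 30, "G words")           # -> 2 G words

# 128 M words of logical address space -> 27-bit logical address
words = 128 * 2 ** 20                          # 128 M = 2^7 * 2^20
print(int(math.log2(words)), "bits")           # -> 27 bits

# 22-bit physical address -> 2^22 words = 4 M words
print(2 ** 22 // 2 ** 20, "M words")           # -> 4 M words

# 16 M words of physical address space -> 24-bit physical address
print(int(math.log2(16 * 2 ** 20)), "bits")    # -> 24 bits
```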
The mapping from virtual to physical address is done by the memory management unit
(MMU) which is a hardware device and this mapping is known as the paging technique.
 The Physical Address Space is conceptually divided into several fixed-size blocks,
called frames.
 The Logical Address Space is also split into fixed-size blocks, called pages.
 Page Size = Frame Size
Let us consider an example:
 Physical Address = 12 bits, then Physical Address Space = 4 K words
 Logical Address = 13 bits, then Logical Address Space = 8 K words
 Page size = frame size = 1 K words (assumption)

The address generated by the CPU is divided into:
 Page number (p): the number of bits required to represent the pages in the Logical Address Space; it selects an entry in the page table.
 Page offset (d): the number of bits required to represent a particular word within a page, i.e. log2 of the page size.
Physical Address is divided into:
 Frame number (f): the number of bits required to represent a frame of the Physical Address Space.
 Frame offset (d): the number of bits required to represent a particular word within a frame, i.e. log2 of the frame size.
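Splitting an address into page number and offset is a matter of shifting and masking. The sketch below uses the 1 K-word page size from the example above; the page-table mapping (page 5 to frame 2) is a hypothetical value chosen only for illustration.

```python
PAGE_SIZE = 1024          # 1 K words, as in the example above
OFFSET_BITS = 10          # log2(1024)

def split_logical(addr):
    # Split a logical address into (page number p, offset d).
    return addr >> OFFSET_BITS, addr & (PAGE_SIZE - 1)

def physical_address(frame, offset):
    # Combine frame number f and offset d into a physical address.
    return (frame << OFFSET_BITS) | offset

p, d = split_logical(0b1011000000101)      # 13-bit logical address 5637
print(p, d)                                # -> 5 517 (page 5, offset 517)

page_table = {5: 2}                        # hypothetical mapping: page 5 -> frame 2
print(physical_address(page_table[p], d))  # -> 2565
```

The offset passes through unchanged; only the page number is translated to a frame number.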
The hardware implementation of the page table can be done by using dedicated registers, but the usage of registers is satisfactory only if the page table is small. If the page table contains a large number of entries, then we can use a TLB (Translation Look-aside Buffer), a special, small, fast look-up hardware cache.
 The TLB is an associative, high-speed memory.
 Each entry in the TLB consists of two parts: a tag and a value.
 When this memory is used, an item is compared with all tags simultaneously. If the item is found, then the corresponding value is returned.
 
Main memory access time = m
If the page table is kept in main memory,
Effective access time = m (to access the page table) + m (to access the word in memory) = 2m

ALLOCATING KERNEL MEMORY

What is allocating kernel memory?

Allocating kernel memory refers to the way the operating system satisfies memory requests made by the kernel itself, which is handled differently from memory requested by user-mode processes.

When a process running in user mode requests additional memory, pages are allocated from the list of free page frames maintained by the kernel. Remember, too, that if a user process requests even a single byte of memory, internal fragmentation will result, as the process will be granted an entire page frame. Kernel memory, however, is often allocated from a free-memory pool different from the list used to satisfy ordinary user-mode processes. There are two primary reasons for this:

1. The kernel requests memory for data structures of varying sizes, some of which are less
than a page in size. As a result, the kernel must use memory conservatively and attempt to
minimize waste due to fragmentation. This is especially important because many operating
systems do not subject kernel code or data to the paging system.

2. Pages allocated to user-mode processes do not necessarily have to be in contiguous physical memory. However, certain hardware devices interact directly with physical memory, without the benefit of a virtual memory interface, and consequently may require memory residing in physically contiguous pages.
In the following sections, we examine two strategies for managing free memory that is assigned to kernel processes:


1. Buddy system

The buddy allocation system is an algorithm in which a larger memory block is divided into smaller parts to satisfy a request, so that each request is served by the smallest block size that fits. The two smaller parts of a block are of equal size and are called buddies. In the same manner, one of the two buddies is further divided into smaller parts until the request can be fulfilled. The benefit of this technique is that two buddies can later be combined to form a block of larger size when memory is released.

Example: If a request for 25 KB is made, then a block of size 32 KB is allocated.

Four Types of Buddy System –

Binary buddy system

Fibonacci buddy system

Weighted buddy system

Tertiary buddy system

Why buddy system?

If partition sizes and process sizes differ, a poor match may occur and space may be used inefficiently. The buddy system is easy to implement and more efficient than dynamic allocation.

Binary buddy system: The buddy system maintains a list of the free blocks of each size (called a free list), so that it is easy to find a block of the desired size, if one is available. If no block of the requested size is available, the allocator searches the first non-empty list of blocks of at least the requested size. In either case, a block is removed from the free list.
Example: Assume the size of a memory segment is initially 256 KB and the kernel requests 25 KB of memory. The segment is first divided into two buddies, call them A1 and A2, each 128 KB in size. One of these buddies is further divided into two 64 KB buddies, say B1 and B2. The smallest power of two that is at least 25 KB is 32 KB, so either B1 or B2 is further divided into two 32 KB buddies (C1 and C2), and finally one of these is used to satisfy the 25 KB request. A split block can only be merged with its unique buddy block, which then re-forms the larger block they were split from.
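The splitting in the example above can be sketched as follows. This is a simplified model that only tracks block sizes, not free lists or merging; the 256 KB segment and the 25 KB request are the ones from the example.

```python
def next_power_of_two(n):
    # Smallest power of two >= n.
    p = 1
    while p < n:
        p *= 2
    return p

def buddy_allocate(segment_kb, request_kb):
    # Split the segment in halves until the block size reaches the
    # smallest power of two that still satisfies the request.
    need = next_power_of_two(request_kb)
    block, splits = segment_kb, []
    while block // 2 >= need:
        block //= 2
        splits.append(block)
    return block, splits

block, splits = buddy_allocate(256, 25)
print(block)    # -> 32 (KB block actually allocated for the 25 KB request)
print(splits)   # -> [128, 64, 32]: the A, B and C splits from the example
```

The difference between the allocated 32 KB block and the 25 KB request (7 KB) is exactly the internal fragmentation discussed below.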

Fibonacci buddy system: In this system, blocks are divided into sizes that are Fibonacci numbers, satisfying the relation Z(i) = Z(i-1) + Z(i-2): 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, ... The address calculation for the binary and weighted buddy systems is straightforward, but the original procedure for the Fibonacci buddy system was either limited to a small, fixed number of block sizes or required a time-consuming computation.
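A minimal sketch of how the Fibonacci block sizes are generated, starting from 1, 1 rather than 0, 1, since zero-sized blocks are not useful:

```python
def fibonacci_sizes(limit):
    # Block sizes satisfying Z(i) = Z(i-1) + Z(i-2), up to the given limit.
    sizes = [1, 1]
    while sizes[-1] + sizes[-2] <= limit:
        sizes.append(sizes[-1] + sizes[-2])
    return sizes

print(fibonacci_sizes(610))
# -> [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610]
```

A request would be rounded up to the next size in this list, just as a binary buddy request is rounded up to the next power of two.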

Advantages:
 In comparison to other, simpler techniques such as dynamic allocation, the buddy memory system has little external fragmentation.
 The buddy memory allocation system can be implemented with a binary tree that represents used and unused split memory blocks.
 The buddy system is very fast at allocating and deallocating memory.
 In buddy systems, the cost to allocate and free a block of memory is low compared to that of best-fit or first-fit algorithms.
 Another advantage is coalescing.
 Address calculation is easy.

What is coalescing?
Coalescing refers to how quickly adjacent buddies can be combined to form larger segments. For example, when the kernel releases the C1 unit it was allocated, the system can coalesce C1 and C2 into a 64 KB segment. This segment B1 can in turn be coalesced with its buddy B2 to form a 128 KB segment. Ultimately we can end up with the original 256 KB segment.
Drawback: The main drawback of the buddy system is internal fragmentation, since a larger block of memory than required may be acquired. For example, if a 36 KB request is made, it can only be satisfied by a 64 KB block, and the remaining memory is wasted.

2. Slab Allocation
A second strategy for allocating kernel memory is known as slab allocation. It eliminates fragmentation caused by allocations and de-allocations. This method retains allocated memory containing a data object of a certain type for reuse upon subsequent allocations of objects of the same type. In slab allocation, memory chunks suitable to fit data objects of a certain type or size are pre-allocated. The cache does not free the space immediately after use; it keeps track of frequently required data so that whenever a request is made, it can be satisfied very fast. Two terms are required:

Slab: A slab is made up of one or more physically contiguous pages. The slab is the actual container of data associated with objects of the specific kind of the containing cache.
Cache: A cache represents a small amount of very fast memory. A cache consists of one or more slabs. There is a single cache for each unique kernel data structure, for example:
 a separate cache for the data structure representing process descriptors,
 a separate cache for file objects,
 a separate cache for semaphores, and so on.
Each cache is populated with objects that are instantiations of the kernel data structure the cache represents. For example, the cache representing semaphores stores instances of semaphore objects, and the cache representing process descriptors stores instances of process descriptor objects.
Implementation: The slab allocation algorithm uses caches to store kernel objects. When a cache is created, a number of objects, initially marked as free, are allocated to the cache. The number of objects in the cache depends on the size of the associated slab. Example: a 12 KB slab (made up of three contiguous 4 KB pages) could store six 2 KB objects. Initially, all objects in the cache are marked as free. When a new object for a kernel data structure is needed, the allocator can assign any free object from the cache to satisfy the request. The object assigned from the cache is then marked as used. In Linux, a slab may be in one of three possible states:

1. Full: all objects in the slab are marked as used.
2. Empty: all objects in the slab are marked as free.
3. Partial: the slab contains both used and free objects.

The slab allocator first attempts to satisfy a request with a free object in a partial slab. If none exists, a free object is assigned from an empty slab. If no empty slabs are available, a new slab is allocated from contiguous physical pages and assigned to a cache.
Benefits of the slab allocator:

 No memory is wasted due to fragmentation, because each unique kernel data structure has an associated cache.
 Memory requests can be satisfied quickly.
 The slab allocation scheme is particularly effective for objects that are frequently allocated and de-allocated. Allocating and releasing memory can be a time-consuming process, but here objects are created in advance and can be quickly assigned from the cache. When the kernel has finished with an object and releases it, the object is marked as free and returned to its cache, making it immediately available for subsequent requests from the kernel.
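The full/empty/partial behavior can be sketched with a toy model. This is not the Linux slab allocator; it is a simplified illustration in which every object has the same size and a slab only counts how many of its objects are in use. The "prefer partial, then empty, else grow" policy follows the description above.

```python
class Slab:
    # A slab pre-allocates a fixed number of same-sized objects.
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0

    @property
    def state(self):
        if self.used == 0:
            return "empty"
        if self.used == self.capacity:
            return "full"
        return "partial"

class Cache:
    # One cache per kernel data structure; it owns one or more slabs.
    def __init__(self, objects_per_slab):
        self.objects_per_slab = objects_per_slab
        self.slabs = [Slab(objects_per_slab)]

    def allocate(self):
        # Take a free object from a partial slab first, then an empty
        # one; if every slab is full, allocate a new slab.
        for want in ("partial", "empty"):
            for slab in self.slabs:
                if slab.state == want:
                    slab.used += 1
                    return slab
        slab = Slab(self.objects_per_slab)
        slab.used = 1
        self.slabs.append(slab)
        return slab

# A 12 KB slab of 2 KB objects holds six objects, as in the example above.
cache = Cache(objects_per_slab=6)
first = cache.allocate()
print(first.state)          # -> partial (1 of 6 objects used)
for _ in range(5):
    cache.allocate()
print(first.state)          # -> full
cache.allocate()            # a seventh request forces a new slab
print(len(cache.slabs))     # -> 2
```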

Allocation of Frames

Virtual memory, an important aspect of operating systems, is implemented using demand paging. Demand paging necessitates the development of a page-replacement algorithm and a frame-allocation algorithm. Frame allocation algorithms are used when there are multiple processes; they help decide how many frames to allocate to each process.
There are various constraints on the strategies for the allocation of frames:
 You cannot allocate more than the total number of available frames.
 At least a minimum number of frames should be allocated to each process. This constraint is supported by two reasons. First, as fewer frames are allocated, the page-fault rate increases, decreasing the performance of the process's execution. Second, there should be enough frames to hold all the different pages that any single instruction can reference.
Frame allocation algorithms:
The two algorithms commonly used to allocate frames to a process are:
1. Equal allocation: In a system with x frames and y processes, each process gets an equal number of frames, i.e. x/y. For instance, if the system has 48 frames and 9 processes, each process will get 5 frames. The three frames which are not allocated to any process can be used as a free-frame buffer pool.
 Disadvantage: In systems with processes of varying sizes, it does not make much sense to give each process an equal number of frames. Allocating a large number of frames to a small process eventually leads to a large number of allocated but unused frames being wasted.
2. Proportional allocation: Frames are allocated to each process according to the process size. For a process pi of size si, the number of allocated frames is ai = (si/S)*m, where S is the sum of the sizes of all the processes and m is the number of frames in the system. For instance, in a system with 62 frames, if there is a process of 10 KB and another process of 127 KB, then the first process will be allocated (10/137)*62 = 4 frames and the other process will get (127/137)*62 = 57 frames.
 Advantage: All the processes share the available frames according to their needs, rather than equally.
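Both allocation formulas are simple integer arithmetic. The sketch below re-derives the two examples above (48 frames among 9 processes, and 62 frames split between processes of 10 KB and 127 KB); fractional frames are truncated, as in the text.

```python
def equal_allocation(frames, processes):
    # Each of the y processes gets x // y frames; the rest form a buffer pool.
    share = frames // processes
    return share, frames - share * processes

def proportional_allocation(frames, sizes):
    # a_i = (s_i / S) * m, truncated to an integer number of frames.
    total = sum(sizes)
    return [size * frames // total for size in sizes]

print(equal_allocation(48, 9))                 # -> (5, 3): 5 frames each, 3 spare
print(proportional_allocation(62, [10, 127]))  # -> [4, 57]
```

Note that truncation can leave a frame or two unassigned (62 - 4 - 57 = 1 here); a real system would hand the remainder to a free-frame pool.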

Global vs Local Allocation:

The number of frames allocated to a process can also change dynamically depending on whether global replacement or local replacement is used for replacing pages in case of a page fault.
1. Local replacement: When a process needs a page that is not in memory, it can bring in the new page and allocate it a frame from its own set of allocated frames only.
 Advantage: The set of pages in memory for a particular process, and hence its page-fault rate, is affected by the paging behavior of only that process.
 Disadvantage: A low-priority process may hinder a high-priority process by not making its frames available to the high-priority process.
2. Global replacement: When a process needs a page that is not in memory, it can bring in the new page and allocate it a frame from the set of all frames, even if that frame is currently allocated to some other process; that is, one process can take a frame from another.
 Advantage: It does not hinder the performance of processes and hence results in greater system throughput.
 Disadvantage: The page-fault rate of a process cannot be controlled solely by the process itself; the set of pages in memory for a process depends on the paging behavior of other processes as well.
