Os Unit-3 (Bca)
Memory is used to store instructions and processed data. It comprises a large array or group of words or bytes, each with its own address. The primary motive of a computer system is to execute programs, and these programs, along with the data they access, must be in main memory during execution. The CPU fetches instructions from memory according to the value of the program counter.
To achieve a degree of multiprogramming and proper utilization of memory, memory
management is important. Many memory management methods exist, reflecting various
approaches, and the effectiveness of each algorithm depends on the situation.
Here, we will cover the following memory management topics:
What is Main Memory
What is Memory Management
Why Memory Management is Required
Logical Address Space and Physical Address Space
Static and Dynamic Loading
Static and Dynamic Linking
Swapping
Contiguous Memory Allocation
Memory Allocation
First Fit
Best Fit
Worst Fit
Fragmentation
Internal Fragmentation
External Fragmentation
Paging
Before we start with memory management, let us first look at what main memory is.
The main memory is central to the operation of a modern computer. Main Memory is a large
array of words or bytes, ranging in size from hundreds of thousands to billions. Main
memory is a repository of rapidly available information shared by the CPU and I/O devices.
Main memory is the place where programs and information are kept when the processor is
effectively utilizing them. Main memory is associated with the processor, so moving
instructions and information into and out of the processor is extremely fast. Main memory is
also known as RAM (Random Access Memory). RAM is volatile memory: it loses its data when a power interruption occurs.
Figure 1: Memory hierarchy
In a multiprogramming computer, the operating system resides in a part of memory and the
rest is used by multiple processes. The task of subdividing the memory among different
processes is called memory management. Memory management is a method in the operating
system to manage operations between main memory and disk during process execution. The
main aim of memory management is to achieve efficient utilization of memory.
Logical Address space: An address generated by the CPU is known as a "Logical Address". It is also known as a virtual address. The logical address space can be defined as the size of the process. A logical address can be changed.
Physical Address space: An address seen by the memory unit (i.e., the one loaded into the memory address register of the memory) is commonly known as a "Physical Address". A physical address is also known as a real address. The set of all physical addresses corresponding to the logical addresses is known as the physical address space. The run-time mapping from virtual to physical addresses is done by a hardware device, the Memory Management Unit (MMU), which computes each physical address. A physical address always remains constant.
Loading a process into main memory is done by a loader. There are two different types of loading:
Static loading:- The entire program is loaded into memory at a fixed address before execution begins. It requires more memory space.
Dynamic loading:- Without dynamic loading, the entire program and all data of a process must be in physical memory for the process to execute, so the size of a process is limited to the size of physical memory. To gain better memory utilization, dynamic loading is used: a routine is not loaded until it is called. All routines reside on disk in a relocatable load format. One advantage of dynamic loading is that a routine that is never used is never loaded. This is useful when large amounts of code are needed to handle infrequently occurring cases.
To perform a linking task a linker is used. A linker is a program that takes one or more object
files generated by a compiler and combines them into a single executable file.
Static linking: In static linking, the linker combines all necessary program modules into
a single executable program. So there is no runtime dependency. Some operating systems
support only static linking, in which system language libraries are treated like any other
object module.
Dynamic linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic linking, a "stub" is included for each appropriate library-routine reference. A stub is a small piece of code. When the stub is executed, it checks whether the needed routine is already in memory. If it is not, the program loads the routine into memory.
Swapping:
Swapping is the technique of temporarily moving a process out of main memory to a backing store (disk) and later bringing it back into memory for continued execution. It allows the total memory demand of all processes to exceed the physical memory available.
First fit:-
In the first fit, allocate the first hole that is big enough to satisfy the process's requirements.
Here, in this diagram, the 40 KB memory block is the first available free hole that can store process A (size 25 KB), because the first two blocks do not have sufficient memory space.
Best fit:-
In the best fit, allocate the smallest hole that is big enough for the process's requirements. For this, we must search the entire list, unless the list is ordered by size.
Here in this example, we first traverse the complete list and find that the last hole, 25 KB, is the best suitable hole for process A (size 25 KB).
In this method, memory utilization is maximum compared to other memory allocation techniques.
Worst fit:- In the worst fit, allocate the largest available hole to the process. This method produces the largest leftover hole.
Here in this example, process A (size 25 KB) is allocated to the largest available memory block, which is 60 KB. Inefficient memory utilization is a major issue in the worst fit.
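The three placement strategies can be sketched together; the hole sizes below are chosen to match the examples in the text (40 KB for first fit, 25 KB for best fit, 60 KB for worst fit), with the first two sizes assumed:

```python
def allocate(holes, size, strategy):
    """Return the index of the chosen free hole, or None if no hole fits.

    holes    -- list of free-hole sizes in KB
    size     -- requested size in KB
    strategy -- 'first', 'best', or 'worst'
    """
    candidates = [i for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None
    if strategy == 'first':    # first hole that is big enough
        return candidates[0]
    if strategy == 'best':     # smallest hole that is big enough
        return min(candidates, key=lambda i: holes[i])
    if strategy == 'worst':    # largest available hole
        return max(candidates, key=lambda i: holes[i])
    raise ValueError(strategy)

# Process A needs 25 KB.
holes = [10, 20, 40, 60, 25]
print(allocate(holes, 25, 'first'))   # 2 -> the 40 KB hole
print(allocate(holes, 25, 'best'))    # 4 -> the 25 KB hole
print(allocate(holes, 25, 'worst'))   # 3 -> the 60 KB hole
```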
Fragmentation:
Fragmentation arises when processes are loaded into and removed from memory after execution, leaving behind small free holes. These holes cannot be assigned to new processes because they are not combined or do not fulfill the memory requirement of a process. To achieve a degree of multiprogramming, we must reduce this waste of memory, known as the fragmentation problem. Operating systems experience two types of fragmentation:
Internal fragmentation:
Internal fragmentation occurs when a memory block allocated to a process is larger than its requested size. The leftover unused space inside the block creates the internal fragmentation problem.
Example: Suppose fixed partitioning is used for memory allocation, with blocks of 3 MB, 6 MB, and 7 MB in memory. Now a new process p4 of size 2 MB arrives and demands a block of memory. It gets a memory block of 3 MB, but 1 MB of that block is wasted and cannot be allocated to any other process. This is called internal fragmentation.
External fragmentation:
In external fragmentation, we have free memory blocks, but we cannot assign them to a process because the blocks are not contiguous.
Example: Continuing the above example, three processes p1, p2, and p3 arrive with sizes 2 MB, 4 MB, and 7 MB respectively, and are allocated blocks of size 3 MB, 6 MB, and 7 MB respectively. After allocation, the blocks of p1 and p2 leave 1 MB and 2 MB free. Suppose a new process p4 arrives and demands a 3 MB block of memory, which is available in total, but we cannot assign it because the free memory space is not contiguous. This is called external fragmentation.
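The example can be checked with a few lines: in total there is enough free memory for p4, yet no single contiguous hole fits it, which is exactly external fragmentation:

```python
# Free holes left after allocating p1 and p2 (from the example), in MB.
free_holes = [1, 2]
request = 3                      # p4 asks for a 3 MB block

total_free = sum(free_holes)     # 3 MB free in total
largest_hole = max(free_holes)   # but the largest single hole is 2 MB

print(total_free >= request)     # True  -> enough memory overall
print(largest_hole >= request)   # False -> no contiguous hole fits
```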
Both the first-fit and best-fit strategies for memory allocation are affected by external fragmentation. To overcome the external fragmentation problem, compaction is used: all free memory space is combined into one large block, so this space can be used by other processes effectively.
Another possible solution to external fragmentation is to allow the logical address space of a process to be noncontiguous, thus permitting the process to be allocated physical memory wherever it is available.
Paging:
Paging is a memory management scheme that eliminates the need for contiguous allocation
of physical memory. This scheme permits the physical address space of a process to be non-
contiguous.
Logical Address or Virtual Address (represented in bits): An address generated by the
CPU
Logical Address Space or Virtual Address Space (represented in words or bytes): The set
of all logical addresses generated by a program
Physical Address (represented in bits): An address actually available on a memory unit
Physical Address Space (represented in words or bytes): The set of all physical addresses
corresponding to the logical addresses
Example:
If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
If Logical Address Space = 128 M words = 2^7 * 2^20 words = 2^27 words, then Logical Address = log2(2^27) = 27 bits
If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)
If Physical Address Space = 16 M words = 2^4 * 2^20 words = 2^24 words, then Physical Address = log2(2^24) = 24 bits
The mapping from virtual to physical address is done by the memory management unit
(MMU) which is a hardware device and this mapping is known as the paging technique.
The Physical Address Space is conceptually divided into several fixed-size blocks,
called frames.
The Logical Address Space is also split into fixed-size blocks, called pages.
Page Size = Frame Size
Let us consider an example:
Physical Address = 12 bits, then Physical Address Space = 4 K words
Logical Address = 13 bits, then Logical Address Space = 8 K words
Page size = frame size = 1 K words (assumption)
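The bit-width arithmetic above can be reproduced directly (a minimal sketch; the helper names are illustrative):

```python
import math

def space_in_words(address_bits):
    """An n-bit address can name 2^n words."""
    return 2 ** address_bits

def bits_for_space(space_words):
    """Address width needed to cover a given address space."""
    return int(math.log2(space_words))

print(space_in_words(31) == 2 * 2**30)   # True: 31 bits -> 2 G words
print(bits_for_space(128 * 2**20))       # 27: 128 M words -> 27 bits
print(space_in_words(22) == 4 * 2**20)   # True: 22 bits -> 4 M words
print(bits_for_space(16 * 2**20))        # 24: 16 M words -> 24 bits
```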
The address generated by the CPU is divided into:
Page number (p): the number of bits required to represent the pages in the Logical Address Space, i.e., the page number.
Page offset (d): the number of bits required to represent a particular word in a page, i.e., the page size of the Logical Address Space, or the word number within a page.
The physical address is divided into:
Frame number (f): the number of bits required to represent a frame of the Physical Address Space.
Frame offset (d): the number of bits required to represent a particular word in a frame, i.e., the frame size of the Physical Address Space, or the word number within a frame.
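With the example sizes above (13-bit logical address and 1 K-word pages, so a 10-bit offset), the page-number/offset split and the frame-number/offset recombination can be sketched as bit operations (the sample address 3000 and frame 5 are illustrative):

```python
PAGE_SIZE = 1024            # 1 K words; frame size is the same
OFFSET_BITS = 10            # log2(1024)

def split_logical(addr):
    """Split a logical address into (page number p, page offset d)."""
    return addr >> OFFSET_BITS, addr & (PAGE_SIZE - 1)

def physical_address(frame, offset):
    """Combine frame number f and offset d into a physical address."""
    return (frame << OFFSET_BITS) | offset

p, d = split_logical(3000)      # 3000 = 2 * 1024 + 952
print(p, d)                     # 2 952
print(physical_address(5, d))   # word 952 of frame 5 -> 6072
```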
The hardware implementation of the page table can be done using dedicated registers, but using registers for the page table is satisfactory only if the page table is small. If the page table contains a large number of entries, we can use a TLB (Translation Look-aside Buffer), a special, small, fast-lookup hardware cache.
The TLB is associative, high-speed memory.
Each entry in the TLB consists of two parts: a tag and a value.
When this memory is used, an item is compared with all tags simultaneously. If the item is found, the corresponding value is returned.
Main memory access time = m
If the page table is kept in main memory,
Effective access time = m (to fetch the page-table entry) + m (to fetch the word itself) = 2m
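The access-time arithmetic can be made concrete. The first function is the 2m case stated above; the second uses the standard textbook formula for effective access time once a TLB is added (the 100 ns and 20 ns figures and the 80% hit ratio are illustrative assumptions):

```python
def eat_no_tlb(m):
    """Page table in main memory: one access for the page-table
    entry plus one for the word itself, i.e. 2m."""
    return m + m

def eat_with_tlb(m, tlb, hit_ratio):
    """Effective access time with a TLB: a hit costs tlb + m,
    a miss costs tlb + 2m (TLB lookup, page table, then the word)."""
    return hit_ratio * (tlb + m) + (1 - hit_ratio) * (tlb + 2 * m)

print(eat_no_tlb(100))              # 200 ns
print(eat_with_tlb(100, 20, 0.8))   # 140.0 ns
```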
ALLOCATING KERNEL MEMORY
When a process running in user mode requests additional memory, pages are allocated from the list of free page frames maintained by the kernel. Remember, too, that if a user process requests a single byte of memory, internal fragmentation will result, as the process will be granted an entire page frame. Kernel memory, however, is often allocated from a free-memory pool different from the list used to satisfy ordinary user-mode processes. There are two primary reasons for this:
1. The kernel requests memory for data structures of varying sizes, some of which are less than a page in size. As a result, the kernel must use memory conservatively and attempt to minimize waste due to fragmentation. This is especially important because many operating systems do not subject kernel code or data to the paging system.
2. Pages allocated to user processes do not need to be physically contiguous, but certain hardware devices interact directly with physical memory and may therefore require memory that resides in physically contiguous pages.
1. Buddy system
The buddy allocation system is an algorithm in which a larger memory block is divided into small parts to satisfy a request. This algorithm is used to give a best fit. The two smaller parts of a block are of equal size and are called buddies. In the same manner, one of the two buddies is further divided into smaller parts until the request is fulfilled. The benefit of this technique is that two buddies can later combine to form a larger block, according to memory demand.
Example – If a request of 25 KB is made, then a block of size 32 KB is allocated.
Binary buddy system – The buddy system maintains a list of the free blocks of each size (called a free list), so that it is easy to find a block of the desired size, if one is available. If no block of the requested size is available, the allocator searches the first nonempty list for blocks of at least the requested size. In either case, a block is removed from the free list.
Example – Assume the size of the memory segment is initially 256 KB and the kernel requests 25 KB of memory. The segment is first divided into two buddies, call them A1 and A2, each 128 KB in size. One of these buddies is further divided into two 64 KB buddies, say B1 and B2. The next power of 2 above 25 KB is 32 KB, so either B1 or B2 is further divided into two 32 KB buddies (C1 and C2), and finally one of these buddies is used to satisfy the 25 KB request. A split block can only be merged with its unique buddy block, which then re-forms the larger block they were split from.
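The splitting in this example can be simulated in a few lines: round the request up to the next power of two, then halve the segment until a block of that size appears:

```python
def buddy_block_size(request):
    """Smallest power-of-two block that satisfies the request (in KB)."""
    size = 1
    while size < request:
        size *= 2
    return size

def split_path(request, segment=256):
    """Sizes produced as the segment is halved down to the target block."""
    target = buddy_block_size(request)
    path = [segment]
    while segment > target:
        segment //= 2            # split into two equal buddies
        path.append(segment)
    return path

print(buddy_block_size(25))      # 32
print(split_path(25))            # [256, 128, 64, 32]  (A -> B -> C level)
```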
Fibonacci buddy system – This is a system in which blocks are divided into sizes that are Fibonacci numbers. They satisfy the relation Zi = Z(i-1) + Z(i-2): 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610. The address calculation for the binary and weighted buddy systems is straightforward, but the original procedure for the Fibonacci buddy system was either limited to a small, fixed number of block sizes or required a time-consuming computation.
What is coalescing?
Coalescing is how quickly adjacent buddies can be combined to form larger segments. For example, when the kernel releases the C1 unit it was allocated, the system can coalesce C1 and C2 into a 64 KB segment. This 64 KB segment (B1) can in turn be coalesced with its buddy B2 to form a 128 KB segment. Ultimately, we can end up with the original 256 KB segment.
Drawback – The main drawback of the buddy system is internal fragmentation, as a larger block of memory than required is acquired. For example, if a 36 KB request is made, it can only be satisfied by a 64 KB segment, and the remaining memory is wasted.
2. Slab Allocation
A second strategy for allocating kernel memory is known as slab allocation. It eliminates fragmentation caused by allocations and deallocations. In this method, memory chunks suitable to fit data objects of a certain type or size are pre-allocated, and allocated memory that contains a data object of a certain type is retained for reuse upon subsequent allocations of objects of the same type. The cache does not free the space immediately after use; it keeps track of frequently required data so that whenever a request is made, the data is available very fast. Two terms are required:
Slab – A slab is made up of one or more physically contiguous pages. The slab is the actual container of data associated with objects of the specific kind of the containing cache.
Cache – A cache represents a small amount of very fast memory. A cache consists of one or more slabs. There is a single cache for each unique kernel data structure.
Example – a separate cache for the data structure representing process descriptors, a separate cache for file objects, a separate cache for semaphores, and so on. Each cache is populated with objects that are instantiations of the kernel data structure the cache represents. For example, the cache representing semaphores stores instances of semaphore objects, and the cache representing process descriptors stores instances of process descriptor objects.
Implementation – The slab allocation algorithm uses caches to store kernel objects. When a cache is created, a number of objects, initially marked as free, are allocated to the cache. The number of objects in the cache depends on the size of the associated slab. Example – a 12 KB slab (made up of three contiguous 4 KB pages) could store six 2 KB objects. Initially, all objects in the cache are marked as free. When a new object for a kernel data structure is needed, the allocator can assign any free object from the cache to satisfy the request. The object assigned from the cache is marked as used. In Linux, a slab may be in one of three possible states:
1. Full – All objects in the slab are marked as used.
2. Empty – All objects in the slab are marked as free.
3. Partial – The slab consists of both used and free objects.
The slab allocator first attempts to satisfy a request with a free object in a partial slab. If none exists, a free object is assigned from an empty slab. If no empty slabs are available, a new slab is allocated from contiguous physical pages and assigned to a cache.
Benefits of the slab allocator –
No memory is wasted due to fragmentation, because each unique kernel data structure has an associated cache.
Memory requests can be satisfied quickly.
The slab allocation scheme is particularly effective for managing memory when objects are frequently allocated and deallocated. The act of allocating and releasing memory can be a time-consuming process. However, objects are created in advance and thus can be quickly allocated from the cache. When the kernel has finished with an object and releases it, it is marked as free and returned to its cache, making it immediately available for subsequent requests from the kernel.
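The cache behaviour described above can be sketched as a toy class (object counts follow the 12 KB slab example; a real allocator manages physically contiguous pages, which this sketch ignores):

```python
class SlabCache:
    """Toy slab cache for one kernel data structure."""

    OBJECTS_PER_SLAB = 6      # e.g. a 12 KB slab holding six 2 KB objects

    def __init__(self, kind):
        self.kind = kind
        self.free = []        # pre-allocated objects, initially all free
        self.used = set()
        self._grow()          # create the first slab

    def _grow(self):
        """Add a new slab of pre-initialized free objects."""
        start = len(self.free) + len(self.used)
        self.free.extend(range(start, start + self.OBJECTS_PER_SLAB))

    def alloc(self):
        """Hand out a free object; grow by one slab if none remain."""
        if not self.free:
            self._grow()
        obj = self.free.pop()
        self.used.add(obj)
        return obj

    def release(self, obj):
        """Mark the object free again; it stays cached for reuse."""
        self.used.remove(obj)
        self.free.append(obj)

cache = SlabCache("semaphore")
s = cache.alloc()
cache.release(s)              # returned to the cache, ready for reuse
print(len(cache.free))        # 6 -> all objects free again
```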
Allocation of Frames