OS Chapter-8 Memory Management
MAIN MEMORY
OBJECTIVES
• Background
• Swapping
• Contiguous memory allocation
• Segmentation
• Paging
BACKGROUND
BASIC HARDWARE
To separate memory spaces, we need the
ability to determine the range of legal
addresses that the process may access and to
ensure that the process can access only these
legal addresses.
Compile time: If you know at compile time where the process will reside in
memory, then absolute code can be generated. For example, if you know that a
user process will reside starting at location R, then the compiler-generated code
will start at that location and extend up from there. If, at some later time, the
starting location changes, then it will be necessary to recompile this code.
Load time: If it is not known at compile time where the process will reside in
memory, then the compiler must generate relocatable code. In this case, final
binding is delayed until load time. If the starting address changes, we need only
reload the user code to incorporate this changed value.
Execution time: If the process can be moved during its execution from one
memory segment to another, then binding must be delayed until run time. Special
hardware must be available for this scheme to work. Most general-purpose
operating systems use this method.
DIFFERENCES BETWEEN LOGICAL AND PHYSICAL ADDRESSES
The basic difference between a logical and a physical address is that a logical
address is generated by the CPU from the perspective of a program, whereas a
physical address is a location that exists in the memory unit.
Logical Address Space is the set of all logical addresses generated by CPU for a
program whereas the set of all physical address mapped to corresponding
logical addresses is called Physical Address Space.
The logical address does not exist physically in the memory whereas physical
address is a location in the memory that can be accessed physically.
The logical address is generated by the CPU while the program is running
whereas the physical address is computed by the Memory Management Unit
(MMU).
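The MMU's simplest form, the relocation-register scheme with a limit register for protection, can be sketched as follows. This is a minimal model, and the register values (base 14000, limit 3000) are illustrative, not taken from the text:

```python
# Sketch of an MMU with relocation and limit registers, as used in
# contiguous allocation: a logical address is checked against the limit
# register and, if legal, added to the relocation (base) register.

def mmu_translate(logical, relocation, limit):
    """Return the physical address, or trap if the access is illegal."""
    if logical < 0 or logical >= limit:
        raise MemoryError("trap: addressing error (protection fault)")
    return relocation + logical

# Logical address 346 with base 14000 maps to physical address 14346.
print(mmu_translate(346, relocation=14000, limit=3000))  # -> 14346
```

Any logical address at or beyond the limit (here, 3000) causes a trap instead of a memory access, which is how the hardware ensures a process touches only its own legal addresses.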
DYNAMIC LOADING
With dynamic loading, a routine is not loaded until it is called. All routines are
kept on disk in a relocatable load format. The main program is loaded into
memory and is executed. When a routine needs to call another routine, the
calling routine first checks to see whether the other routine has been loaded. If
it has not, the relocatable linking loader is called to load the desired routine
into memory and to update the program’s address tables to reflect this change.
Then control is passed to the newly loaded routine.
This method is particularly useful when large amounts of code are needed to
handle infrequently occurring cases, such as error routines. In this case,
although the total program size may be large, the portion that is used (and
hence loaded) may be much smaller.
Dynamic loading does not require special support from the operating system. It
is the responsibility of the users to design their programs to take advantage of
such a method. Operating systems may help the programmer, however, by
providing library routines to implement dynamic loading.
STATIC LINKING
When we click the .exe (executable) file of the program and it starts running, all the
necessary contents of the binary file are loaded into the process’s virtual address
space. However, most programs also need to run functions from the system libraries, and
these library functions also need to be loaded.
In the simplest case, the necessary library functions are embedded directly in the
program’s executable binary file. Such a program is statically linked to its libraries, and
statically linked executable codes can commence running as soon as they are loaded.
Disadvantage:
Every generated program must contain copies of exactly the same common system library
functions. In terms of both physical memory and disk-space usage, it is much more
efficient to load the system libraries into memory only once. Dynamic linking allows this
single loading to happen.
DYNAMIC LINKING
Dynamic linking, in contrast, is similar to dynamic loading. Here, though, linking, rather
than loading, is postponed until execution time.
Without this facility, each program on a system must include a copy of its language
library (or at least the routines referenced by the program) in the executable image. This
requirement wastes both disk space and main memory.
With dynamic linking, a stub is included in the image for each library- routine reference.
The stub is a small piece of code that indicates how to locate the appropriate memory-
resident library routine or how to load the library if the routine is not already present.
When the stub is executed, it checks to see whether the needed routine is already in
memory. If it is not, the program loads the routine into memory.
This feature can be extended to library updates (such as bug fixes). A library may be
replaced by a new version, and all programs that reference the library will automatically
use the new version.
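The stub mechanism can be illustrated with a small self-replacing function. This is only an analogy in Python, and the routine chosen (`math.sqrt`) is a stand-in for a memory-resident library routine:

```python
# Illustrative stub from dynamic linking: the stub locates the real
# library routine on first use, then replaces itself so that later
# calls go straight to the routine.
import math

def _make_stub(name):
    def stub(*args):
        # Stub body: "load"/locate the memory-resident routine ...
        real = getattr(math, name)
        # ... then patch the reference so the stub is bypassed next time.
        globals()[name] = real
        return real(*args)
    return stub

sqrt = _make_stub("sqrt")  # the program image holds only the stub
print(sqrt(16.0))          # first call runs through the stub -> 4.0
print(sqrt is math.sqrt)   # the stub has replaced itself -> True
```

Because every call site goes through the stub indirection, swapping in a new library version updates all programs automatically, as the text notes for bug-fix releases.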
STATIC LOADING AND DYNAMIC LOADING
Static loading: Loading the entire program into main memory before the start of
program execution. Memory utilisation is inefficient, because the entire program is
brought into main memory whether it is required or not. It is faster, as each
program’s routines are already loaded into memory.
Dynamic loading: Loading the program into main memory on demand. Memory
utilisation is efficient. It is slower, as routines must be loaded whenever they are
needed by any section of the program.
STATIC LINKING AND DYNAMIC LINKING
SWAPPING
CONTIGUOUS MEMORY ALLOCATION
MEMORY ALLOCATION
Main memory usually has two partitions −
Low Memory − Operating system resides in this memory.
High Memory − User processes are held in high memory.
Operating system uses the following memory allocation mechanism.
ADDRESS TRANSLATION FROM LOGICAL TO
PHYSICAL ADDRESS IN CONTIGUOUS ALLOCATION
CONTIGUOUS MEMORY
The main memory must ALLOCATION
accommodate both the operating system and the
various user processes. We therefore need to allocate main memory in
the most effi cient way possible. The one way is contiguous memory
allocation.
The memory is usually divided into two partitions: one for the resident
operating system and one for the user processes.
External Fragmentation: The total unused space of the various partitions cannot be used to
load a process even though space is available, because it is not contiguous. For example, if
1 MB remains unused in each partition, those leftovers cannot be combined to store a 4 MB
process. Despite the fact that sufficient total space is available, the process
will not be loaded.
Limitation on the size of the process: If the process size is larger than the size of the maximum-
sized partition, then that process cannot be loaded into memory. Therefore, a limitation is
imposed on the process size: it cannot be larger than the size of the largest partition.
The first partition is reserved for the operating system. The remaining
space is divided into parts. The size of each partition is made equal to the
size of the process it holds; the partition size varies according to the need of the
process, so that internal fragmentation can be avoided.
DYNAMIC PARTITIONING
EXTERNAL FRAGMENTATION OCCURS
After some time P1 and P3 complete and their assigned space is
freed. Now there are two unused
partitions (5 MB and 3 MB) available
in the main memory, but they cannot
be used to load an 8 MB process into
memory, since they are not
contiguously located.
COMPACTION- SOLUTION TO EXTERNAL
FRAGMENTATION
Usually the remaining processes are moved to the top of memory and the free
spaces are brought together to form a single large block of contiguous
memory.
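The compaction step can be sketched as a small simulation. The memory layout below (block names and MB sizes) is an assumed example mirroring the 5 MB and 3 MB holes from the text:

```python
# Minimal sketch of compaction: used blocks are slid together and the
# free holes merge into one large contiguous hole at the end.

def compact(blocks):
    """blocks: list of (name, size_mb); name is None for a free hole."""
    used = [b for b in blocks if b[0] is not None]       # keep processes in order
    free = sum(size for name, size in blocks if name is None)
    return used + [(None, free)]                          # one merged hole

# Before compaction, the 5 MB and 3 MB holes cannot hold an 8 MB process.
memory = [("OS", 2), (None, 5), ("P2", 4), (None, 3), ("P4", 2)]
print(compact(memory))
# -> [('OS', 2), ('P2', 4), ('P4', 2), (None, 8)]: an 8 MB process now fits
```

The cost hidden by this sketch is the copying: every relocated process must be physically moved, which is why compaction is expensive and only possible with execution-time binding.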
MEMORY ALLOCATION
ALGORITHMS
First-fit: Allocate the first hole that is big enough. Searching can start either at
the beginning of the set of holes or at the location where the previous first-fit
search ended. We can stop searching as soon as we find a free hole that is large
enough.
Best-fit: Allocate the smallest hole that is big enough. We must search the entire
list, unless the list is ordered by size. This strategy produces the smallest leftover
hole.
Worst-fit: Allocate the largest hole. Again, we must search the entire list, unless
it is sorted by size. This strategy produces the largest leftover hole, which may be
more useful than the smaller leftover hole from a best-fit approach.
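The three strategies can be sketched over a plain list of hole sizes. This is a minimal model (hole sizes in KB) that ignores hole ordering and coalescing:

```python
# Each strategy returns the index of the chosen hole, or None if the
# process must wait because no hole is large enough.

def first_fit(holes, size):
    for i, h in enumerate(holes):
        if h >= size:
            return i          # stop at the first hole that fits
    return None

def best_fit(holes, size):
    fits = [i for i, h in enumerate(holes) if h >= size]
    return min(fits, key=lambda i: holes[i]) if fits else None  # smallest fit

def worst_fit(holes, size):
    fits = [i for i, h in enumerate(holes) if h >= size]
    return max(fits, key=lambda i: holes[i]) if fits else None  # largest hole

def place_all(strategy, holes, procs):
    holes, waiting = list(holes), []
    for p in procs:
        i = strategy(holes, p)
        if i is None:
            waiting.append(p)     # process must wait
        else:
            holes[i] -= p         # carve the process out of the hole
    return holes, waiting

# The worked example from the problem below: 212, 417, 112, 426 KB.
print(place_all(best_fit, [100, 500, 200, 300, 600], [212, 417, 112, 426]))
# -> ([100, 83, 88, 88, 174], []): best-fit places all four processes
```

Running `place_all` with `first_fit` or `worst_fit` on the same input leaves the 426 KB process waiting, matching the traces worked out in the problem below.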
FIRST-FIT
In this method, first job claims
the first available memory
with space more than or equal
to it’s size. The operating
system doesn’t search for
appropriate partition but just
allocate the job to the nearest
memory partition available
with suffi cient size.
WORST-FIT
In this allocation technique the
process traverse the whole
memory and always search for
largest hole/partition, and then
the process is placed in that
hole/partition.It is a slow
process because it has to
traverse the entire memory to
search largest hole.
BEST-FIT
This method keeps the
free/busy list in order by size
– smallest to largest. In this
method, the operating
system first searches the
whole of the memory
according to the size of the
given job and allocates it to
the closest-fitting free
partition in the memory,
making it able to use
memory effi ciently. Here the
jobs are in the order from
smalest job to largest job.
PROBLEMS
Que. 1. Consider a variable partition memory management scheme. Given five memory
partitions of 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB (in order), how would each of
the first-fit, best-fit, and worst-fit algorithms place processes of 212 KB, 417 KB, 112 KB,
and 426 KB (in order)?
Which algorithm makes the most efficient use of memory?
Let P1, P2, P3 and P4 be the names of the processes
a. First-fit:
P1>>> 100, 500, 200, 300, 600
P2>>> 100, 288, 200, 300, 600
P3>>> 100, 288, 200, 300, 183
100, 176, 200, 300, 183 <<<<< final set of holes
P4 (426K) must wait
b. Best-fit:
P1>>> 100, 500, 200, 300, 600
P2>>> 100, 500, 200, 88, 600
P3>>> 100, 83, 200, 88, 600
P4>>> 100, 83, 88, 88, 600
100, 83, 88, 88, 174 <<<<< final set of holes
c. Worst-fit:
P1>>> 100, 500, 200, 300, 600
P2>>> 100, 500, 200, 300, 388
P3>>> 100, 83, 200, 300, 388
100, 83, 200, 300, 276 <<<<< final set of holes
P4 (426K) must wait
In this example, best-fit turns out to be the best because no process has to wait.
SEGMENTATION
In operating systems, segmentation is a memory-management technique
in which memory is divided into variable-size parts. Each part is
known as a segment, which can be allocated to a process.
The details about each segment are stored in a table called the segment
table. The segment table is stored in one (or more) of the segments.
A logical address in segmentation is a two-tuple:
<segment-number, offset>
PAGING
Whenever a process arrives in the system for execution, its size is expressed
in pages.
The logical address space of a process can be noncontiguous; the process is
allocated physical memory wherever the latter is available.
Physical memory is divided into fixed-sized blocks called frames, and logical
memory into blocks of the same size called pages (the size is a power of 2,
typically between 512 bytes and 8,192 bytes).
To run a program of size n pages, we need to find n free frames and load the
program.
The page table stores the base address (frame number) of the frame
corresponding to each page.
A computer system assigns binary addresses to memory locations, using a
certain number of bits to address each location.
Using 1 bit, we can address two memory locations; using 2 bits, four;
and using 3 bits, eight memory locations.
A pattern can be identified in the mapping between the number of bits in the
address and the range of memory locations: n bits address 2^n locations, and
these n bits can be divided into two parts: k bits and (n − k) bits.
ADDRESS TRANSLATION SCHEME
A logical address is divided into a page number p and a page offset d:
| p (m − n bits) | d (n bits) |
PAGE SIZE
The page size (like the frame size) is defined by the hardware.
The size of a page is a power of 2, varying between 512 bytes and 1GB per
page, depending on the computer architecture.
If the size of the logical address space is 2^m, and a page size is 2^n
bytes, then the high-order m − n bits of a logical address designate the
page number, and the n low-order bits designate the page offset.
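This bit split is just a shift and a mask. A minimal sketch, using the 1-KB page size (n = 10) from the problems that follow:

```python
# Split a logical address into (page number, offset) for a 2^n-byte page.

def split_address(addr, n):
    page = addr >> n                 # high-order m - n bits
    offset = addr & ((1 << n) - 1)   # low-order n bits
    return page, offset

print(split_address(2378, 10))   # -> (2, 330): page 2, offset 330
print(split_address(19360, 10))  # -> (18, 928)
print(split_address(34560, 10))  # -> (33, 768)
```

Because the page size is a power of 2, no division is needed; the hardware extracts both fields directly from the address bits.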
ADDRESS TRANSLATION IN
PAGING
PAGE TABLE
PROBLEMS
Q1. Assuming a 1-KB page size, what are the page numbers and offsets for
the following address references (provided as decimal numbers):
2378
19360
34560
Q2. Consider a virtual address space of eight pages with 1024 bytes each,
mapped onto a physical memory of 32 frames. How many bits are used in
the virtual address? How many bits are used in the physical address?
Q1.
Page size = 1 KB = 1024 bytes
(i) 2378 / 1024 = 2.32, so address 2378 lies in page no. 2 (pages are numbered
from 0), with offset 330 (2 × 1024 + 330 = 2378).
(ii) 19360 / 1024 = 18.9, so page no. 18, with offset 928 (18 × 1024 + 928 = 19360).
(iii) 34560 / 1024 = 33.75, so page no. 33, with offset 768 (33 × 1024 + 768 = 34560).
Q2. Solution:
Virtual address space = 8 pages × 1024 bytes = 2^13 bytes, so there are 13 bits
in the virtual address.
Physical memory = 32 frames × 1024 bytes = 2^15 bytes, so there are 15 bits
in the physical address.
On a context switch, the TLB must be cleared or flushed before the
next process runs, so that the next process does not use the previous
process's translations.
Without a TLB, the effective access time is 2m: one memory access m for the
page-table entry plus one memory access m for the word itself.
Q1. Consider a paging system with the page table stored in memory.
a. If a memory reference takes 200 nanoseconds, how long does a
paged memory reference take?
b. If we add associative registers, and 75 percent of all page-table
references are found in the associative registers, what is the
effective memory reference time? (Assume that finding a page-
table entry in the associative registers takes zero time, if the
entry is there.)
Q2. Consider a paging system with the page table stored in memory:
a. If a paged memory reference takes 220 nanoseconds, how long
does a memory reference take?
b. If we add associative registers, and we have an effective access
time of 150 nanoseconds, what is the hit ratio? Assume access to
the associative registers takes 5 nanoseconds.
SOLUTION
Ans1.
a. 400 nanoseconds: 200 nanoseconds to access the page table and 200
nanoseconds to access the word in memory.
b. Effective access time = 0.75 × 200 + 0.25 × 400 = 250 nanoseconds.
Ans2.
a. 110 ns (a paged reference costs two memory accesses, so 220 / 2 = 110 ns).
b. Let h be the hit ratio: h × (5 + 110) + (1 − h) × (5 + 110 + 110) = 150,
so 225 − 110h = 150 and h = 75 / 110 ≈ 0.68 (about a 68% hit ratio).
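The effective-access-time calculations above follow one formula, sketched below (times in nanoseconds; a TLB hit costs one memory access, a miss costs two):

```python
# Effective access time with an optional TLB:
#   hit  -> TLB lookup + one memory access (frame number came from the TLB)
#   miss -> TLB lookup + page-table access + memory access

def eat(hit_ratio, mem, tlb=0.0):
    hit = tlb + mem
    miss = tlb + mem + mem   # extra access for the in-memory page table
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(eat(0.75, 200))            # Q1(b): 0.75*200 + 0.25*400 -> 250.0 ns
print(eat(75 / 110, 110, tlb=5)) # Q2(b): hit ratio 75/110 gives 150.0 ns
```

The same helper covers review question 9 at the end of the chapter by plugging in its 150 ns memory time, 10 ns TLB time, and 70% hit ratio.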
Shared code
• The pages for the private code and data can appear
anywhere in the logical address space
1. Which address binding scheme generates different logical and physical addresses?
2. Name the memory management technique that supports the programmer’s view of memory.
3. Assuming a 1-KB page size, what are the page numbers and offsets for the following address reference (provided as decimal numbers):
8370
4700
4. What do you understand by Swapping? List any two reasons why swapping is not supported on Mobile systems.
Swapping is a memory-management scheme in which a process can be temporarily swapped out of main memory to
secondary storage, so that main memory can be made available for other processes.
Swapping is not supported on mobile systems because they use flash memory, which has limited capacity and
tolerates only a limited number of writes, and because they generally lack dedicated secondary storage.
5. Given memory partitions of 100KB, 500KB, 200KB, 300KB and 600KB (in order), how would each of the best-fit and worst-fit algorithms place
processes of 212KB, 417KB, 120KB and 426KB(in order)? Which algorithm makes the most efficient use of memory?
6. Differentiate between external and internal fragmentation by taking suitable examples.
7. Consider a logical address space of eight pages of 1024 words each, mapped onto a physical memory of 64 frames.
How many bits are there in the logical address?
How many bits are there in the physical address?
8. If page size = 2048 bytes and process size = 80,766 bytes, then find the number of pages needed for a process to be allocated using the paging memory-
management technique.
9. Consider a paging system with the page table stored in memory. If a memory reference takes 150 ns, a TLB is added, and 70% of all page-table
references are found in the TLB, what is the effective memory-access time? (Assume that finding the page-table entry in the TLB takes 10 ns, if the entry is
there.)
10. What is a stub in dynamic linking?