
MODULE 4

Memory Management
CONCEPT OF ADDRESS SPACES, SWAPPING, CONTIGUOUS MEMORY
ALLOCATION, FIXED AND VARIABLE PARTITIONS, SEGMENTATION,
PAGING. VIRTUAL MEMORY, DEMAND PAGING, PAGE REPLACEMENT
ALGORITHMS - FIFO, OPTIMAL, LRU

Introduction
 The main purpose of a computer system is to execute programs. These programs, together
with the data they access, must be at least partially in main memory during execution.
 Modern computer systems maintain several processes in memory during system execution.
Many memory-management schemes exist, reflecting various approaches, and the
effectiveness of each algorithm varies with the situation. Selection of a memory
management scheme for a system depends on many factors, especially on the system’s
hardware design. Most algorithms require some form of hardware support.
 Memory consists of a large array of bytes, each with its own address. The CPU fetches
instructions from memory according to the value of the program counter. These instructions
may cause additional loading from and storing to specific memory addresses.
 The memory unit sees only a stream of memory addresses; it does not know how they are
generated (by the instruction counter, indexing, indirection, literal addresses, and so on) or
what they are for (instructions or data).

Concept of address spaces


 Registers that are built into each CPU core are generally accessible within one cycle of the
CPU clock.
 Main memory is accessed via a transaction on the memory bus, and completing a memory
access may take many cycles of the CPU clock.
o In such cases, the processor normally needs to stall, since it does not have the data
required to complete the instruction it is executing. This situation is intolerable
because of the frequency of memory accesses.
o The remedy is to add fast memory between the CPU and main memory, typically on
the CPU chip for fast access, in the form of a cache.
 For proper system operation, we must protect the operating system from access by user
processes, as well as protect user processes from one another.
 We first need to make sure that each process has a separate memory space.
o To separate memory spaces, we need the ability to determine the range of legal
addresses that the process may access and to ensure that the process can access
only these legal addresses.
o We can provide this protection by using two registers, usually a base and a limit.
o The base register holds the smallest legal physical memory address; the limit register
specifies the size of the range.
 Protection of memory space is accomplished by having the CPU hardware compare every
address generated in user mode with the registers. Any attempt by a program executing in
user mode to access operating-system memory or other users’ memory results in a trap to
the operating system, which treats the attempt as a fatal error.
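The base/limit comparison described above can be sketched in a few lines. This is an illustrative model only (the check is done in hardware, not software); the function name and register values are hypothetical.

```python
# Hypothetical sketch of the base/limit hardware check. 'base' holds the
# smallest legal physical address, 'limit' the size of the legal range.
def legal_access(addr: int, base: int, limit: int) -> bool:
    """Return True if a user-mode address passes the hardware check."""
    # When this comparison fails, the CPU traps to the operating system.
    return base <= addr < base + limit

# Illustrative values: base = 300040, limit = 120900.
assert legal_access(300040, 300040, 120900)      # first legal address
assert legal_access(420939, 300040, 120900)      # last legal address
assert not legal_access(420940, 300040, 120900)  # one past the range: trap
```

Any address below the base or at or beyond base + limit fails the check, which is how both the operating system and other processes are protected.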
 This scheme prevents a user program from (accidentally or deliberately) modifying the code
or data structures of either the operating system or other users.
 The base and limit registers can be loaded only by the operating system, which uses a special
privileged instruction. Since privileged instructions can be executed only in kernel mode, and
since only the operating system executes in kernel mode, only the operating system can load
the base and limit registers. This scheme allows the operating system to change the value of
the registers but prevents user programs from changing the registers’ contents.
 The operating system, executing in kernel mode, is given unrestricted access to both
operating-system memory and users’ memory. This provision allows the operating system to
load users’ programs into users’ memory, to dump out those programs in case of errors, to
access and modify parameters of system calls, to perform I/O to and from user memory, and
to provide many other services.
 In most cases, a user program goes through several steps before being executed.
Addresses may be represented in different ways during these steps. Addresses in the
source program are generally symbolic. A compiler typically binds these symbolic
addresses to relocatable addresses. The linker or loader in turn binds the relocatable
addresses to absolute addresses. Each binding is a mapping from one address space to
another.

 Classically, the binding of instructions and data to memory addresses can be done at any
step along the way:
o Compile time: If you know at compile time where the process will reside in
memory, then absolute code can be generated.
o Load time: If it is not known at compile time where the process will reside in
memory, then the compiler must generate relocatable code.
o Execution time: If the process can be moved during its execution from one
memory segment to another, then binding must be delayed until run time.

Logical Versus Physical Address Space


 An address generated by the CPU is commonly referred to as a logical address or virtual
address.
 An address seen by the memory unit— i.e., the one loaded into the memory-address
register of the memory—is commonly referred to as a physical address.
 Binding addresses at either compile or load time generates identical logical and physical
addresses. However, the execution-time address-binding scheme results in differing
logical and physical addresses.
 The set of all logical addresses generated by a program is a logical address space.
 The set of all physical addresses corresponding to these logical addresses is a physical
address space.
 The run-time mapping from virtual to physical addresses is done by a hardware device
called the memory-management unit (MMU).

 The value in the relocation register (the base register is now called a relocation register) is
added to every address generated by a user process at the time the address is sent to
memory.
 We now have two different types of addresses: logical addresses (in the range 0 to max)
and physical addresses (in the range R + 0 to R + max for a base value R).
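The relocation mapping can be sketched as follows. This is a minimal model of the MMU's behavior, assuming a limit check before relocation; the class name and the register values are illustrative, not part of any real API.

```python
# Minimal sketch of dynamic relocation: the MMU checks each logical
# address against the limit, then adds the relocation register R.
class MMU:
    def __init__(self, relocation: int, limit: int):
        self.relocation = relocation  # R: smallest physical address
        self.limit = limit            # size of the logical range

    def translate(self, logical: int) -> int:
        if not (0 <= logical < self.limit):
            raise MemoryError("trap: address out of range")
        return self.relocation + logical

mmu = MMU(relocation=14000, limit=5000)
print(mmu.translate(346))   # 14346: logical 346 relocated by R = 14000
```

Logical addresses run from 0 to max; the user program never sees the physical addresses R + 0 to R + max.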

 To obtain better memory-space utilization, we can use dynamic loading. With dynamic
loading, a routine is not loaded until it is called. All routines are kept on disk in a
relocatable load format. The main program is loaded into memory and is executed.
When a routine needs to call another routine, the calling routine first checks to see
whether the other routine has been loaded. If it has not, the relocatable linking loader is
called to load the desired routine into memory and to update the program’s address
tables to reflect this change. Then control is passed to the newly loaded routine.
o The advantage of dynamic loading is that a routine is loaded only when it is
needed. This method is particularly useful when large amounts of code are
needed to handle infrequently occurring cases, such as error routines.
 Dynamically linked libraries (DLLs) are system libraries that are linked to user programs
when the programs are run.

Contiguous memory allocation


 The memory is usually divided into two partitions: one for the operating system and one for
the user processes.
 In contiguous memory allocation, each process is contained in a single section of memory
that is contiguous to the section containing the next process.
 The issue of memory protection can be addressed by a relocation register together with a limit
register. The relocation register contains the value of the smallest physical address; the limit
register contains the range of logical addresses. Each logical address must fall within the
range specified by the limit register. The MMU maps the logical address dynamically by
adding the value in the relocation register. This mapped address is sent to memory.
 When the CPU scheduler selects a process for execution, the dispatcher loads the relocation
and limit registers with the correct values as part of the context switch. Because every
address generated by a CPU is checked against these registers, we can protect both the
operating system and the other users’ programs and data from being modified by this
running process.
 The relocation-register scheme provides an effective way to allow the operating system’s size
to change dynamically. This flexibility is desirable in many situations. If a device driver is not
currently in use, it makes little sense to keep it in memory; instead, it can be loaded into
memory only when it is needed. Likewise, when the device driver is no longer needed, it can
be removed and its memory allocated for other needs.

 One of the simplest methods of allocating memory is to assign processes to variably sized
partitions in memory, where each partition may contain exactly one process.
 In this variable-partition scheme, the operating system keeps a table indicating which parts
of memory are available and which are occupied. Initially, all memory is available for user
processes and is considered one large block of available memory, a hole. Eventually, as you
will see, memory contains a set of holes of various sizes.
 As an example, suppose memory is initially fully utilized, containing processes 5, 8, and 2. After process 8 leaves,
there is one contiguous hole. Later on, process 9 arrives and is allocated memory. Then
process 5 departs, resulting in two non-contiguous holes.
 In general, when a process arrives and needs memory, the system searches the set for a hole
that is large enough for this process. If the hole is too large, it is split into two parts. One part
is allocated to the arriving process; the other is returned to the set of holes. When a process
terminates, it releases its block of memory, which is then placed back in the set of holes. If
the new hole is adjacent to other holes, these adjacent holes are merged to form one larger
hole.
 This procedure is a particular instance of the general dynamic storage allocation problem,
which concerns how to satisfy a request of size n from a list of free holes. There are many
solutions to this problem. The first-fit, best-fit, and worst-fit strategies are the ones most
commonly used to select a free hole from the set of available holes.
o First fit. Allocate the first hole that is big enough. Searching can start either at the
beginning of the set of holes or at the location where the previous first-fit search
ended. We can stop searching as soon as we find a free hole that is large enough.
o Best fit. Allocate the smallest hole that is big enough. We must search the entire list,
unless the list is ordered by size. This strategy produces the smallest leftover hole.
o Worst fit. Allocate the largest hole. Again, we must search the entire list, unless it is
sorted by size. This strategy produces the largest leftover hole, which may be more
useful than the smaller leftover hole from a best-fit approach.
 Simulations have shown that both first fit and best fit are better than worst fit in terms of
decreasing time and storage utilization. Neither first fit nor best fit is clearly better than the
other in terms of storage utilization, but first fit is generally faster.
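The three placement strategies can be sketched over a free list of holes. The representation below (a list of (start, size) pairs) and the function names are illustrative choices, not from any real allocator.

```python
# Each hole is (start, size). All three strategies pick a hole of at
# least n bytes; they differ only in which qualifying hole they choose.
def first_fit(holes, n):
    for start, size in holes:        # stop at the first hole big enough
        if size >= n:
            return start
    return None

def best_fit(holes, n):              # smallest hole that is big enough
    fits = [(size, start) for start, size in holes if size >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):             # largest hole
    fits = [(size, start) for start, size in holes if size >= n]
    return max(fits)[1] if fits else None

holes = [(100, 50), (300, 200), (600, 120)]  # free list: (start, size)
print(first_fit(holes, 110))  # 300: first hole with size >= 110
print(best_fit(holes, 110))   # 600: size 120 is the tightest fit
print(worst_fit(holes, 110))  # 300: size 200 is the largest hole
```

Note that best fit and worst fit scan the whole list (unless it is kept sorted by size), while first fit can stop early, which is why first fit is generally faster.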

Fragmentation
 Both the first-fit and best-fit strategies for memory allocation suffer from external
fragmentation. As processes are loaded and removed from memory, the free memory space
is broken into little pieces.
 External fragmentation exists when there is enough total memory space to satisfy a request
but the available spaces are not contiguous: storage is fragmented into a large number of
small holes. This fragmentation problem can be severe. In the worst case, we could have a
block of free (or wasted) memory between every two processes. If all these small pieces of
memory were in one big free block instead, we might be able to run several more processes.
 The unused memory that is internal to a partition is called internal fragmentation.
 One solution to the problem of external fragmentation is compaction, but it is expensive.
Another possible solution is to permit the logical address space of processes to be
non-contiguous, thus allowing a process to be allocated physical memory wherever such
memory is available. This is the strategy used in paging, the most common
memory-management technique for computer systems.
Paging
 The basic method for implementing paging involves breaking physical memory into fixed-
sized blocks called frames and breaking logical memory into blocks of the same size called
pages.
 When a process is to be executed, its pages are loaded into any available memory frames
from their source (a file system or the backing store)
 The backing store is divided into fixed-sized blocks that are the same size as the memory
frames or clusters of multiple frames.
 Every address generated by the CPU is divided into two parts: a page number (p) and a page
offset (d):

 The page number is used as an index into a per-process page table.


 The page table contains the base address of each frame in physical memory, and the offset is
the location in the frame being referenced. Thus, the base address of the frame is combined
with the page offset to define the physical memory address.
 The following outlines the steps taken by the MMU to translate a logical address generated
by the CPU to a physical address:
o Extract the page number p and use it as an index into the page table.
o Obtain the corresponding frame number f from the page table.
o Replace the page number p in the logical address with the frame number f.
 As the offset d does not change, it is not replaced, and the frame number and offset now
comprise the physical address.
 The selection of a power of 2 as a page size makes the translation of a logical address into a
page number and page offset particularly easy. If the size of the logical address space is 2^m,
and the page size is 2^n bytes, then the high-order m − n bits of a logical address designate the
page number, and the n low-order bits designate the page offset. Thus, the logical address is
as follows:

 where p is an index into the page table and d is the displacement within the page.

 For example, consider the memory in Figure 9.10. Here, in the logical address, n = 2 and m =
4. Using a page size of 4 bytes and a physical memory of 32 bytes (8 frames), we show how
the programmer’s view of memory can be mapped into physical memory. Logical address 0 is
page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus, logical
address 0 maps to physical address 20 [= (5 × 4) + 0]. Logical address 3 (page 0, offset 3)
maps to physical address 23 [= (5 × 4) + 3]. Logical address 4 is page 1, offset 0; according to
the page table, page 1 is mapped to frame 6. Thus, logical address 4 maps to physical address
24 [= (6 × 4) + 0]. Logical address 13 maps to physical address 9.
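The Figure 9.10 example can be reproduced in code. The entries for pages 0 and 1 (frames 5 and 6) come from the text; the entries for pages 2 and 3 (frames 1 and 2) are assumptions consistent with address 13 mapping to 9.

```python
# Page size 4 bytes (n = 2), 16-byte logical space (m = 4).
PAGE_SIZE = 4                      # 2**n with n = 2
page_table = [5, 6, 1, 2]          # page -> frame (pages 2, 3 assumed)

def translate(logical: int) -> int:
    p = logical // PAGE_SIZE       # high-order m - n bits: page number
    d = logical % PAGE_SIZE        # low-order n bits: page offset
    return page_table[p] * PAGE_SIZE + d

print(translate(0))    # 20 = (5 * 4) + 0
print(translate(3))    # 23 = (5 * 4) + 3
print(translate(4))    # 24 = (6 * 4) + 0
print(translate(13))   # 9  = (2 * 4) + 1
```

Because the page size is a power of 2, the division and modulus here amount to taking the high-order and low-order bits of the address (equivalently, `logical >> 2` and `logical & 0b11`).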
 When we use a paging scheme, we have no external fragmentation: any free frame can be
allocated to a process that needs it. However, we may have some internal fragmentation.
 When a process arrives in the system to be executed, its size, expressed in pages, is
examined. Each page of the process needs one frame. Thus, if the process requires n pages,
at least n frames must be available in memory. If n frames are available, they are allocated to
this arriving process. The first page of the process is loaded into one of the allocated frames,
and the frame number is put in the page table for this process. The next page is loaded into
another frame, its frame number is put into the page table, and so on.

 Since the operating system is managing physical memory, it must be aware of the allocation
details of physical memory—which frames are allocated, which frames are available, how
many total frames there are, and so on. This information is generally kept in a single, system-
wide data structure called a frame table. The frame table has one entry for each physical
page frame, indicating whether the latter is free or allocated and, if it is allocated, to which
page of which process (or processes).
