Class test 1

The document outlines the steps of address binding, defining logical and physical addresses, and explaining external fragmentation and compaction in memory management. It discusses the Translation Look-aside Buffer (TLB) and various memory allocation strategies, including contiguous memory allocation and fragmentation issues. Additionally, it covers memory protection mechanisms and the implications of different memory allocation strategies on fragmentation.

1. What are the steps in which address binding is carried out?

1. Compile time: If memory location known a priori, absolute code can be generated;
must recompile code if starting location changes.
2. Load time: Must generate relocatable code if memory location is not known at
compile time. Binding is delayed until load time.
3. Execution time: Binding delayed until run time if the process can be moved during
its execution from one memory segment to another. Need hardware support for
address maps (e.g., base and limit registers).

2. Define logical address and physical address.

An address generated by the CPU is referred to as a logical address. An address seen
by the memory unit, that is, the one loaded into the memory-address register of the memory,
is commonly referred to as a physical address.

3. What is the concept of external fragmentation?


External fragmentation exists when enough total memory space exists to satisfy a request,
but it is not contiguous.

It is a memory management problem in which frequent requests and releases of
groups of contiguous page frames of different sizes may lead to a situation in which several
small blocks of free page frames are "scattered" inside blocks of allocated page frames.
That is, storage is fragmented into a large number of small holes.
As a result, it may become impossible to allocate a large block of contiguous page
frames, even if there are enough free pages to satisfy the request.

4. What is compaction?

Compaction shuffles memory contents to place all free memory together in one large
block. Compaction is possible only if relocation is dynamic and is done at execution time.
Compaction provides a solution to external fragmentation.

5. Describe: Translation Look-aside Buffer (TLB).

It is a special, small, fast-lookup hardware cache that solves the problem of needing
two memory accesses to access a byte. The TLB is associative, high-speed memory.
It contains a few of the page-table entries; if the page number is found, its frame
number is immediately available and is used to access memory.

Part B

1. Consider the following page reference string


5, 0, 1, 5, 0, 3, 0, 1, 2, 3, 1, 3, 2, 1, 2, 0, 1, 7, 0, 1,4,1

How many page faults would occur for the following replacement algorithms, assuming three
frames? Remember that all frames are initially empty, so your first unique pages will all cost
one fault each.
 FIFO replacement
 Optimal replacement
 LRU replacement
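The three policies can be worked out by simulation. The sketch below assumes exactly the conditions stated above (three initially empty frames); the optimal policy breaks ties between pages that are never used again arbitrarily, which does not change the fault count here. For this reference string the simulation reports 9 faults for FIFO, 8 for optimal, and 10 for LRU.

```python
from collections import OrderedDict

REF = [5, 0, 1, 5, 0, 3, 0, 1, 2, 3, 1, 3, 2, 1, 2, 0, 1, 7, 0, 1, 4, 1]

def fifo_faults(ref, nframes):
    frames, queue, faults = set(), [], 0
    for page in ref:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.pop(0))   # evict the oldest arrival
            frames.add(page)
            queue.append(page)
    return faults

def lru_faults(ref, nframes):
    frames, faults = OrderedDict(), 0          # keys ordered least -> most recently used
    for page in ref:
        if page in frames:
            frames.move_to_end(page)           # mark as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)     # evict the least recently used page
            frames[page] = None
    return faults

def optimal_faults(ref, nframes):
    frames, faults = set(), 0
    for i, page in enumerate(ref):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(p):                   # position of p's next use, inf if none
                try:
                    return ref.index(p, i + 1)
                except ValueError:
                    return float('inf')
            # evict the page whose next use is farthest in the future
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults
```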

1. Explain contiguous memory allocation.

 The main memory must accommodate both the operating system and the various
user processes.
 Therefore it is needed to allocate main memory in the most efficient way possible.
 The memory is usually divided into two partitions: one for the resident operating
system and one for the user processes.
 We can place the operating system in either low memory or high memory. The
major factor affecting this decision is the location of the interrupt vector. Since
the interrupt vector is often in low memory, programmers usually place the
operating system in low memory as well.
 We usually want several user processes to reside in memory at the same time.
We therefore need to consider how to allocate available memory to the
processes that are in the input queue waiting to be brought into memory.
 In contiguous memory allocation, each process is contained in a single
section of memory that is contiguous to the section containing the next process.

Memory Protection

 Preventing a process from accessing memory it does not own can be achieved by
combining a relocation register with a limit register.
 The relocation register contains the value of the smallest physical address; the
limit register contains the range of logical addresses (for example, relocation =
100040 and limit = 74600).
 Each logical address must fall within the range specified by the limit register. The
MMU maps the logical address dynamically by adding the value in the relocation
register. This mapped address is sent to memory.

 When the CPU scheduler selects a process for execution, the dispatcher loads
the relocation and limit registers with the correct values as part of the context
switch. Because every address generated by a CPU is checked against these
registers, we can protect both the operating system and the other users’
programs and data from being modified by this running process.
 The relocation-register scheme provides an effective way to allow the operating
system’s size to change dynamically. This flexibility is desirable in many
situations.
 For example, the operating system contains code and buffer space for device
drivers. If a device driver (or other operating-system service) is not commonly
used, we do not want to keep the code and data in memory, as we might be able
to use that space for other purposes. Such code is sometimes called transient
operating-system code; it comes and goes as needed.
 Thus, using this code changes the size of the operating system during program
execution.
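The relocation/limit check performed by the MMU can be sketched directly, using the register values given in the text (relocation = 100040, limit = 74600). This is an illustrative model of the hardware check, not an implementation; the `MemoryError` stands in for the addressing-error trap to the operating system.

```python
RELOCATION = 100040   # smallest physical address of the partition (value from the text)
LIMIT = 74600         # range of valid logical addresses (value from the text)

def mmu_map(logical_addr):
    """Map a logical address to a physical one; trap if it is out of range."""
    if not 0 <= logical_addr < LIMIT:
        # every CPU-generated address is checked against the limit register
        raise MemoryError("addressing error: trap to operating system")
    # valid addresses are relocated by adding the relocation register
    return logical_addr + RELOCATION
```

With these values, logical address 0 maps to physical address 100040 and logical address 74599 to 174639, while any address at or beyond the limit causes a trap.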

Memory Allocation

 One of the simplest methods for allocating memory is to divide memory into
several fixed-sized partitions. Each partition may contain exactly one process.
Thus, the degree of multiprogramming is bound by the number of partitions.
 In this multiple partition method, when a partition is free, a process is selected
from the input queue and is loaded into the free partition. When the process
terminates, the partition becomes available for another process.
 In the variable-partition scheme, the operating system keeps a table indicating
which parts of memory are available and which are occupied.
 Initially, all memory is available for user processes and is considered one large
block of available memory, a hole.
 Eventually, memory contains a set of holes of various sizes.
 As processes enter the system, they are put into an input queue. The operating
system takes into account the memory requirements of each process and the
amount of available memory space in determining which processes are allocated
memory. When a process is allocated space, it is loaded into memory, and it can
then compete for CPU time. When a process terminates, it releases its memory,
which the operating system may then fill with another process from the input
queue.
 At any given time, then, we have a list of available block sizes and an input
queue. The operating system can order the input queue according to a
scheduling algorithm.
 Memory is allocated to processes until, finally, the memory requirements of the
next process cannot be satisfied—that is, no available block of memory (or hole)
is large enough to hold that process.
 The operating system can then wait until a large enough block is available, or it
can skip down the input queue to see whether the smaller memory requirements
of some other process can be met.
 In general, as mentioned, the memory blocks available comprise a set of holes of
various sizes scattered throughout memory. When a process arrives and
needs memory, the system searches the set for a hole that is large enough for
this process.
 If the hole is too large, it is split into two parts. One part is allocated to the arriving
process; the other is returned to the set of holes. When a process terminates, it
releases its block of memory, which is then placed back in the set of holes.
 If the new hole is adjacent to other holes, these adjacent holes are merged to
form one larger hole.
 At this point, the system may need to check whether there are processes waiting
for memory and whether this newly freed and recombined memory could satisfy
the demands of any of these waiting processes.
 This procedure is a particular instance of the general dynamic storage
allocation problem, which concerns how to satisfy a request of size n from a list
of free holes.
 There are many solutions to this problem. The first-fit, best-fit, and worst-fit
strategies are the ones most commonly used to select a free hole from the set of
available holes.
1. First fit. Allocate the first hole that is big enough. Searching can start
either at the beginning of the set of holes or at the location where the
previous first-fit search ended. We can stop searching as soon as we find
a free hole that is large enough.
2. Best fit. Allocate the smallest hole that is big enough. We must search
the entire list, unless the list is ordered by size. This strategy produces the
smallest leftover hole.
3. Worst fit. Allocate the largest hole. Again, we must search the entire list,
unless it is sorted by size. This strategy produces the largest leftover hole,
which may be more useful than the smaller leftover hole from a best-fit
approach.

 Simulations have shown that both first fit and best fit are better than worst fit in
terms of decreasing time and storage utilization. Neither first fit nor best fit is
clearly better than the other in terms of storage utilization, but first fit is generally
faster.
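The three hole-selection strategies above can be contrasted with a small sketch. The hole list and request size are assumed values chosen so that each strategy picks a different hole; the functions return the index of the chosen hole, leaving splitting and bookkeeping aside.

```python
def first_fit(holes, size):
    """Index of the first hole that is big enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that is big enough, or None.

    Requires scanning the entire list unless it is kept sorted by size.
    """
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Index of the largest hole, provided it is big enough, or None."""
    h, i = max((h, i) for i, h in enumerate(holes))
    return i if h >= size else None
```

For hole sizes `[100, 500, 200, 300, 600]` and a request of 212, first fit takes the 500-byte hole (index 1), best fit the 300-byte hole (index 3, leaving the smallest leftover), and worst fit the 600-byte hole (index 4, leaving the largest leftover).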

Fragmentation

 Both the first-fit and best-fit strategies for memory allocation suffer from external
fragmentation.
 As processes are loaded and removed from memory, the free memory space is
broken into little pieces. External fragmentation exists when there is enough total
memory space to satisfy a request but the available spaces are not contiguous:
storage is fragmented into a large number of small holes. This fragmentation
problem can be severe.
 In the worst case, we could have a block of free (or wasted) memory between
every two processes. If all these small pieces of memory were in one big free
block instead, we might be able to run several more processes.
 Whether we are using the first-fit or best-fit strategy can affect the amount of
fragmentation. (First fit is better for some systems, whereas best fit is better for
others.)
 Depending on the total amount of memory storage and the average process size,
external fragmentation may be a minor or a major problem. Statistical analysis of
first fit, for instance, reveals that, even with some optimization, given N allocated
blocks, another 0.5 N blocks will be lost to fragmentation. That is, one-third of
memory may be unusable! This property is known as the 50-percent rule.
 Memory fragmentation can be internal as well as external.
 Consider a multiple-partition allocation scheme with a hole of 18,464 bytes.
Suppose that the next process requests 18,462 bytes. If we allocate exactly the
requested block, we are left with a hole of 2 bytes. The overhead to keep track of
this hole will be substantially larger than the hole itself.
 The general approach to avoiding this problem is to break the physical memory
into fixed-sized blocks and allocate memory in units based on block size.
 With this approach, the memory allocated to a process may be slightly larger than
the requested memory. The difference between these two numbers is internal
fragmentation—unused memory that is internal to a partition.
 One solution to the problem of external fragmentation is compaction. The goal is
to shuffle the memory contents so as to place all free memory together in one
large block.
 Compaction is not always possible, however. If relocation is static and is done at
assembly or load time, compaction cannot be done. It is possible only if
relocation is dynamic and is done at execution time. If addresses are relocated
dynamically, relocation requires only moving the program and data and then
changing the base register to reflect the new base address.
 When compaction is possible, we must determine its cost. The simplest
compaction algorithm is to move all processes toward one end of memory; all
holes move in the other direction, producing one large hole of available memory.
This scheme can be expensive.
 Another possible solution to the external-fragmentation problem is to permit the
logical address space of the processes to be noncontiguous, thus allowing a
process to be allocated physical memory wherever such memory is available.
 Two complementary techniques achieve this solution: segmentation and
paging. These techniques can also be combined.
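The block-rounding approach to internal fragmentation described above can be made concrete with a short calculation. The 4096-byte block size is an assumption for illustration (the text does not fix one); the 18,462-byte request is the figure from the example.

```python
import math

BLOCK = 4096  # assumed fixed block size; chosen only for illustration

def allocate(request_bytes):
    """Round a request up to whole blocks; return (granted, internal_fragmentation)."""
    granted = math.ceil(request_bytes / BLOCK) * BLOCK
    return granted, granted - request_bytes
```

With 4096-byte blocks, the 18,462-byte request is rounded up to five blocks (20,480 bytes), so 2,018 bytes of the allocation are internal fragmentation: unused memory internal to the partition, traded for the cheap bookkeeping of whole blocks instead of tracking tiny holes such as the 2-byte one above.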

2.
