Chapter 9: Main Memory

Operating System Concepts – 10th Edition Silberschatz, Galvin and Gagne ©2018
Outline
 Background
 Contiguous Memory Allocation
  • Variable Partition
 Non-contiguous Memory Allocation
  • Segmentation
  • Paging
Objectives

 To provide a detailed description of the various ways of organizing memory hardware
 To discuss various memory-management techniques, including segmentation and paging

Background
 Program must be brought (from disk) into memory and placed within
a process for it to be run
 Main memory is central to the operation of a modern computer
system.
 Memory consists of a large array of bytes, each with its own address.
 In this chapter, we discuss various ways to manage memory.

Background
 Main memory and the registers built into each processing core are
the only general-purpose storage that the CPU can access directly.
• There are machine instructions that take memory addresses as arguments,
but none that take disk addresses.
• Therefore, any instructions in execution, and any data being used by the
instructions, must be in one of these direct-access storage devices. If the
data are not in memory, they must be moved there before the CPU can
operate on them.

 Register access is done in one CPU clock (or less)


 Main memory can take many cycles, causing a stall
• The CPU fetches instructions from memory according to the value of the
program counter. These instructions may cause additional loading from and
storing to specific memory addresses.

 Memory unit only sees a stream of:
• addresses + read requests, or
• addresses + data + write requests
 Cache sits between main memory and CPU registers

Protection
 Protection of memory is required to ensure correct operations.
 i.e. we need to ensure that a process can access only those
addresses in its address space. HOW?
 We can provide this protection by using a pair of base and limit
registers which define the logical address space of a process

1. The base register holds the smallest legal physical memory address.
2. The limit register specifies the size of the range.

 For example, if the base register holds 300040 and the limit register is 120900, then the program can legally access all addresses from 300040 through 420939 (inclusive).
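
A minimal C sketch of the legality check this pair of registers implies, using the base and limit values from the example (in a real system the comparison is performed by CPU hardware on every user-mode access, not by software):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Base/limit protection check: a user-mode address is legal only if
 * base <= addr < base + limit; anything else traps to the OS. */
static bool address_is_legal(uint32_t addr, uint32_t base, uint32_t limit)
{
    return addr >= base && addr < base + limit;
}

int main(void)
{
    uint32_t base = 300040, limit = 120900;   /* values from the slide */

    printf("300040 legal? %d\n", address_is_legal(300040, base, limit)); /* 1 */
    printf("420939 legal? %d\n", address_is_legal(420939, base, limit)); /* 1: last legal address */
    printf("420940 legal? %d\n", address_is_legal(420940, base, limit)); /* 0: would trap */
    return 0;
}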

Hardware Address Protection
 CPU must check every memory access (address) generated in user mode to
be sure it is between base and limit for that user
 Any attempt by a program executing in user mode to access operating-system
memory or other users’ memory results in a trap to the operating system,
which treats the attempt as a fatal error.
 This scheme prevents a user program from (accidentally or deliberately)
modifying the code or data structures of either the operating system or other
users.
 The instructions that load the base and limit registers are privileged

Address Binding
 Usually, a program resides on a disk as a binary executable file.
 To run, the program must be brought into memory and placed within the
context of a process, where it becomes eligible for execution on an
available CPU.
 As the process executes, it accesses instructions and data from memory.
 Eventually, the process terminates, and its memory is reclaimed for use by
other processes.

 Most systems allow a user process to reside in any part of the physical memory. Thus, although the address space of the computer may start at 00000, the first address of the user process need not be 00000.
 Addresses represented in different ways at different stages of a
program’s life
• Source code addresses usually symbolic (e.g., count, min, ID)
• Compiled code addresses bind to relocatable addresses
 i.e., “14 bytes from beginning of this module”
• Linker or loader will bind relocatable addresses to absolute
addresses
 i.e., 74014
Figure 9.3: Multistep Processing of a User
Program

Binding of Instructions and Data to
Memory
 Address binding of instructions and data to memory addresses
can happen at three different stages:

• Compile time: If the memory location of a process is known a priori, then absolute code can be generated. However, we must recompile the code if the starting location changes.

• Load time: The compiler must generate relocatable code if the memory location is not known at compile time. In this case, final binding is delayed until load time. If the starting address changes, we need only reload the user code to incorporate this changed value.

• Execution time: Binding must be delayed until run time if the process can be moved during its execution from one memory segment to another.
 This scheme needs hardware support for address maps (e.g., base and limit registers).
 Most operating systems use this method.
Logical vs. Physical Address
Space
 The concept of a logical address space that is bound to a separate
physical address space is central to proper memory management:
• Logical address – generated by the CPU (value in the range 0 to
max); also referred to as virtual address.
• Physical address – address seen by the memory unit (in the range R
+ 0 to R + max for a base value R)

• Logical and physical addresses are the same in compile-time and load-time address-binding schemes;
• whereas logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
 Logical address space is the set of all logical addresses generated by a program.
 Physical address space is the set of all physical addresses corresponding to these logical addresses.

Memory-Management Unit
(MMU)
 The run-time mapping from virtual to physical addresses is done by
a hardware device called the memory-management unit (MMU)
(Figure 9.4).

 We can choose from many different methods to accomplish such mapping, as we discuss in Sections 9.2 through 9.3.

Memory-Management Unit
(Cont.)
 Consider a simple scheme which is a generalization of the base-
register scheme.
 The base register now is called a relocation register
 The value in the relocation register is added to every address
generated by a user process at the time it is sent to memory
 The user program deals with logical addresses; it never sees the real
physical addresses
• Execution-time binding occurs when reference is made to location
in memory
• Logical address is bound to physical addresses
• For example, if the base is at 14000, then an attempt by the user to address location 0 is dynamically relocated to location 14000;
• an access to location 346 is mapped to location 14346.

Figure 9.5: Dynamic relocation using a relocation register.
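
A minimal C sketch of this dynamic relocation, using the base value 14000 from the example (again, the addition is done by the MMU hardware, not by software):

#include <stdint.h>
#include <stdio.h>

/* Dynamic relocation: the value in the relocation (base) register is added
 * to every logical address before it is sent to memory. */
static uint32_t relocate(uint32_t logical, uint32_t relocation_register)
{
    return logical + relocation_register;
}

int main(void)
{
    uint32_t relocation_register = 14000;     /* base value from the slide */

    printf("logical 0   -> physical %u\n", relocate(0, relocation_register));   /* 14000 */
    printf("logical 346 -> physical %u\n", relocate(346, relocation_register)); /* 14346 */
    return 0;
}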
Dynamic Loading
 So far, the entire program has had to be in memory for it to execute; the size of a process has thus been limited to the size of physical memory.
 To obtain better memory-space utilization, we can use dynamic
loading. With dynamic loading, a routine is not loaded until it is
called.
 All routines are kept on disk in relocatable load format.

 Advantages:
1. Better memory-space utilization since unused routine is never
loaded
2. Useful when large amounts of code are needed to handle
infrequently occurring cases such as error routines.
3. No special support from the operating system is required
 Implemented through program design
 OS can help the programmer by providing libraries to
implement dynamic loading
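
One concrete form of such library support is the POSIX dlopen interface; a minimal sketch follows, assuming a Linux system where the C math library is installed as libm.so.6 (link with -ldl on older glibc):

#include <dlfcn.h>
#include <stdio.h>

/* Dynamic loading sketch: the cos() routine is not mapped into the process
 * until dlopen() is called at run time. */
int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* load the library on demand */
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));      /* prints 1.000000 */

    dlclose(handle);                                  /* library may be unmapped again */
    return 0;
}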

Dynamic Linking
 Static linking – system libraries and program code are combined by
the loader into the binary program image
 Dynamic linking – linking is postponed until execution time
 Dynamically linked libraries (DLLs) are system libraries that are linked to user programs at run time.

Advantages of Dynamic Linking

 Dynamic linking is particularly useful for libraries.


 (+) Without DLLs, each program on a system must include a copy of its language library in the executable image. This requirement not only increases the size of an executable image but also may waste main memory.
 With DLLs, these libraries can be shared among multiple processes, so that there is only one instance of the DLL in main memory. For this reason, DLLs are also known as shared libraries; they are used extensively in Windows and Linux systems.

 (+) Dynamic linking can be extended to library updates (such as bug fixes): a library may be replaced by a new version, and all programs that reference the library will automatically use the new version.

 (-) Unlike dynamic loading, dynamic linking and shared libraries generally require help from the operating system.

Contiguous Memory Allocation
 Main memory must support both the OS and user processes. Even though it is a limited resource, it must be allocated efficiently.
 Contiguous allocation is an early method. Here, the main memory is
usually divided into 2 partitions:
1. resident operating system, usually held in high memory with
interrupt vector
2. user processes held in low memory
• each process is contained in a single section of memory that is
contiguous to the section containing the next process.

Contiguous Allocation (Cont.)
 Relocation registers are used to protect user processes from each
other, and from changing operating-system code and data
• Base register contains value of smallest physical address
• Limit register contains range of logical addresses – each logical address
must be less than the limit register
 (for example, relocation = 100040 and limit = 74600).
• MMU maps logical address dynamically by adding the value in the
relocation register. This mapped address is sent to memory
• When the CPU scheduler selects a process for execution, the dispatcher
loads the relocation and limit registers with the correct values as part of
the context switch. Because every address generated by a CPU is checked
against these registers, we can protect both the operating system and the
other users’ programs and data from being modified by this running
process.
• This scheme also allows the kernel to change size dynamically; for example, transient kernel code (such as device drivers) can be loaded and removed as needed.

Figure 9.6 Hardware Support for Relocation and Limit
Registers

Variable Partition
 Multiple-partition allocation
• One of the simplest methods of allocating memory is to assign
processes to variably sized partitions in memory, where each
partition may contain exactly one process
• Degree of multiprogramming limited by number of partitions
• Variable-partition sizes for efficiency (sized to a given process’
needs)
• Hole – block of available memory; holes of various size are scattered
throughout memory
• Operating system maintains information (table) about:
a) allocated partitions b) free partitions (hole)
• When a process arrives, it is allocated memory from a hole large
enough to accommodate it. Process exiting frees its partition,
adjacent free partitions combined

Variable Partition
• The memory blocks available comprise a set of holes of various sizes
scattered throughout memory.

• When a process arrives and needs memory, the system searches the set
for a hole that is large enough for this process.

• If the hole is too large, it is split into two parts. One part is allocated to
the arriving process; the other is returned to the set of holes.

• When a process terminates, it releases its block of memory, which is then placed back in the set of holes.

• If the new hole is adjacent to other holes, these adjacent holes are
merged to form one larger hole.
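
A minimal C sketch of this merging (coalescing) step, using a hole list kept sorted by starting address; the addresses and sizes are invented example values:

#include <stdio.h>

/* Return a freed block to the hole list, merging it with the hole before
 * and/or after it when they are adjacent. */
#define MAXHOLES 16

static unsigned hole_start[MAXHOLES] = { 0, 400 };
static unsigned hole_size[MAXHOLES]  = { 100, 200 };
static int nholes = 2;

static void release(unsigned s, unsigned len)
{
    int i = 0;
    while (i < nholes && hole_start[i] < s)            /* find insertion point */
        i++;

    if (i > 0 && hole_start[i - 1] + hole_size[i - 1] == s) {
        hole_size[i - 1] += len;                       /* merge with preceding hole */
        if (i < nholes && hole_start[i - 1] + hole_size[i - 1] == hole_start[i]) {
            hole_size[i - 1] += hole_size[i];          /* it now also touches the next hole */
            for (int j = i; j < nholes - 1; j++) {     /* remove hole i from the list */
                hole_start[j] = hole_start[j + 1];
                hole_size[j]  = hole_size[j + 1];
            }
            nholes--;
        }
        return;
    }

    if (i < nholes && s + len == hole_start[i]) {      /* merge with following hole */
        hole_start[i] = s;
        hole_size[i] += len;
        return;
    }

    for (int j = nholes; j > i; j--) {                 /* otherwise insert a new hole */
        hole_start[j] = hole_start[j - 1];
        hole_size[j]  = hole_size[j - 1];
    }
    hole_start[i] = s;
    hole_size[i]  = len;
    nholes++;
}

int main(void)
{
    release(100, 300);                                 /* block 100..399 bridges the two holes */
    for (int i = 0; i < nholes; i++)
        printf("hole at %u, size %u\n", hole_start[i], hole_size[i]);  /* one hole: start 0, size 600 */
    return 0;
}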

Variable Partition
• What happens when there isn’t sufficient memory to
satisfy the demands of an arriving process?

1. One option is to simply reject the process and provide an appropriate error message.

2. Alternatively, we can place such processes into a wait queue. When memory is later released, the operating system checks the wait queue to determine whether it can satisfy the memory demands of a waiting process.

Dynamic Storage-Allocation
Problem
• How to satisfy a request of size n from a list of free holes?

 First-fit: Allocate the first hole that is big enough
 Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless ordered by size
   Produces the smallest leftover hole
 Worst-fit: Allocate the largest hole; must also search the entire list
   Produces the largest leftover hole

• (+) First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
• (+) First-fit and best-fit are about equal in terms of storage utilization, but first-fit is generally faster.
• (-) Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation.
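
A minimal C sketch of the first-fit strategy, allocating from a list of free holes and splitting an oversized hole; the hole sizes and requests (in KB) are invented example values:

#include <stddef.h>
#include <stdio.h>

/* First-fit: scan the hole list in order and take the first hole that is
 * big enough, splitting off the unused remainder as a smaller hole. */
#define NHOLES 5

static size_t hole_size[NHOLES]  = { 100, 500, 200, 300, 600 };
static size_t hole_start[NHOLES] = { 0, 150, 700, 950, 1300 };

static long first_fit(size_t request)
{
    for (int i = 0; i < NHOLES; i++) {
        if (hole_size[i] >= request) {
            long addr = (long)hole_start[i];
            hole_start[i] += request;      /* split: remainder stays as a smaller hole */
            hole_size[i]  -= request;
            return addr;
        }
    }
    return -1;                             /* no hole is big enough: reject or queue the process */
}

int main(void)
{
    printf("212 KB -> %ld\n", first_fit(212));  /* 150: the 500 KB hole is the first big enough */
    printf("417 KB -> %ld\n", first_fit(417));  /* 1300: only the 600 KB hole fits */
    printf("112 KB -> %ld\n", first_fit(112));  /* 362: the 288 KB remainder of the 500 KB hole */
    return 0;
}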

Fragmentation
 As processes are loaded and removed from memory, the free memory
space is broken into little pieces. Memory fragmentation can be
internal as well as external.

External fragmentation – total memory space exists to satisfy a request, but it is not contiguous; storage is fragmented into a large number of small holes.

Internal fragmentation – allocated memory may be slightly larger than the requested memory; this size difference is unused memory that is internal to a partition. This happens when memory is allocated in fixed-sized blocks.

 Depending on the total amount of memory storage and the average process size, external fragmentation may be a minor or a major problem.
• Statistical analysis of first-fit reveals that, given N allocated blocks, another 0.5 N blocks will be lost to fragmentation; that is, one-third of memory may be unusable. This property is known as the 50-percent rule.

Fragmentation Solution

 Two Possible Solutions to External Fragmentation:


1. Reduce external fragmentation by compaction
• Shuffle memory contents to place all free memory together in one
large block
• (-) Compaction is expensive and is possible only if relocation is
dynamic, and is done at execution time

2. Permit the logical address space of processes to be noncontiguous, thus allowing a process to be allocated physical memory wherever such memory is available. This requires other memory-allocation techniques, such as paging.

(+/-) Variable Partitioning (Extra)
 Ref: https://www.geeksforgeeks.org/variable-or-dynamic-partitioning-in-operating-system/

 Advantages of Variable Partitioning
1. No internal fragmentation: in variable partitioning, space in main memory is allocated strictly according to the needs of the process.
2. No limitation on process size: in fixed partitioning, a process larger than the largest partition cannot be loaded, and it cannot be divided, since contiguous allocation requires it to be fully loaded into memory. In variable partitioning, process size is not restricted, because each partition is sized to fit its process.

 Disadvantages of Variable Partitioning
1. Difficult implementation: variable partitioning is harder to implement than fixed partitioning, since memory is allocated at run time rather than at system-configuration time.
2. External fragmentation.

Segmentation
 Memory segmentation – the division of a computer's primary memory into segments or sections. Each segment is of variable size.
 This memory-management scheme supports user view of memory (i.e.
the user doesn’t care how the program is actually represented in
memory, he/she views it as segments):
 A program is a collection of segments.
 A segment is a logical unit such as:

 main program
 procedure
 function
 method
 object
 local variables
 global variables
 common block
 stack
 symbol table
 arrays

Segmentation Architecture (cont.)
 Each segment has a name and a length. For simplicity of
implementation, segments are numbered and are referred to by a
segment number, rather than by a segment name
 Programmer specifies each address by two quantities: a segment
name and an offset. Thus, a logical address consists of a two
tuple:
<segment-number, offset>
 Elements within a segment are identified by their offset from the
beginning of the segment: the 1st statement of the program, the
7th stack frame entry in the stack, the 5th instruction of the Sqrt(),
and so on.
 The offset of the logical address must be between 0 and the
segment limit.

Segmentation Architecture (cont.)
 Normally, when a program is compiled, the compiler automatically
constructs segments reflecting the input program. The C compiler
might create separate segments for the following:
1. The code
2. Global variables
3. The heap, from which memory is allocated
4. The stacks used by each thread
5. The standard C library
 Libraries that are linked in during compile time might be assigned separate
segments. The loader would take all these segments and assign them
segment numbers.

 Although the programmer can now refer to objects in the program by a two-dimensional address, the actual physical memory is still, of course, a one-dimensional sequence of bytes.
 Thus, we must define an implementation to map two-dimensional user-defined addresses into one-dimensional physical addresses. This mapping is effected by a segment table.

Segmentation Architecture
(cont.)
 Segment table – maps two-dimensional user-defined addresses into one-dimensional physical addresses. Each table entry has:
 base – contains the starting physical address where the segments reside
in memory
 limit – specifies the length of the segment

Logical View of Segmentation

Using the segment-table values shown in the figure (e.g., segment 2: base 4300, limit 400; segment 3: base 3200, limit 1100; segment 0: base 1400, limit 1000):
• Byte 53 of segment 2 maps to 4300 + 53 = 4353
• Byte 852 of segment 3 maps to 3200 + 852 = 4052
• Byte 1211 of segment 0 exceeds that segment's limit, so the reference traps to the operating system
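
A minimal C sketch of this segment-table lookup, using the base/limit pairs assumed above from the textbook's example figure:

#include <stdint.h>
#include <stdio.h>

/* Segment-table translation: <s, d> is legal only if d < limit[s];
 * it then maps to base[s] + d. */
struct segment { uint32_t base, limit; };

static const struct segment seg_table[] = {
    { 1400, 1000 },   /* segment 0 */
    { 6300,  400 },   /* segment 1 */
    { 4300,  400 },   /* segment 2 */
    { 3200, 1100 },   /* segment 3 */
    { 4700, 1000 },   /* segment 4 */
};

static long translate(unsigned s, uint32_t offset)
{
    if (offset >= seg_table[s].limit)
        return -1;                         /* addressing error: trap to the operating system */
    return (long)(seg_table[s].base + offset);
}

int main(void)
{
    printf("segment 2, byte 53   -> %ld\n", translate(2, 53));    /* 4353 */
    printf("segment 3, byte 852  -> %ld\n", translate(3, 852));   /* 4052 */
    printf("segment 0, byte 1211 -> %ld\n", translate(0, 1211));  /* -1: trap */
    return 0;
}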

(+/-) Segmentation
 Ref: https://www.geeksforgeeks.org/segmentation-in-operating-system/?ref=lbp

 Advantages of Segmentation –
1. No Internal fragmentation.
2. Segment Table consumes less space in comparison to Page
table in paging.

 Disadvantage of Segmentation –
External fragmentation, and hence need for compaction

Paging
 Paging is used in most operating systems. It is implemented through cooperation
between the operating system and the computer hardware.

 Physical address space of a process can be noncontiguous; a process is allocated physical memory whenever the latter is available
1. Avoids external fragmentation
2. Avoids problem of varying sized memory chunks

 Still have Internal fragmentation

Paging Method
 Method:
• Divide physical memory into fixed-sized blocks called frames
 Size is power of 2, between 512 bytes and 16 Mbytes
• Divide logical memory into blocks of same size called pages
• When a process is to be executed, its pages are loaded into any available memory frames from
their source (a file system).
• Keep track of all free frames
• To run a program of size N pages, need to find N free frames and load program
• Set up a page table to translate logical to physical addresses

Paging Method
 Every address generated by CPU (logical address) is divided into:
• Page number (p) – used as an index into a page table which contains base address
of each page in physical memory
• Page offset (d) – combined with base address to define the physical memory
address
 page number (p) | page offset (d)
    m − n bits   |     n bits

 Physical address in paging is divided into:


• frame number (f)
• offset (d)

Logical address vs. Physical Address in Paging

Figure 9.8 Paging Hardware
• The following outlines the steps taken by the MMU to translate a logical address
generated by the CPU to a physical address:
1. Extract the page number p and use it as an index into the page table.
2. Extract the corresponding frame number f from the page table.
3. Replace the page number p in the logical address with the frame number f .

• As the offset d does not change, it is not replaced, and the frame number and offset now
comprise the physical address.

Fig 9.9 Paging Model of Logical and Physical
Memory

Free Frames

A new process arrives that needs 4 pages. If there are 4 free frames available in memory, they will be allocated to this process (the figure shows the free-frame list before and after allocation).

Address Translation Scheme

 For a given logical address space of 2^m bytes and a page size of 2^n bytes, the high-order m − n bits of a logical address designate the page number, and the n low-order bits designate the page offset. Thus, the logical address is as follows:

 page number (p) | page offset (d)
    m − n bits   |     n bits

where p is an index into the page table and d is the displacement within the page.
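
A minimal C sketch of this split using a shift and a mask; the 4 KB page size (n = 12) and the sample address are illustrative assumptions:

#include <stdint.h>
#include <stdio.h>

/* Split an m-bit logical address into its page number (high-order m-n bits)
 * and page offset (low-order n bits), where the page size is 2^n bytes. */
static void split(uint32_t logical, unsigned n, uint32_t *p, uint32_t *d)
{
    *p = logical >> n;                 /* page number */
    *d = logical & ((1u << n) - 1);    /* page offset */
}

int main(void)
{
    uint32_t p, d;

    split(0x3204, 12, &p, &d);         /* 4 KB pages: page 3, offset 0x204 */
    printf("page %u, offset 0x%x\n", p, d);
    return 0;
}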

Paging Example

Logical address: n = 2 and m = 4. Using a page size of 4 bytes and a physical memory of 32 bytes (8 frames, enough for 8 pages).

Logical memory (16 bytes = 4 pages, holding the data A–P):
• Page 0: A B C D (logical addresses 0–3)
• Page 1: E F G H (logical addresses 4–7)
• Page 2: I J K L (logical addresses 8–11)
• Page 3: M N O P (logical addresses 12–15)

Page table:
• page 0 -> frame 5
• page 1 -> frame 6
• page 2 -> frame 1
• page 3 -> frame 2

Formula in paging: physical address = frame number × page size + offset

• Logical address 0 (0000) is page 0, offset 0; page 0 is in frame 5, so the physical address is 5 × 4 + 0 = 20
• Logical address 3 (0011) is page 0, offset 3, giving physical address 5 × 4 + 3 = 23
• Logical address 4 (0100) is page 1, offset 0; page 1 is in frame 6, giving physical address 6 × 4 + 0 = 24
• Logical address 13 (1101) is page 3, offset 1; page 3 is in frame 2, giving physical address 2 × 4 + 1 = 9
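
A minimal C sketch that reproduces the translations above (n = 2, 4-byte pages, page table mapping pages 0–3 to frames 5, 6, 1, 2):

#include <stdint.h>
#include <stdio.h>

/* Paging translation for the example: page size 4 bytes (n = 2),
 * logical addresses are m = 4 bits wide. */
static const uint32_t page_table[4] = { 5, 6, 1, 2 };

static uint32_t translate(uint32_t logical)
{
    uint32_t p = logical >> 2;         /* high-order m - n = 2 bits: page number */
    uint32_t d = logical & 0x3;        /* low-order n = 2 bits: page offset */
    return page_table[p] * 4 + d;      /* frame number * page size + offset */
}

int main(void)
{
    printf("logical  0 -> physical %u\n", translate(0));   /* 5*4 + 0 = 20 */
    printf("logical  3 -> physical %u\n", translate(3));   /* 5*4 + 3 = 23 */
    printf("logical  4 -> physical %u\n", translate(4));   /* 6*4 + 0 = 24 */
    printf("logical 13 -> physical %u\n", translate(13));  /* 2*4 + 1 =  9 */
    return 0;
}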
Segmentation vs. Paging
• Fragmentation: paging allows internal fragmentation but no external fragmentation; segmentation allows external fragmentation but no internal fragmentation.
• Division of the program: in paging, the program (process) is divided into equal-sized pages; in segmentation, it is divided into unequal-sized segments.
• Address calculation: in paging, the absolute (physical) address is calculated from the page number and the offset; in segmentation, it is calculated from the segment number and the offset.
• Tables: paging requires a Page Map Table (PMT); segmentation requires a Segment Map Table (SMT), which consumes less space than the page table in paging.

• Recommended short video on memory-management schemes: https://www.youtube.com/watch?v=qdkxXygc3rE&list=PLmbPuZ0NsyGS8ef6zaHd2qYylzsHxL63x&index=2

End of Chapter 9
