
Chapter Five
Memory Management

Operating System
1
Memory Management

• Introduction
• Logical versus Physical Address Space
• Swapping
• Memory Management Schemes
• Memory Partitioning
• Contiguous Allocation
• Paging
• Segmentation

2
Introduction

• A program must be brought into memory and placed within a process for it to be executed.


• Input queue – collection of processes on the disk that are waiting to
be brought into memory for execution.
• User programs go through several steps before being executed.

3
Binding of Instructions and Data
to Memory
Address binding of instructions and data to memory addresses can
happen at three different stages.
Compile-time Address Binding :
If the compiler is responsible for performing address binding then it is
called compile-time address binding.
It will be done before loading the program into memory.
The compiler interacts with the OS memory manager to perform
compile-time address binding.
Load-time Address Binding :
It will be done after loading the program into memory.
This type of address binding is done by the OS memory manager,
i.e. the loader.
Execution-time (dynamic) Address Binding :
This type of address binding is done by the processor at the time
of program execution.

4
Memory-Management Unit
(MMU)
• Part of the operating system that manages memory is called the
Memory Manager (MM).
 Keeping track of which part of memory is in use
 Allocating and de-allocating memory to processes.
 Managing swapping between memory and disk
• A good memory manager has the following attributes
 It allows all processes to run.
 Its memory (space used for the management activity) overhead
must be reasonable.
 Its time overhead (time required for the management activity) is
reasonable.

5
Logical vs. Physical Address
Space
 Logical address – generated by the CPU; also referred to as
virtual address.
 Physical address – address seen by the memory unit.
• Logical and physical addresses are the same in compile-time and
load-time address-binding schemes; logical (virtual) and physical
addresses differ in execution-time address-binding scheme.
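• As a rough illustration (not from the slide), the C sketch below shows how an
execution-time scheme with a relocation (base) register and a limit register
could map a logical address to a physical one; the register values are
made-up example numbers.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical relocation-register MMU: every logical address is first
 * checked against the limit register, then shifted by the base register. */
struct mmu {
    unsigned base;   /* physical start of the process's partition   */
    unsigned limit;  /* size of the process's logical address space */
};

static unsigned translate(const struct mmu *m, unsigned logical)
{
    if (logical >= m->limit) {                 /* protection check */
        fprintf(stderr, "trap: addressing error (%u)\n", logical);
        exit(EXIT_FAILURE);
    }
    return m->base + logical;                  /* relocation */
}

int main(void)
{
    struct mmu m = { .base = 14000, .limit = 3000 };  /* example values */
    printf("logical 346 -> physical %u\n", translate(&m, 346)); /* 14346 */
    return 0;
}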

6
Swapping

• A process can be swapped temporarily out of memory to a backing
store, and then brought back into memory for continued execution.
• Backing store – fast disk large enough to accommodate copies of all
memory images for all users; must provide direct access to these
memory images.
• Roll out, roll in – swapping variant used for priority-based scheduling
algorithms; lower-priority process is swapped out so higher-priority
process can be loaded and executed.
• Major part of swap time is transfer time; total transfer time is directly
proportional to the amount of memory swapped.
• Modified versions of swapping are found on many systems, e.g., UNIX
and Microsoft Windows.

7
Schematic View of Swapping

8
Memory Management Schemes

• Uni-programming: Only one process will be in the memory in
addition to the operating system.
• Multiprogramming: It allows multiple programs to run in parallel.
Divides the memory among concurrently running processes and the
operating system.

9
Address Spaces

• Two problems have to be solved to allow multiple applications to be
in memory at the same time without interfering with each other:
 Protection
 Relocation
• A better solution to allow multiple programs to be in memory at the
same time without interfering with each other: address space.
• An address space is the set of addresses that a process can use to
address memory.
• Each process has its own address space.
• Difficulty:
 How to give each program its own address space?

10
Memory Partitioning

• In order to run multiple programs in parallel, memory must be
partitioned.
• There are two partitioning options – fixed partitioning and
Dynamic partitioning
 Fixed partitioning – Partitioning is done before processes come
to memory.
 Dynamic partitioning – Partitioning is done when processes
request memory space.

11
Fixed partitioning

• It is the simplest and oldest partitioning technique.


• It divides the memory into a fixed number of equal- or unequal-sized
partitions.
• The partitioning can be done by the OS or the operator when the
system is started.
• Variations of fixed partitioning
 Equal sized fixed partitions.
 Unequal sized fixed partitions.
• Problems:
 A program may be too big to fit into a partition.
 Inefficient use of memory due to internal fragmentation (wasted
space internal to a partition because the process loaded into it is
smaller than the partition).
12
Dynamic partitioning

• Partitions are created dynamically during the allocation of memory.


• The size and number of partitions vary throughout the operation
of the system.
• Processes will be allocated exactly as much memory as they require.
• Dynamic partitioning leads to a situation in which there are a lot of
small useless holes in memory. This phenomenon is called external
fragmentation.
• Solution to external fragmentation:
 Compaction – Combining the many small holes into one big hole.
 The problem with memory compaction is that it requires a lot of CPU
time.

13
Memory Partitioning (Cont.)

• A process has three major sections - stack, data and code.


• It is expected that most processes will grow as they run, so it is a
good idea to allocate a little extra memory whenever a process is
loaded into the memory.
• With fixed partitions, the space given to a process is usually larger than
it asked for, and hence the process can expand if it needs to.
• With dynamic partitions, since a process is allocated exactly as much
memory as it requires, dynamic growth creates an overhead of moving
and swapping processes.
• As shown in the following diagram the extra space is located between
the stack and data segments because the code segment doesn’t
grow.

14
Memory Partitioning (Cont.)

• If a process runs out of the extra space allocated to it, one of the
following must happen:
 It is moved to a hole with enough space.
 It is swapped out of memory until a large enough hole can be
created.
 It is killed.

15
Managing Free Memory

• When memory is assigned dynamically, the operating system must
manage it.
• In general terms, there are two ways to keep track of memory usage:
 Bitmaps
 Linked lists

16
Memory Management with
Bitmaps
• The memory is divided into fixed size allocation units.
• Each allocation unit is represented with a bit in the bit map. If the bit is 0, it
means the allocation unit is free and if it is 1, it means it is occupied.
• The following figure shows part of memory and the corresponding bitmap.
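• As a rough sketch (not part of the original slide), the C fragment below shows
how such a bitmap could be maintained; the unit count and helper names are
invented, and allocating means finding a run of n consecutive 0 bits, which is
why searching a bitmap can be slow.

/* Hypothetical bitmap allocator: memory is split into UNITS fixed-size
 * allocation units; bit i is 1 when unit i is in use and 0 when it is free. */
#define UNITS 1024
static unsigned char bitmap[UNITS / 8];

static int  unit_in_use(int i) { return bitmap[i / 8] &  (1 << (i % 8)); }
static void set_unit(int i)    {        bitmap[i / 8] |=  (1 << (i % 8)); }
static void clear_unit(int i)  {        bitmap[i / 8] &= ~(1 << (i % 8)); }

/* Allocation means finding a run of n consecutive free units;
 * returns the index of the first unit, or -1 if no such run exists. */
int alloc_units(int n)
{
    int run = 0;
    for (int i = 0; i < UNITS; i++) {
        run = unit_in_use(i) ? 0 : run + 1;
        if (run == n) {
            int start = i - n + 1;
            for (int j = start; j <= i; j++)
                set_unit(j);
            return start;
        }
    }
    return -1;
}

void free_units(int start, int n)
{
    for (int j = start; j < start + n; j++)
        clear_unit(j);
}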

17
Memory Management with
Linked Lists
• A linked list of allocated and free memory segments is used.
• Each segment is either a process or a hole between two processes
and contains a number of allocation units.
• Each entry in the list consists of
 Segment type: P/H (1 bit)
 The address at which it starts
 The length of the segment
 A pointer to the next entry
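• A minimal C rendering of such a list entry (the type and field names are
illustrative only, not from the slide):

typedef enum { PROCESS, HOLE } seg_type_t;    /* P/H flag */

typedef struct segment {
    seg_type_t      type;    /* process or hole                  */
    unsigned        start;   /* address at which the segment starts */
    unsigned        length;  /* length of the segment            */
    struct segment *next;    /* pointer to the next entry        */
} segment_t;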

18
How Do You Allocate Memory to
New Processes?
How to satisfy a request of size n from a list of free holes?
• First-fit: Allocate the first hole that is big enough.
• Best-fit: Allocate the smallest hole that is big enough; must search
entire list, unless ordered by size. Produces the smallest leftover
hole.
• Worst-fit: Allocate the largest hole; must also search entire list.
Produces the largest leftover hole.
• First-fit and best-fit are better than worst-fit in terms of speed and
storage utilization.
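• The three strategies differ only in how the list of free holes is scanned.
A minimal C sketch (hole_t and sizes in allocation units are assumptions,
not from the slides):

#include <stddef.h>

/* Hypothetical free-hole descriptor; 'size' is in allocation units. */
typedef struct hole {
    unsigned     start;
    unsigned     size;
    struct hole *next;
} hole_t;

/* First-fit: take the first hole that is big enough. */
hole_t *first_fit(hole_t *list, unsigned n)
{
    for (hole_t *h = list; h != NULL; h = h->next)
        if (h->size >= n) return h;
    return NULL;
}

/* Best-fit: scan the whole list and take the smallest adequate hole. */
hole_t *best_fit(hole_t *list, unsigned n)
{
    hole_t *best = NULL;
    for (hole_t *h = list; h != NULL; h = h->next)
        if (h->size >= n && (best == NULL || h->size < best->size))
            best = h;
    return best;
}

/* Worst-fit: scan the whole list and take the largest adequate hole. */
hole_t *worst_fit(hole_t *list, unsigned n)
{
    hole_t *worst = NULL;
    for (hole_t *h = list; h != NULL; h = h->next)
        if (h->size >= n && (worst == NULL || h->size > worst->size))
            worst = h;
    return worst;
}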

19
VIRTUAL MEMORY

• Hide the real memory from the user.


• A process may be larger than the available main memory.
• Swapping is not an attractive option, because it takes seconds to swap
out a 1-GB program and the same to swap in a 1-GB program.
• A solution adopted in the 1960s was to split programs into little
pieces, called overlays.
 The programmer will divide the program into modules and the
main program is responsible for switching the modules in and
out of memory as needed.
 The programmer must know how much memory is available, and it
wastes the programmer's time.

20
VIRTUAL MEMORY (Cont.)

• In virtual memory, the OS keeps those parts of the program currently
in use in main memory and the rest on the disk.
• If a process encounters a logical address that is not in main memory,
it generates an interrupt indicating a memory access fault, and the OS
puts the process in the blocked state until it has brought in the piece
that contains the logical address that caused the fault.
• Program generated addresses are called virtual addresses and the
set of such addresses form the virtual address space.
• The virtual address will go to the Memory Management Unit (MMU)
that maps the virtual addresses into physical addresses.
• There are two virtual memory implementation techniques
Paging
Segmentation

21
Paging

• Paging is a memory management scheme by which a computer can
store and retrieve data from secondary storage for use in main memory.
• The logical address space of a process can be noncontiguous; the
process is allocated physical memory whenever that memory is available
and the program needs it.
• Divide physical memory into fixed-sized blocks called frames (size is power
of 2, between 512 bytes and 8192 bytes).
• Divide logical memory into blocks of same size called pages.
• Keep track of all free frames.
• To run a program of size n pages, need to find n free frames and load
program.
• Set up a page table to translate logical to physical addresses.
• Internal fragmentation can occur in a process's last page.

22
Paging Example

23
Address Translation Scheme

• Address generated by CPU is divided into:


 Page number (p) – used as an index into a page table which
contains base address of each page in physical memory.
 Page offset (d) – combined with base address to define the
physical memory address that is sent to the memory unit.
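• For example, assuming 4 KB pages (a made-up size within the range above),
the translation can be sketched in C as:

#include <stdio.h>

/* Assumed 4 KB pages: the low 12 bits of an address are the offset (d),
 * the remaining high bits are the page number (p). */
#define PAGE_SIZE   4096u
#define OFFSET_BITS 12

static unsigned translate(unsigned logical, const unsigned *page_table)
{
    unsigned p     = logical >> OFFSET_BITS;      /* page number   */
    unsigned d     = logical & (PAGE_SIZE - 1);   /* page offset   */
    unsigned frame = page_table[p];               /* frame number  */
    return (frame << OFFSET_BITS) | d;            /* physical address */
}

int main(void)
{
    unsigned page_table[] = { 5, 6, 1, 2 };       /* page -> frame (example) */
    /* logical address 4100 = page 1, offset 4 -> frame 6, offset 4 = 24580 */
    printf("physical = %u\n", translate(4100, page_table));
    return 0;
}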

24
Address Translation
Architecture

25
Demand Paging

• Bring a page into memory only when it is needed.


 Less I/O needed
 Less memory needed
 Faster response
 More users

26
Page Replacement Algorithms

• When a page fault occurs, the operating system has to choose a page
to evict (remove from memory) to make room for the incoming page.
• While it would be possible to pick a random page to evict at each
page fault, system performance is much better if a page that is not
heavily used is chosen.
• If a heavily used page is removed, it will probably have to be brought
back in quickly, resulting in extra overhead.

27
First-In-First-Out (FIFO)
Algorithm
• FIFO page replacement algorithm selects the page that has been in
memory the longest.
• Example: FIFO
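• A minimal C simulation of FIFO replacement (a sketch, not the slide's own
worked example), using the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
from the LRU slide and an assumed 3 frames:

#include <stdio.h>

/* 'frames' holds the resident pages; 'next' points at the oldest frame,
 * which is the one replaced on a page fault. */
#define NFRAMES 3

int main(void)
{
    int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
    int nrefs = sizeof refs / sizeof refs[0];
    int frames[NFRAMES] = { -1, -1, -1 };
    int next = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int f = 0; f < NFRAMES; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (!hit) {                          /* page fault: evict the oldest page */
            frames[next] = refs[i];
            next = (next + 1) % NFRAMES;
            faults++;
        }
    }
    printf("FIFO faults: %d\n", faults);     /* 9 faults with 3 frames */
    return 0;
}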

28
The Optimal Page Replacement
Algorithm
• Replace page that will not be used for longest period of time.
• 4 frames example

• It is impossible to know in advance when each page will next be
referenced, so the algorithm is unrealizable in practice.

29
Least Recently Used (LRU)
Algorithm
• It replaces the page in memory that has not been referenced for the
longest time.
• Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
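• A minimal C simulation of LRU for this reference string (a sketch using the
timestamp/counter idea described on the next slide, with an assumed 3 frames):

#include <stdio.h>

/* 'last_used' records the time of the most recent reference to the page
 * in each frame; the frame with the smallest value holds the victim. */
#define NFRAMES 3

int main(void)
{
    int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
    int nrefs = sizeof refs / sizeof refs[0];
    int frames[NFRAMES]    = { -1, -1, -1 };
    int last_used[NFRAMES] = { 0 };
    int faults = 0;

    for (int t = 0; t < nrefs; t++) {
        int slot = -1;
        for (int f = 0; f < NFRAMES; f++)
            if (frames[f] == refs[t]) { slot = f; break; }    /* hit? */
        if (slot < 0) {                                       /* page fault */
            faults++;
            slot = 0;                                         /* pick LRU frame */
            for (int f = 1; f < NFRAMES; f++)
                if (frames[f] == -1 || last_used[f] < last_used[slot])
                    slot = f;
            frames[slot] = refs[t];
        }
        last_used[slot] = t;                                  /* "copy the clock" */
    }
    printf("LRU faults: %d\n", faults);      /* 10 faults with 3 frames */
    return 0;
}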

30
LRU Algorithm (Cont.)

• Counter implementation
 Every page entry has a counter; every time page is referenced
through this entry, copy the clock into the counter.
 When a page needs to be replaced, look at the counters to
determine which page is the least recently used.
• Stack implementation – keep a stack of page numbers in a double link
form:
 Page referenced:
 move it to the top
 requires 6 pointers to be changed
 No search for replacement

31
Thrashing

• The phenomenon in which the system spends most of its time moving
pages back and forth between memory and secondary storage instead
of doing useful work.

32
Segmentation

• Memory-management scheme that supports user view of memory.


• A program is a collection of segments. A segment is a logical unit such
as:
main program,
procedure,
function,
local variables, global variables,
common block,
stack,
symbol table, arrays

33
Logical View of Segmentation

34
Segmentation Architecture

• Segment table – maps two-dimensional logical addresses into
one-dimensional physical addresses; each table entry has:
 base – contains the starting physical address where the
segments reside in memory.
 limit – specifies the length of the segment.
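• A minimal C sketch of the lookup (the table contents are example values,
not from the slide): a logical address is the pair (segment number s,
offset d), and d is checked against the segment's limit before being added
to its base.

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    unsigned base;   /* starting physical address of the segment */
    unsigned limit;  /* length of the segment                    */
} seg_entry_t;

static unsigned translate(const seg_entry_t *table, unsigned s, unsigned d)
{
    if (d >= table[s].limit) {               /* protection check */
        fprintf(stderr, "trap: segment %u offset %u out of range\n", s, d);
        exit(EXIT_FAILURE);
    }
    return table[s].base + d;
}

int main(void)
{
    seg_entry_t table[] = {
        { .base = 1400, .limit = 1000 },     /* segment 0 (example) */
        { .base = 6300, .limit =  400 },     /* segment 1 (example) */
    };
    printf("(1, 53) -> %u\n", translate(table, 1, 53));   /* 6353 */
    return 0;
}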

35
Logical View of Segmentation

36
Paging Vs. Segmentation
Segmentation                                      Paging
Program is divided into variable-size segments    Program is divided into fixed-size pages
User or compiler is responsible for dividing      Division into pages is performed by the
the program into segments                         Operating System
Segmentation is slower than paging                Paging is faster than segmentation
Segmentation is visible to the user               Paging is invisible to the user
Segmentation eliminates internal fragmentation    Paging suffers from internal fragmentation
Segmentation suffers from external fragmentation  There is no external fragmentation

37
Thank you

38
