OS - Memory Management
This part of the SIM introduces the Memory Manager, which manages main memory
(also known as random access memory or RAM, core memory, or primary storage),
and four types of memory allocation schemes.
– Single-User Contiguous Scheme
Characteristics of this Algorithm:
• The program to be executed is loaded in its entirety into contiguous
memory locations.
• It is executed in its entirety, and when execution ceases the memory is
deallocated for reuse.
• Today's personal computer operating systems usually work in a way similar to this.
• Most PC operating systems (Mac OS, Windows, Unix) are able to perform
multitasking.
• However, the mode in which people use them is often a single-task-at-a-time
mode.
• Whether the memory allocated to a task in a PC is contiguous depends upon
the allocation algorithm in use. Often the memory allocation is contiguous, if
possible.
• This mode of computing was not particularly cost effective, especially when
the system was expensive.
• Today memory is relatively inexpensive so that allocating memory to a single
task at one time is not as "expensive" as it once was.
College of Computing Education
3rd Floor, DPT Building
Matina Campus, Davao City
Telefax: (082)
Phone No.: (082)300-5456/305-0647 Local 116
• The mode described here does not permit multiprogramming; rather, this
mode of computing preceded multiprogramming.
The first memory allocation scheme worked like this: Each program to be processed
was loaded in its entirety into memory and allocated as much contiguous space in
memory as it needed, as shown in the image below. The key words here are entirety
and contiguous. If the program was too large and didn’t fit the available memory
space, it couldn’t be executed. And, although early computers were physically large,
they had very little memory.
Algorithm to Load a Job in a Single-User System
1 Store first memory location of program into base register (for memory protection)
2 Set program counter (it keeps track of the memory space used by the program) to the address of the first memory location
3 Read first instruction of program
4 Increment program counter by number of bytes in instruction
5 Has the last instruction been reached?
if yes, then stop loading program
if no, then continue with step 6
6 Is program counter greater than memory size?
if yes, then stop loading program
if no, then continue with step 7
7 Load instruction in memory
8 Read next instruction of program
9 Go to step 4
– Fixed Partitions
The first attempt to allow for multiprogramming used fixed partitions (also called
static partitions) within the main memory—one partition for each job. Because the
size of each partition was designated when the system was powered on, each
partition could only be reconfigured when the computer system was shut down,
reconfigured, and restarted. Thus, once the system was in operation the partition
sizes remained static. A critical factor was introduced with this scheme: protection
of the job's memory space. Once a partition was assigned to a job, no other job
could be allowed to enter its boundaries, either accidentally or intentionally.
As each job terminates, the status of its memory partition is changed from busy to
free so an incoming job can be assigned to that partition.
Algorithm to Load a Job in a Fixed Partition
1 Determine job's requested memory size
2 If job_size > size of largest partition
Then reject the job
print appropriate message to operator
go to step 1 to handle next job in line
Else
continue with step 3
3 Set counter to 1
4 Do while counter <= number of partitions in memory
If job_size > memory_partition_size(counter)
Then counter = counter + 1
Else
If memory_partition_status(counter) = "free"
Then load job into memory_partition(counter)
change memory_partition_status(counter) to "busy"
go to step 1 to handle next job in line
Else
counter = counter + 1
End do
5 No partition available at this time, put job in waiting queue
6 Go to step 1 to handle next job in line
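Fixed-partition allocation can be sketched in Python. This is a hedged sketch: the partition sizes, job names, and the `waiting_queue` list are illustrative, not from the module; a job takes the first free partition large enough, else it waits.

```python
# Sketch of fixed-partition allocation: partition sizes are static (set at
# system start) and a job gets the first free partition large enough.
# All names and sizes here are illustrative.

partitions = [                       # sizes fixed when the system starts
    {"size": 100, "status": "free"},
    {"size": 25,  "status": "free"},
    {"size": 25,  "status": "free"},
    {"size": 50,  "status": "free"},
]
waiting_queue = []

def allocate(job_name, job_size):
    """Assign the job to the first free partition that fits, else queue it."""
    for p in partitions:
        if job_size <= p["size"] and p["status"] == "free":
            p["status"] = "busy"     # protect the partition from other jobs
            p["job"] = job_name
            return p
    waiting_queue.append((job_name, job_size))   # no partition available now
    return None
```

A 30K job placed this way occupies the whole 100K partition, wasting 70K inside it; that waste within a partition is internal fragmentation, the main drawback of fixed partitions.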
– Dynamic Partitions
With dynamic partitions, available memory is still kept in contiguous blocks, but
jobs are given only as much memory as they request when they are loaded for
processing. Although this is a significant improvement over fixed partitions
because memory isn't wasted within the partition, it doesn't entirely eliminate
the problem.
• The task is still loaded in its entirety into contiguous memory,
• but here the partition assigned is only as large as requested.
• Clearly, this makes much more efficient use of memory.
• This method helps to eliminate fragmentation as jobs are loaded initially, but
over time external fragmentation develops and memory use becomes less
efficient.
• External Fragmentation refers to the wasted memory between allocated
memory blocks.
For both fixed and dynamic memory allocation schemes, the operating system must
keep lists of each memory location noting which are free and which are busy. Then
as new jobs come into the system, the free partitions must be allocated.
These partitions may be allocated on the basis of first-fit memory allocation (first
partition fitting the requirements) or best-fit memory allocation (least wasted
space, the smallest partition fitting the requirements).
First-Fit Algorithm
• First-Fit allocates faster, but Best-Fit uses memory more efficiently.
• In the First-Fit algorithm the operating system keeps two lists: the free
list and the busy list.
• The operating system takes a job from the Entry Queue, looks at the
minimum partition size it will need, and then examines the free list
until a large enough available block is found.
• The first one found is then assigned to the job. If no block large
enough is found, the operating system enqueues the job in the
Waiting Queue.
• It then takes the next job from the Entry Queue and repeats the procedure.
Example.
The algorithms for best-fit and first-fit are very different. Here’s how first-fit is
implemented:
First-Fit Algorithm
1 Set counter to 1
2 Do while counter <= number of blocks in memory
If job_size > memory_size(counter)
Then counter = counter + 1
Else
load job into memory_size(counter)
adjust free/busy memory lists
go to step 4
End do
3 Put job in waiting queue
4 Go fetch next job
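The first-fit pseudocode above can be rendered in Python. This is a sketch: the free list is simplified to `(address, size)` pairs, standing in for the `memory_size(counter)` entries and the separate free/busy lists.

```python
# Python rendering of the first-fit pseudocode: scan the free list in
# order and take the first block large enough for the job.

def first_fit(free_list, job_size):
    """Return the index of the first free block that fits, or None."""
    for i, (address, size) in enumerate(free_list):
        if job_size <= size:
            return i       # first block large enough wins, even if wasteful
    return None            # no fit: the job goes to the waiting queue

free_list = [(0, 30), (40, 15), (60, 50), (120, 20)]
print(first_fit(free_list, 45))   # -> 2: the 50-byte block at address 60
print(first_fit(free_list, 200))  # -> None: the job must wait
```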
The algorithm for best-fit is slightly more complex because the goal is to find the
smallest memory block into which the job will fit:
Best-Fit Algorithm
1 Initialize memory_block(0) = 99999
2 Compute initial_memory_waste = memory_block(0) – job_size
3 Initialize subscript = 0
4 Set counter to 1
5 Do while counter <= number of blocks in memory
If job_size > memory_size(counter)
Then counter = counter + 1
Else
memory_waste = memory_size(counter) – job_size
If initial_memory_waste > memory_waste
Then subscript = counter
initial_memory_waste = memory_waste
counter = counter + 1
End do
6 If subscript = 0
Then put job in waiting queue
Else
load job into memory_size(subscript)
adjust free/busy memory lists
7 Go fetch next job
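The best-fit pseudocode can likewise be rendered in Python, with the same simplified `(address, size)` free list assumed for the first-fit sketch: every block is examined and the one leaving the least waste is chosen.

```python
# Python rendering of the best-fit pseudocode: scan every block and keep
# the one that leaves the least wasted space after the job is placed.

def best_fit(free_list, job_size):
    """Return the index of the smallest free block that fits, or None."""
    best_index = None
    best_waste = float("inf")        # plays the role of the 99999 sentinel
    for i, (address, size) in enumerate(free_list):
        waste = size - job_size
        if 0 <= waste < best_waste:  # fits, and wastes less than best so far
            best_index, best_waste = i, waste
    return best_index

free_list = [(0, 30), (40, 15), (60, 50), (120, 20)]
print(best_fit(free_list, 18))   # -> 3: the 20-byte block wastes only 2
print(best_fit(free_list, 200))  # -> None: the job must wait
```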
Deallocation
Until now, we’ve considered only the problem of how memory blocks are
allocated, but eventually there comes a time when memory space must be
released, or deallocated.
In a Fixed Partition scheme this is very straightforward: simply deallocate the
partition and declare it available.
In a dynamic partition scheme, deallocation is more involved: the operating
system tries to combine the freed block with any adjacent free blocks. When the
freed block is not adjacent to a free block, a new entry is created in the free list:
search for null entry in free memory list
enter job_size and beginning_address in the entry slot
set its status to "free"
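A dynamic-partition deallocation routine can be sketched in Python. This is a hedged sketch, not the module's exact pseudocode: the freed block is merged with any adjacent free block where possible; otherwise it simply becomes a new free-list entry.

```python
# Sketch of dynamic-partition deallocation with merging of adjacent free
# blocks. The (address, size) free-list layout is illustrative.

def deallocate(free_list, address, size):
    """free_list: list of (address, size) free blocks; returns merged list."""
    blocks = sorted(free_list + [(address, size)])
    merged = [blocks[0]]
    for addr, sz in blocks[1:]:
        last_addr, last_sz = merged[-1]
        if last_addr + last_sz == addr:          # adjacent: join into one
            merged[-1] = (last_addr, last_sz + sz)
        else:
            merged.append((addr, sz))            # isolated: new free entry
    return merged

free = [(0, 10), (30, 5)]
print(deallocate(free, 10, 20))  # -> [(0, 35)]: all three blocks joined
```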
Example.
Three snapshots of memory before and after compaction, with the operating system
occupying the first 10K of memory. When Job 6 arrives requiring 84K, the initial
memory layout in (a) shows external fragmentation totaling 96K of space.
Immediately after compaction (b), external fragmentation has been eliminated,
making room for Job 6 which, after loading, is shown in (c).
In the previous lesson we looked at simple memory allocation schemes. Each one
required that the Memory Manager store the entire program in main memory in
contiguous locations; and as we pointed out, each scheme solved some problems
but created others, such as fragmentation or the overhead of relocation.
In this part of the module (SIM) we’ll follow the evolution of virtual memory with four
memory allocation schemes that first remove the restriction of storing the programs
contiguously, and then eliminate the requirement that the entire program reside in
memory during its execution. These schemes are paged, demand paging,
segmented, and segmented/demand paged allocation, which form the foundation for
our current virtual memory methods.
Paged Memory Allocation
The simplified example for the image above shows how the Memory Manager keeps track of
a program that is four pages long. To simplify the arithmetic, we’ve arbitrarily set the page
size at 100 bytes. Job 1 is 350 bytes long and is being readied for execution.
• Memory Manager requires three tables to keep track of the job’s pages:
– Job Table (JT) contains information about
o Size of the job
o Memory location where its PMT is stored
– Page Map Table (PMT) contains information about
o Page number and its corresponding page frame memory
address
– Memory Map Table (MMT) contains
o Location for each page frame
o Free/busy status
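The PMT lookup above can be illustrated in Python, using the module's 100-byte pages. The page-frame numbers in this PMT are hypothetical, chosen only so the arithmetic is visible; the module's figure would have its own values.

```python
# Sketch of how a Page Map Table resolves a job-relative address into an
# actual memory address. The frame numbers are illustrative.

PAGE_SIZE = 100
pmt = {0: 8, 1: 10, 2: 5, 3: 11}    # page number -> page frame

def resolve(job_address):
    page = job_address // PAGE_SIZE           # which page holds the byte
    displacement = job_address % PAGE_SIZE    # offset within that page
    frame = pmt[page]                         # PMT lookup
    return frame * PAGE_SIZE + displacement   # actual memory address

print(resolve(214))  # -> 514: page 2, displacement 14, frame 5
```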
Demand Paging
Demand paging introduced the concept of loading only a part of the program into
memory for processing. It was the first widely used scheme that removed the
restriction of having the entire job in memory from the beginning to the end of its
processing.
With demand paging, jobs are still divided into equally sized pages that initially
reside in secondary storage. When the job begins to run, its pages are brought into
memory only as they are needed.
Demand paging takes advantage of the fact that programs are written sequentially
so that while one section, or module, is processed all of the other modules are idle.
Not all the pages are accessed at the same time, or even sequentially.
When you choose one option from the menu of an application program, the other
modules that aren’t currently required (such as Help) don’t need to be moved into
memory immediately.
Swapping Process:
• To move in a new page, a resident page must be swapped back into
secondary storage; this involves:
– Copying the resident page to the disk (if it was modified)
– Writing the new page into the empty page frame
• Requires close interaction between hardware components, software
algorithms, and policy schemes
Although demand paging is a solution to inefficient memory utilization, it is not free of
problems. When there is an excessive amount of page swapping between main
memory and secondary storage, the operation becomes inefficient. This
phenomenon is called thrashing.
Working of a FIFO algorithm for a job with four pages (A, B, C, D) as it’s processed
by a system with only two available page frames.
Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with three page frames:
Initially all slots are empty, so when 1, 3, and 0 come they are allocated to the
empty slots —> 3 Page Faults.
When 3 comes, it is already in memory so —> 0 Page Faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page
slot, i.e., 1 —> 1 Page Fault.
6 comes; it is also not available in memory, so it replaces the oldest page
slot, i.e., 3 —> 1 Page Fault.
Finally, when 3 comes it is not available, so it replaces 0 —> 1 Page Fault.
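The FIFO walk-through above can be simulated in a few lines of Python (a sketch: the reference string 1, 3, 0, 3, 5, 6, 3 and the three frames match the example; the `deque` keeps the oldest resident page at the front).

```python
# Simulation of FIFO page replacement: on a fault, evict the page that has
# been resident the longest.

from collections import deque

def fifo_page_faults(references, frame_count):
    frames = deque()
    faults = 0
    for page in references:
        if page not in frames:           # page fault: page not resident
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()         # evict the oldest resident page
            frames.append(page)
    return faults

print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], 3))  # -> 6 page faults
```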
Example.
Again, a page replacement algorithm decides which page to remove (also called
swap out) when a new page needs to be loaded into main memory. Page
replacement happens when a requested page is not present in main memory
and the available space is not sufficient to allocate the requested page.
An example of demand paging is a program loop that causes a page swap each
time the loop is executed and results in thrashing. If only a single page frame is
available, this program will have one page fault each time the loop is executed.
What is segmented memory allocation?
- Each job is divided into several segments of different sizes, one for
each module that contains pieces to perform related functions
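Segmented address resolution can be sketched in Python: a Segment Map Table (SMT) holds each segment's base address and size, and an address is the pair (segment number, displacement). The SMT contents below are illustrative, not from the module.

```python
# Sketch of segmented address resolution with a bounds check. The base
# addresses and sizes are hypothetical.

smt = {
    0: {"base": 4000, "size": 350},   # e.g. a main-program segment
    1: {"base": 7000, "size": 200},   # a subroutine segment
    2: {"base": 2000, "size": 100},   # a data segment
}

def resolve(segment, displacement):
    entry = smt[segment]
    if displacement >= entry["size"]:             # protection check
        raise ValueError("displacement outside segment")
    return entry["base"] + displacement

print(resolve(1, 76))  # -> 7076: byte 76 of the subroutine segment
```

Because segments vary in size, the size field doubles as a protection boundary: a displacement past the end of the segment is rejected rather than silently reaching another job's memory.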
Segmented/Demand Paged Memory Allocation
• Combines the logical benefits of segmentation with the physical benefits of paging
• Problems removed
– Compaction, external fragmentation, secondary storage handling
• Three-dimensional addressing scheme
– Segment number, page number (within segment), and displacement
(within page)
How the Job Table, Segment Map Table, Page Map Table, and main memory
interact in a segment/paging scheme.
• Disadvantages
– Overhead: managing the tables
– Time required: referencing tables
• Associative memory
– Several registers allocated to each job
• Segment and page numbers: associated with main memory
– Page request: initiates two simultaneous searches
• Associative registers
• SMT and PMT
– Primary advantage (large associative memory)
• Increased speed
– Disadvantage
• High cost of complex hardware
Virtual Memory
• Made possible by swapping pages in/out of memory
• Program execution: only a portion of the program in memory at any given
moment
• Requires cooperation between:
– Memory Manager: tracks each page or segment
– Processor hardware: issues the interrupt and resolves the virtual
address
Comparison of the advantages and disadvantages of virtual memory with paging and
segmentation.
• Advantages
– Job size: not restricted to size of main memory
– More efficient memory use
– Unlimited amount of multiprogramming possible
– Code and data sharing allowed
– Dynamic linking of program segments facilitated
• Disadvantages
– Higher processor hardware costs
– More overhead: handling paging interrupts
– Increased software complexity: prevent thrashing
Cache Memory
• Small, high-speed intermediate memory unit
• Computer system’s performance increased
– Faster processor access compared to main memory
– Stores frequently used data and instructions
• Cache levels
– L2: connected to CPU; contains copy of bus data
– L1: pair built into CPU; stores instructions and data
• Data/instructions: move between main memory and cache
– Methods similar to paging algorithms
Comparison of (a) the traditional path used by early computers between main
memory and the CPU and (b) the path used by modern computers to connect the
main memory and the CPU via cache memory.
• Four cache memory design factors
– Cache size, block size, block replacement algorithm, and rewrite policy
• Optimal cache size and replacement algorithm
– 80-90% of all requests can be found in the cache
• Cache hit ratio: the percentage of memory requests that are found in the cache
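The cache hit ratio is a quick calculation: hits as a percentage of all memory requests. This is the standard formula; the numbers below are made up for illustration.

```python
# Cache hit ratio: requests satisfied from the cache as a percentage of
# all memory requests. The example counts are illustrative.

def hit_ratio(hits, total_requests):
    return hits / total_requests * 100

print(hit_ratio(85, 100))  # -> 85.0: within the 80-90% target range
```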
Self-Help: You can also refer to the sources below to help you further
understand the lesson:
Suggested References:
• Stallings, William. Operating Systems: Internals and Design Principles, 7th Edition.
• https://www.geeksforgeeks.org/operating-systems
• McHoes, Ann McIver (2014). Understanding Operating Systems. Boston, MA: Cengage Learning.
• Marmel, Elaine (2013). Windows 8 Digital Classroom. Indianapolis, IN: Wiley.