OS - Memory Management


Memory Management: Early Systems


The management of main memory is critical. Historically, the performance of the
entire system has depended directly on two things: how much memory is available
and how it is optimized while jobs are being processed.

This part of the SIM introduces the Memory Manager, which manages main memory
(also known as random access memory or RAM, core memory, or primary storage),
and four types of memory allocation schemes.
– Single-User Contiguous Scheme
Characteristics of this Algorithm:
• The program to be executed is loaded in its entirety into contiguous
memory locations.
• It is executed in its entirety, and when execution ceases the memory is
deallocated for reuse.
• Today's personal computer operating systems usually work in a way similar
to this.
• Most PC operating systems (Mac OS, Windows, Unix) are able to perform
multitasking.
• However, the mode in which people use them is often a single-task-at-a-time
mode.
• Whether the memory allocated to a task in a PC is contiguous depends upon
the allocation algorithm in use. Often the memory allocation is contiguous, if
possible.
• This mode of computing was not particularly cost effective, especially when
the system was expensive.
• Today memory is relatively inexpensive, so allocating memory to a single
task at one time is not as "expensive" as it once was.

• The mode described here does not permit multiprogramming; rather, it
preceded multiprogramming.
The first memory allocation scheme worked like this: each program to be processed
was loaded in its entirety into memory and allocated as much contiguous space in
memory as it needed, as shown in the image below. The key words here are entirety
and contiguous. If the program was too large and didn't fit the available memory
space, it couldn't be executed. And, although early computers were physically large,
they had very little memory.

Algorithm to Load a Job in a Single-User System


1 Store first memory location of program into base register (for memory
  protection)
2 Set program counter (it keeps track of memory space used by the
  program) equal to address of first memory location
3 Read first instruction of program
4 Increment program counter by number of bytes in instruction
5 Has the last instruction been reached?
  If yes, then stop loading program
  If no, then continue with step 6
6 Is program counter greater than memory size?
  If yes, then stop loading program
  If no, then continue with step 7
7 Load instruction in memory
8 Read next instruction of program
9 Go to step 4
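
To make these steps concrete, here is a minimal Python sketch of the loading loop;
the function name and the representation of a program as (instruction, size-in-bytes)
pairs are illustrative assumptions, not part of the original algorithm.

    # A minimal sketch of the single-user loading loop above; a program is
    # modeled as a list of (instruction, size_in_bytes) pairs.
    def load_job(program, memory_size):
        base_register = 0                      # step 1: first location, for protection
        program_counter = base_register        # step 2: tracks space used by the program
        memory = []
        for instruction, size in program:      # steps 3 and 8: read each instruction
            program_counter += size            # step 4: advance by instruction size
            if program_counter > memory_size:  # step 6: program too large to fit
                return None                    # stop loading; the job cannot run
            memory.append(instruction)         # step 7: load instruction in memory
        return memory                          # step 5: last instruction reached

    # Example: a three-instruction job loaded into 100 bytes of memory
    print(load_job([("LOAD A", 4), ("ADD B", 4), ("STORE C", 4)], 100))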

– Fixed Partitions
The first attempt to allow for multiprogramming used fixed partitions (also called
static partitions) within the main memory—one partition for each job. Because the
size of each partition was designated when the system was powered on, each
partition could only be reconfigured when the computer system was shut down,
reconfigured, and restarted. Thus, once the system was in operation the partition
sizes remained static. A critical factor was introduced with this scheme: protection
of the job’s memory space. Once a partition was assigned to a job, no other job
could be allowed to enter its boundaries, either accidentally or intentionally.

This mode was developed during early attempts at multiprogramming
(multitasking). When an operating system using this strategy was being
configured, a systems engineer or manager would specify fixed-size partitions
of memory. Jobs are then assigned and loaded into fixed partitions.
This system ensured against intrusion on one task by another. The partitions in
which the operating system is loaded were considered protected.
As with the previous method, tasks are still loaded in their entirety into
contiguous memory. Each job received one partition. This may waste space, since
a very small task may not need all of the memory of the partition to which it is
assigned. Wasted memory within a block is known as Internal Fragmentation. For
efficiency, the partitions and their sizes must be "tuned" to the average job mix on
the system. See figure 2.1 for an example.
This partition scheme is more flexible than the single-user scheme because it
allows several programs to be in memory at the same time. However, it still
requires that the entire program be stored contiguously and in memory from the
beginning to the end of its execution.


As each job terminates, the status of its memory partition is changed from busy to
free so an incoming job can be assigned to that partition.

Algorithm to Load a Job in a Fixed Partition


1 Determine job's requested memory size
2 If job_size > size of largest partition
  Then reject the job
    print appropriate message to operator
    go to step 1 to handle next job in line
  Else
    continue with step 3
3 Set counter to 1
4 Do while counter <= number of partitions in memory
    If job_size > memory_partition_size(counter)
      Then counter = counter + 1
    Else
      If memory_partition_status(counter) = "free"
        Then load job into memory_partition(counter)
          change memory_partition_status(counter) to "busy"
          go to step 1 to handle next job in line
        Else
          counter = counter + 1
  End do
5 No partition available at this time, put job in waiting queue
6 Go to step 1 to handle next job in line
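
A short Python sketch of this partition scan follows; the data layout (a list of
(size, status) pairs) and the names are assumptions made for illustration.

    # A sketch of the fixed-partition loader above.
    def load_fixed_partition(job_size, partitions):
        if job_size > max(size for size, status in partitions):
            return "rejected"                        # step 2: larger than largest partition
        for i, (size, status) in enumerate(partitions):  # step 4: scan every partition
            if job_size <= size and status == "free":
                partitions[i] = (size, "busy")       # load the job, mark partition busy
                return i
        return "waiting"                             # step 5: nothing free right now

    partitions = [(100, "busy"), (25, "free"), (50, "free")]
    print(load_fixed_partition(30, partitions))      # -> 2 (first free partition that fits)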

– Dynamic Partitions
With dynamic partitions, available memory is still kept in contiguous blocks,
but jobs are given only as much memory as they request when they are loaded for
processing. Although this is a significant improvement over fixed partitions
because memory isn't wasted within the partition, it doesn't entirely eliminate
the problem.

• The task is still loaded in its entirety into contiguous memory.
• Here the partition assigned is only as large as requested.
• Clearly, this makes much more efficient use of memory.
• This method helps to eliminate fragmentation as jobs are loaded initially, but
over time external fragmentation develops and memory use becomes less
efficient.
• External Fragmentation refers to the wasted memory between allocated
memory blocks.

For both fixed and dynamic memory allocation schemes, the operating system must
keep lists of each memory location noting which are free and which are busy. Then
as new jobs come into the system, the free partitions must be allocated.


These partitions may be allocated on the basis of first-fit memory allocation (the
first partition fitting the requirements) or best-fit memory allocation (the least
wasted space, i.e., the smallest partition fitting the requirements).
First-Fit Algorithm
• First-fit allocates faster, but best-fit uses memory more efficiently.
• In the first-fit algorithm the operating system keeps two lists: the free
list and the busy list.
• The operating system takes a job from the Entry Queue, looks at the
minimum partition size it will need, and then examines the Free Block List
until a large enough available block is found.
• The first one found is then assigned to the job. If no block large
enough is found, then the operating system will enqueue the job in the
Waiting Queue.
• It then takes the next job from the Entry Queue and repeats the procedure.
Example.

An example of a first-fit free scheme


Best-Fit Algorithm
• The operating system takes a job from the Entry Queue, looks at the
minimum partition size it will need, and then examines the Free Block List
until it finds the free block closest in size to the job.
• When this block is found, it is then assigned to the job.
• If no block large enough is found, then the operating system will
enqueue the job in the Waiting Queue.
• It then takes the next job from the Entry Queue and repeats the procedure.

Example.


An example of a best-fit free scheme

The algorithms for best-fit and first-fit are very different. Here’s how first-fit is
implemented:
First-Fit Algorithm
1 Set counter to 1
2 Do while counter <= number of blocks in memory
    If job_size > memory_size(counter)
      Then counter = counter + 1
    Else
      load job into memory_size(counter)
      adjust free/busy memory lists
      go to step 4
  End do
3 Put job in waiting queue
4 Go fetch next job
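
A compact Python sketch of first-fit, assuming a free list of (address, size)
blocks; returning the leftover sliver to the free list is an illustrative choice,
not dictated by the pseudocode above.

    # A sketch of first-fit over a free list of (address, size) blocks.
    def first_fit(job_size, free_list, busy_list):
        for i, (addr, size) in enumerate(free_list):
            if job_size <= size:                     # first block big enough wins
                free_list.pop(i)
                busy_list.append((addr, job_size))   # adjust free/busy memory lists
                if size > job_size:                  # keep the leftover sliver free
                    free_list.insert(i, (addr + job_size, size - job_size))
                return addr
        return None                                  # no block fits: job waits

    free_list, busy_list = [(0, 50), (80, 200)], []
    print(first_fit(100, free_list, busy_list))      # -> 80 (skips the 50-byte block)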

The algorithm for best-fit is slightly more complex because the goal is to find the
smallest memory block into which the job will fit:

Best-Fit Algorithm
1 Initialize memory_block(0) = 99999
2 Compute initial_memory_waste = memory_block(0) - job_size
3 Initialize subscript = 0
4 Set counter to 1
5 Do while counter <= number of blocks in memory
    If job_size > memory_size(counter)
      Then counter = counter + 1
    Else
      memory_waste = memory_size(counter) - job_size
      If initial_memory_waste > memory_waste
        Then subscript = counter
          initial_memory_waste = memory_waste
      counter = counter + 1
  End do
6 If subscript = 0
    Then put job in waiting queue
  Else
    load job into memory_size(subscript)
    adjust free/busy memory lists
7 Go fetch next job
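
The same free-list model gives a best-fit sketch; here float("inf") stands in for
the 99999 sentinel used above, and the names are illustrative.

    # A sketch of best-fit: scan the whole free list and remember the block
    # with the least waste, mirroring the subscript/memory_waste bookkeeping.
    def best_fit(job_size, free_list, busy_list):
        best, best_waste = None, float("inf")        # stands in for the 99999 sentinel
        for i, (addr, size) in enumerate(free_list):
            waste = size - job_size
            if 0 <= waste < best_waste:              # smallest block that still fits
                best, best_waste = i, waste
        if best is None:
            return None                              # no block fits: job waits
        addr, size = free_list.pop(best)
        busy_list.append((addr, job_size))           # adjust free/busy memory lists
        if size > job_size:
            free_list.insert(best, (addr + job_size, size - job_size))
        return addr

    free_list, busy_list = [(0, 200), (300, 110)], []
    print(best_fit(100, free_list, busy_list))       # -> 300 (only 10 bytes wasted)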

Deallocation
Until now, we've considered only the problem of how memory blocks are
allocated, but eventually there comes a time when memory space must be
released, or deallocated.

In a Fixed Partition scheme this is very straightforward: simply deallocate the
partition and declare it available.

In a Dynamic Partition scheme, deallocated partitions must be merged with any
others to which they are contiguous, and the new block sizes calculated and
entered into the Free Blocks List.

A dynamic partition system uses a more complex algorithm because the
algorithm tries to combine free areas of memory whenever possible. Therefore,
the system must be prepared for three alternative situations:
• Case 1. When the block to be deallocated is adjacent to another free block
• Case 2. When the block to be deallocated is between two free blocks
• Case 3. When the block to be deallocated is isolated from other free blocks

Algorithm to Deallocate Memory Blocks


If job_location is adjacent to one or more free blocks
  Then
    If job_location is between two free blocks
      Then merge all three blocks into one block
        memory_size(counter-1) = memory_size(counter-1) + job_size
                                 + memory_size(counter+1)
        set status of memory_size(counter+1) to null entry
      Else
        merge both blocks into one
        memory_size(counter-1) = memory_size(counter-1) + job_size
Else
  search for null entry in free memory list
  enter job_size and beginning_address in the entry slot
  set its status to "free"
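
A Python sketch of deallocation with coalescing follows. Instead of the
counter-based bookkeeping above, it simply sorts the free list by address and
merges adjacent blocks, which covers all three cases; the (address, size)
representation is an assumption for illustration.

    # Release a block and merge it with any free neighbors.
    def deallocate(job_address, job_size, free_list):
        free_list.append((job_address, job_size))
        free_list.sort()                             # order free blocks by address
        merged = [free_list[0]]
        for addr, size in free_list[1:]:
            prev_addr, prev_size = merged[-1]
            if prev_addr + prev_size == addr:        # adjacent blocks: merge into one
                merged[-1] = (prev_addr, prev_size + size)
            else:                                    # isolated block: its own entry
                merged.append((addr, size))
        return merged                                # the coalesced free list

    # Case 2: the freed block at address 50 sits between two free blocks
    print(deallocate(50, 100, [(0, 50), (150, 30)]))  # -> [(0, 180)]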

– Relocatable Dynamic Partitions


• Both the fixed and dynamic memory allocation schemes described thus far
shared some unacceptable fragmentation characteristics that had to be
resolved before the number of jobs waiting to be accepted became unwieldy.
In addition, there was a growing need to use all the slivers of memory often
left over.
• The solution to both problems was the development of relocatable dynamic
partitions. With this memory allocation scheme, the Memory Manager
relocates programs to gather together all of the empty blocks and compacts
them to make one block of memory large enough to accommodate some or all
of the jobs waiting to get in.
• The compaction of memory, sometimes referred to as garbage collection or
defragmentation, is performed by the operating system to reclaim
fragmented sections of the memory space.
• Compaction requires that every program in memory be relocated so the
programs are contiguous.
– The operating system must distinguish between addresses and data values:
o Every address must be adjusted to account for the program's
new location in memory
o Data values must be left alone

Example.


Three snapshots of memory before and after compaction with the operating system
occupying the first 10K of memory. When Job 6 arrives requiring 84K, the initial
memory layout in (a) shows external fragmentation totaling 96K of space.
Immediately after compaction (b), external fragmentation has been eliminated,
making room for Job 6 which, after loading, is shown in (c).

Memory Management: Virtual Memory


• Virtual memory – separation of user logical memory from physical memory.
– Only part of the program needs to be in memory for execution
– Logical address space can therefore be much larger than physical
address space
– Allows address spaces to be shared by several processes
– Allows for more efficient process creation
• Virtual memory can be implemented via:
– Demand paging
– Demand segmentation

In the previous lesson we looked at simple memory allocation schemes. Each one
required that the Memory Manager store the entire program in main memory in
contiguous locations; and as we pointed out, each scheme solved some problems
but created others, such as fragmentation or the overhead of relocation.

In this part of the module (SIM) we'll follow the evolution of virtual memory with four
memory allocation schemes that first remove the restriction of storing the programs
contiguously, and then eliminate the requirement that the entire program reside in
memory during its execution. These schemes are paged, demand paging,
segmented, and segmented/demand paged allocation, which form the foundation for
our current virtual memory methods.
Paged Memory Allocation


• Divides each incoming job into pages of equal size


• Works well if page size, memory block size (page frames), and size of disk
section (sector, block) are all equal
• Before executing a program, Memory Manager:
– Determines number of pages in program
– Locates enough empty page frames in main memory
– Loads all of the program’s pages into them

Paged memory allocation scheme for a job of 350 bytes. Programs that are too
long to fit on a single page are split into equal-sized pages that can be stored
in free page frames. In this example, each page frame can hold 100 bytes. Job 1
is 350 bytes long and is divided among four page frames, leaving internal
fragmentation in the last page frame.

The simplified example in the image above shows how the Memory Manager keeps track of
a program that is four pages long. To simplify the arithmetic, we've arbitrarily set the page
size at 100 bytes. Job 1 is 350 bytes long and is being readied for execution.

• Memory Manager requires three tables to keep track of the job's pages:
– Job Table (JT) contains information about
o Size of the job
o Memory location where its PMT is stored
– Page Map Table (PMT) contains information about
o Page number and its corresponding page frame memory
address (see the address-translation sketch after this list)
– Memory Map Table (MMT) contains
o Location for each page frame
o Free/busy status
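
To see how the PMT is used, here is a small Python sketch of address translation
under the 100-byte pages of the example above; the PMT contents are made up
for illustration.

    # Resolve a job's byte address through its Page Map Table.
    PAGE_SIZE = 100
    pmt = {0: 5, 1: 9, 2: 3, 3: 11}             # page number -> page frame number

    def resolve(job_address):
        page = job_address // PAGE_SIZE          # which of the job's pages
        displacement = job_address % PAGE_SIZE   # offset within that page
        frame = pmt[page]                        # PMT gives the page frame
        return frame * PAGE_SIZE + displacement  # physical memory address

    print(resolve(214))                          # page 2, offset 14 -> frame 3 -> 314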

Demand Paging
Demand paging introduced the concept of loading only a part of the program into
memory for processing. It was the first widely used scheme that removed the
restriction of having the entire job in memory from the beginning to the end of its
processing.
With demand paging, jobs are still divided into equally sized pages that initially
reside in secondary storage. When the job begins to run, its pages are brought into
memory only as they are needed.

Demand paging takes advantage of the fact that programs are written sequentially,
so that while one section, or module, is processed all of the other modules are idle.
Not all the pages are accessed at the same time, or even sequentially.

When you choose one option from the menu of an application program, the other
modules that aren't currently required (such as Help) don't need to be moved into
memory immediately.

Demand paging requires that the Page Map Table for each job keep track of each
page as it is loaded or removed from main memory. Each PMT tracks the status of
the page, whether it has been modified, whether it has been recently referenced,
and the page frame number for each page currently in main memory.

Swapping Process:
• To move in a new page, a resident page must be swapped back into
secondary storage; this involves
– Copying the resident page to the disk (if it was modified)
– Writing the new page into the empty page frame
• Requires close interaction between hardware components, software
algorithms, and policy schemes
Although demand paging is a solution to inefficient memory utilization, it is not free of
problems. When there is an excessive amount of page swapping between main
memory and secondary storage, the operation becomes inefficient. This
phenomenon is called thrashing.

Thrashing: an excessive amount of page swapping between main memory and
secondary storage.
– Operation becomes inefficient
– Caused when a page is removed from memory but is called back
shortly thereafter
– Can occur across jobs, when a large number of jobs are vying for a
relatively small number of free pages
– Can happen within a job (e.g., in loops that cross page boundaries)
Page fault: a failure to find a page in memory

Page Replacement Algorithms


In a computer operating system that uses paging for virtual memory management,
page replacement algorithms decide which memory pages to page out, sometimes
called swap out, or write to disk, when a page of memory needs to be allocated.
• The policy that selects the page to be removed is crucial to system efficiency.
Types of page replacement algorithms:
• First-in first-out (FIFO): removes the page that has been in memory for
the longest time.

Working of a FIFO algorithm for a job with four pages (A, B, C, D) as it's processed
by a system with only two available page frames.

FIFO is the simplest page replacement algorithm. In this algorithm, the
operating system keeps track of all pages in the memory in a queue, with the
oldest page at the front of the queue. When a page needs to be replaced, the
page at the front of the queue is selected for removal.
Example. Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page
frames. Find the number of page faults.

Initially all slots are empty, so when 1, 3, 0 come they are allocated to the
empty slots —> 3 page faults.
When 3 comes, it is already in memory —> 0 page faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page,
i.e., 1 —> 1 page fault.
6 comes; it is also not available in memory, so it replaces the oldest page,
i.e., 3 —> 1 page fault.
Finally, when 3 comes it is not available, so it replaces 0 —> 1 page fault.
Total: 6 page faults.
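
The same walkthrough can be replayed in a few lines of Python; the helper below
is illustrative, not from the module.

    # Replay the FIFO example above and count the page faults.
    from collections import deque

    def fifo_faults(reference_string, frame_count):
        in_memory, queue, faults = set(), deque(), 0
        for page in reference_string:
            if page not in in_memory:                # page fault
                faults += 1
                if len(in_memory) == frame_count:    # memory full: evict oldest page
                    in_memory.discard(queue.popleft())
                in_memory.add(page)
                queue.append(page)
        return faults

    print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))     # -> 6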

• Least-recently-used (LRU): removes the page that has been least recently
used; that is, it replaces the page which has not been referenced for the
longest time. This algorithm is the mirror image of the optimal page
replacement algorithm: it looks at the past instead of the future.

Example.

Number of Page Faults = 9
Number of Hits = 6

Again, a page replacement algorithm decides which page to remove (also called
swap out) when a new page needs to be loaded into the main memory. Page
replacement happens when a requested page is not present in the main memory
and the available space is not sufficient for allocation to the requested page.
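
A companion LRU sketch, run on the same reference string as the FIFO example,
shows the difference in fault counts; the timestamp bookkeeping is one simple way
to implement LRU, chosen here for illustration.

    # Evict the page whose last reference is oldest.
    def lru_faults(reference_string, frame_count):
        last_used, faults = {}, 0                    # page -> time of last reference
        for time, page in enumerate(reference_string):
            if page not in last_used:                # page fault
                faults += 1
                if len(last_used) == frame_count:    # evict least recently used page
                    victim = min(last_used, key=last_used.get)
                    del last_used[victim]
            last_used[page] = time                   # record this reference
        return faults

    print(lru_faults([1, 3, 0, 3, 5, 6, 3], 3))      # -> 5 (evicts 1, then 0)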

Segmented Memory Allocation


What problem with demand paged memory allocation gave birth to the concept of
segmented memory allocation?
Thrashing
- An excessive amount of page swapping between main memory
and secondary storage
- Caused when a page is removed from memory but is called
back shortly thereafter
- Can happen within a job (e.g., in loops that cross page
boundaries)

An example of demand paging that causes a page swap each time the loop is
executed and results in thrashing. If only a single page frame is available, this
program will have one page fault each time the loop is executed.

What is segmented memory allocation?
- Each job is divided into several segments of different sizes, one for
each module that contains pieces to perform related functions
- Main memory is no longer divided into page frames; rather, it is
allocated in a dynamic manner
- Segments are set up according to the program's structural modules
when a program is compiled or assembled

Segment Table – maps the two-dimensional logical address into a one-dimensional
physical address. Each table entry has:
• Base Address: the starting physical address where the segment
resides in memory.
• Limit: the length of the segment.

Translation of a two-dimensional logical address to a one-dimensional physical
address.


The address generated by the CPU is divided into:
• Segment number (s): the number of bits required to represent the segment.
• Segment offset (d): the number of bits required to represent the size of the
segment. (See the translation sketch below.)
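
A minimal Python sketch of this translation, with made-up segment table entries:

    # Check the offset against the limit, then add it to the base.
    segment_table = {0: (1500, 400), 1: (6300, 700)}  # s -> (base address, limit)

    def translate(s, d):
        base, limit = segment_table[s]
        if d >= limit:                     # offset falls outside the segment: trap
            raise MemoryError("segment limit violation")
        return base + d                    # one-dimensional physical address

    print(translate(1, 53))                # -> 6353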
Advantages of Segmentation –
• No Internal fragmentation.
• Segment Table consumes less space in comparison to Page table in
paging.
Disadvantage of Segmentation –
• As processes are loaded and removed from the memory, the free
memory space is broken into little pieces, causing External
fragmentation.

Segmented/Demand Page Memory Allocation


• Subdivides segments: equal-sized pages
– Smaller than most segments
– More easily manipulated than whole segments
– Segmentation’s logical benefits
– Paging’s physical benefits
• Segmentation problems removed
– Compaction, external fragmentation, secondary storage handling
• Three-dimensional addressing scheme
– Segment number, page number (within segment), and displacement
(within page)

• This scheme requires four tables (see the translation sketch after this list):


– Job Table: one for the whole system
• Every job in process
– Segment Map Table: one for each job
• Details about each segment
– Page Map Table: one for each segment
• Details about every page
– Memory Map Table: one for the whole system
• Monitors main memory allocation: page frames
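
A minimal sketch of how the three-dimensional address is resolved through these
tables; the SMT entries below point at per-segment PMTs, and all values are
illustrative.

    # Resolve (segment, page, displacement) via SMT and per-segment PMTs.
    PAGE_SIZE = 100
    smt = {0: {0: 7, 1: 2},                  # segment 0's PMT: page -> frame
           1: {0: 4}}                        # segment 1's PMT

    def translate(segment, page, displacement):
        pmt = smt[segment]                   # SMT lookup finds the segment's PMT
        frame = pmt[page]                    # PMT lookup finds the page frame
        return frame * PAGE_SIZE + displacement

    print(translate(0, 1, 25))               # segment 0, page 1 -> frame 2 -> 225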

How the Job Table, Segment Map Table, Page Map Table, and main memory
interact in a segment/paging scheme.
• Disadvantages
– Overhead: managing the tables
– Time required: referencing tables
• Associative memory
– Several registers allocated to each job
• Segment and page numbers: associated with main memory
– Page request: initiates two simultaneous searches
• Associative registers
• SMT and PMT
– Primary advantage (large associative memory)
• Increased speed
– Disadvantage
• High cost of complex hardware

Virtual Memory
• Made possible by swapping pages in/out of memory
• Program execution: only a portion of the program in memory at any given
moment
• Requires cooperation between:
– Memory Manager: tracks each page or segment
– Processor hardware: issues the interrupt and resolves the virtual
address

Comparison of the advantages and disadvantages of virtual memory with paging and
segmentation.

• Advantages
– Job size: not restricted to size of main memory
– More efficient memory use
– Unlimited amount of multiprogramming possible
– Code and data sharing allowed
– Dynamic linking of program segments facilitated
• Disadvantages
– Higher processor hardware costs
– More overhead: handling paging interrupts
– Increased software complexity: prevent thrashing

Cache Memory
• Small, high-speed intermediate memory unit
• Computer system’s performance increased

– Faster processor access compared to main memory
– Stores frequently used data and instructions
• Cache levels
– L2: connected to CPU; contains copy of bus data
– L1: pair built into CPU; stores instructions and data
• Data/instructions: move between main memory and cache
– Methods similar to paging algorithms

Comparison of (a) the traditional path used by early computers between main
memory and the CPU and (b) the path used by modern computers to connect the
main memory and the CPU via cache memory.
• Four cache memory design factors
– Cache size, block size, block replacement algorithm, and rewrite policy
• Optimal cache and replacement algorithm
– 80-90% of all requests in cache possible
• Cache hit ratio

• Average memory access time
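
One common formulation of these two measures, assuming the cache is checked
before main memory (the exact form may vary by text):

Cache hit ratio (h) = (number of requests found in the cache / total number of
requests) × 100

Average memory access time = average cache access time
+ (1 − h) × average main memory access time

For example, with a 90% hit ratio, a 20 ns cache, and a 100 ns main memory, the
average access time is 20 + (0.10 × 100) = 30 ns.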


Self-Help: You can also refer to the sources below to help you further
understand the lesson:
Suggested References:
• Stallings, William. Operating Systems: Internals and Design Principles, 7th
Edition.
• https://www.geeksforgeeks.org/operating-systems
• McHoes, Ann McIver (2014). Understanding Operating Systems. Boston,
MA: Cengage Learning.
• Marmel, Elaine (2013). Windows 8 Digital Classroom. Indianapolis, IN:
Wiley.
