
CHAPTER 3

MEMORY MANAGEMENT
Memory Management Of
Operating System
List of content
3.1 Understand Memory Management
  3.1.1 Compare between resident and transient routines.
  3.1.2 Relate virtual memory and cache memory technique.
  3.1.3 Explain terms of Paged Virtual Memory
    a. Page Tables
    b. Dynamic Address Translation
    c. Paging Supervisor
3.2 Apply Virtual Memory and Cache Memory Management
  3.2.1 Illustrate a Model of Virtual Memory
    a. Demand Paging
    b. Swapping
    c. Shared Virtual Memory
  3.2.2 Discover the following related to memory management terminologies
    a. Fixed-partition memory management
    b. Dynamic memory management
    c. Segmentation
    d. Paging
  3.2.3 Apply cache operation to perform in operating system
    a. CPU cache
    b. Disk cache
    c. Web cache
Part 1
3.1 Understand Memory Management
3.1.1 Compare between resident and transient routines.
3.1.2 Relate virtual memory and cache memory technique.
3.1.3 Explain terms of Paged Virtual Memory
• a. Page Tables
• b. Dynamic Address Translation
• c. Paging Supervisor
Introduction to Main Memory
• Main memory is central to the operation of a
modern computer system.
• Main memory is a large array of words or bytes,
ranging in size from hundreds of thousands to
billions of bytes. Each word or byte has its own address.
• Most computers have a memory hierarchy, with
a small amount of very fast, expensive, volatile
cache memory, hundreds of medium speed,
medium price, volatile main memory (RAM), and
tens or hundreds of gigabytes of slow, cheap,
nonvolatile disk storage.
Cont.
• The part of operating system that manages memory
hierarchy is usually called the memory manager.
• Memory management is the functionality of an
operating system which handles or manages primary
memory.
• Memory management keeps track of each and every
memory location either it is allocated to some process
or it is free.
• It checks how much memory is to be allocated to
processes. It decides which process will get memory at
what time. It tracks whenever some memory gets freed
or unallocated and correspondingly it updates the
status.
Storage Device Hierarchy
Hierarchy of memory organization
• The memory hierarchy contains levels characterized by the
speed and cost of memory at each level.
• [Figure: memory hierarchy with cache memory at the top, primary
memory below it, and secondary and tertiary storage at the bottom.
Moving up the hierarchy, access time decreases, access speed
increases, cost per bit increases, and capacity decreases.]
• A processor may access programs and data in cache and primary
memory directly.
• The system must first move programs and data from secondary and
tertiary storage to main memory before a processor may reference them.
• Example of secondary storage: tape or disk.
Function of Memory Manager
• The memory manager's function is to keep track of which
parts of memory are in use and which parts are not in use.
• It coordinates how the memory hierarchy is used.
• The memory manager is an operating system component
concerned with the system's memory organization
scheme and memory management strategies.
• It determines how available memory space is allocated to
processes, and how to respond to changes in a process's
memory usage.
• It also interacts with special-purpose memory
management hardware to improve performance.
Memory Management Strategy
Divided into
• Fetch strategies
• Placement strategies
• Replacement strategies
a) Fetch strategies
• Determine when to move the next piece of a program or data to main
memory from secondary storage.
• Divided into two types:
  • Demand fetch strategies
    • The conventional approach.
    • The system places the next piece of program or data in main
      memory when the running program references it.
  • Anticipatory fetch strategies
    • Attempt to load a piece of program or data into memory before it
      is referenced.
b) Placement strategies
• Determine where in main memory the system should place incoming
program or data pieces.
• Include first fit, best fit and worst fit.
c) Replacement strategies
• When memory is too full to accommodate a new program, the system
must remove some (or all) of a program or data that currently resides
in memory.
• This strategy determines which piece to remove.
3.1.1 Comparison between resident and transient routines.

RESIDENT ROUTINES
• Routines that directly support application programs as they run; they remain in main memory.
• Example: routines that control physical I/O, antivirus, clock and timer services.

TRANSIENT ROUTINES
• Stored on disk and read into memory only when needed.
• Example: routines that format disks; application programs, software and web browsers.
Cont…
• Generally, the operating
system occupies low memory
beginning with address 0.
• Key control information
comes first followed by the
various resident operating
system routines.
• The remaining memory, called the transient area, is
where application programs and transient operating
system routines are loaded.
Virtual vs Physical addresses
• "Virtual addresses" are used by the program.
• "Physical addresses" represent places in the
machine's "physical" memory.
Address Translation
• Logical Addresses
• With a virtual memory system, the main memory can be viewed as a
local store for a cache level whose lower level is a disk. Since it is
fully associative there is no need for a set field. The address just
decomposes into an offset field and a page number field. The number
of bits in the offset field is determined by the page size. The remaining
bits are the page number.
• An Example
• A computer uses 32-bit byte addressing. The computer uses paged
virtual memory with 4KB pages. Calculate the number of bits in the
page number and offset fields of a logical address.
• Answer
• Since there are 4K bytes in a page, the offset field must
contain 12 bits (2^12 = 4K). The remaining 20 bits are page number
bits. Thus a logical address is decomposed as shown below.

32-bit logical address = [ page number : 20 bits | offset : 12 bits ]
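As a rough illustration of the calculation above, the Python sketch below splits a 32-bit logical address into its 20-bit page number and 12-bit offset for a 4 KB page size. The function name and the example address are illustrative only.

# Sketch: split a 32-bit logical address into page number and offset
# for a 4 KB (2**12 byte) page size. Names are illustrative only.

PAGE_SIZE = 4096                            # 4 KB pages
OFFSET_BITS = PAGE_SIZE.bit_length() - 1    # 12 bits

def split_logical_address(addr):
    offset = addr & (PAGE_SIZE - 1)         # low 12 bits
    page_number = addr >> OFFSET_BITS       # remaining 20 bits
    return page_number, offset

# Example: logical address 0x12345678
page, off = split_logical_address(0x12345678)
print(hex(page), hex(off))                  # 0x12345 and 0x678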
Page Tables
• Virtual memory address translation
uses page tables. These are simple
arrays in memory indexed by page
number. A page table base register
(PTBR) holds the base address for
the page table of the current process.
• Each page table entry contains
information about a single page. The
most important part of this
information is a frame number —
where the page is located in physical
memory.
• Address translation combines the
frame number with the offset part of a
logical address to form a physical
address.
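A minimal Python sketch of this lookup, with the page table modelled as a small dictionary of made-up page-to-frame mappings. Real hardware walks an in-memory table located by the PTBR rather than a Python object; the sketch only shows how the frame number is combined with the offset.

# Sketch: translate a logical address to a physical address using a
# page table modelled as a dictionary indexed by page number.
# The mappings below are made-up illustrative values.

PAGE_SIZE = 4096
OFFSET_BITS = 12

page_table = {0: 5, 1: 9, 2: 3}    # page number -> frame number

def translate(logical_addr):
    page = logical_addr >> OFFSET_BITS
    offset = logical_addr & (PAGE_SIZE - 1)
    frame = page_table[page]                 # raises KeyError if page unmapped
    return (frame << OFFSET_BITS) | offset   # frame number + same offset

print(hex(translate(0x1ABC)))   # page 1 -> frame 9, offset 0xABC -> 0x9abc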
Dynamic address translation
• Dynamic address translation, or DAT, is the process of
translating a virtual address during a storage reference
into the corresponding real address.
• If the virtual address is already in central storage, the DAT
process may be accelerated through the use of a
translation lookaside buffer. If the virtual address is not in
central storage, a page fault interrupt occurs, z/OS® is
notified and brings the page in from auxiliary storage.
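A hedged Python sketch of the idea: a tiny TLB dictionary is consulted first, a miss falls back to the page table, and an unmapped page raises an exception standing in for the page fault interrupt. The structures and names are illustrative, not the actual z/OS mechanism.

# Sketch: dynamic address translation with a tiny TLB cache.
# A TLB miss falls back to the page table; an unmapped page raises a
# PageFault, standing in for the interrupt the OS would handle.

class PageFault(Exception):
    pass

tlb = {}                       # page number -> frame number (recently used)
page_table = {0: 5, 1: 9}      # illustrative mappings only

def dat(page_number):
    if page_number in tlb:                 # TLB hit: fast path
        return tlb[page_number]
    if page_number in page_table:          # TLB miss: walk the page table
        tlb[page_number] = page_table[page_number]
        return tlb[page_number]
    raise PageFault(page_number)           # page not in central storage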
Paging Supervisor
• In a VS operating system, all non-resident programs exist in
complete form only on auxiliary storage, which is where the OS
loads them first, and the total virtual storage size of all
executing programs usually exceeds the size of the real
storage of the computer.
• In OS/390, both virtual and real storage are divided into 4096-
byte chunks. The chunks are called pages on auxiliary storage,
and page frames in real RAM.
• Paging is the name of the mechanism used to maintain the
contents of real memory:
• When a program is first loaded, it is copied into contiguous
virtual storage pages on auxiliary storage, not into real storage
page frames in RAM.
• From that initial DASD storage location, a page is copied as
needed into a real storage page frame by the OS/390 Paging
Supervisor.
PART 2
3.2 Apply Virtual Memory and Cache Memory Management
3.2.1 Illustrate a Model of Virtual Memory
a. Demand Paging
b. Swapping
c. Shared Virtual Memory
3.2.2 Discover the following related to memory management terminologies
a. Fixed – partition memory management
b. Dynamic memory management
c. Segmentation
d. Paging
VIRTUAL MEMORY
• If I can see it and I can touch it,
it’s real.
• If I can’t see it but I can touch it,
it’s invisible.
• If I can see it but I can’t touch it,
it’s virtual.
• And if I can't see it and I can't
touch it, it's … gone!
Virtual Memory: Analogy
• When an item a shopper wants is not available in the aisle,
the store staff bring a few pieces of the item from the warehouse
and make them available to the shopper. The process is
transparent to the shopper except for a delay.
• Problems if there is no Virtual Memory:
• If a program is too large to fit in a small DRAM, then
  • either it cannot be run, or
  • the DRAM must be increased, or
  • the programmer must modularize the program to minimize
    references to the portion not in RAM, and take special
    measures to swap modules in and out.
• If more than one process is running, there must not
be any overlapping use of the same physical memory
locations. The programmer (or compiler) of those
programs must ensure this.
Virtual memory
• Doesn’t physically exist on a memory chip.
• It is an optimization technique and is implemented by the
operating system in order to give an application program
the impression that it has more memory than actually
exists.
• Virtual memory is a technique that allows the execution of
processes that are not completely in memory.
Types of memory:
• Real memory
• Main memory (RAM)
• Virtual memory
• Memory on disk
• Allows for effective multiprogramming and relieves the user of tight
constraints of main memory
Illustration of a Virtual Memory
• Components
  • Memory Management Unit (MMU): the mapping function between logical addresses and physical addresses
  • Operating System: controls the MMU
  • Mapping tables: guide the translation
Illustration of a Virtual Memory
PAGING
• A page is a unit of logical memory of a program
• A frame is a unit of physical memory (RAM)
• All pages are of the same size
• All frames are of the same size
• A frame is of the same size as a page
• Physical memory is divided into fixed-size blocks called
frames (the frame size is a power of 2).
• Logical memory is divided into blocks of same size called
pages.
• The OS keeps track of all free (available) frames, and
allocated frames in the page table.
Cont…
• To run a program of size n pages, the OS needs n free
frames to load the program.
• The OS sets up a page table for every process.
• The page table is used for converting logical addresses to
physical addresses.
• The frames allocated to the pages of a process need not
be contiguous; in general, the system can allocate any
empty frame to a page of a particular process.
• There is no external fragmentation.
• There is potentially a small amount of internal
fragmentation on the last page of a process (see the
sketch below).
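A small Python sketch of the arithmetic behind these two points, computing how many frames a program needs and how much of its last frame is wasted. The 10000-byte program size is an assumed example.

# Sketch: how many frames a program needs and how much internal
# fragmentation is wasted on its last page (illustrative sizes).

import math

PAGE_SIZE = 4096

def pages_needed(program_bytes):
    return math.ceil(program_bytes / PAGE_SIZE)

def internal_fragmentation(program_bytes):
    leftover = program_bytes % PAGE_SIZE
    return 0 if leftover == 0 else PAGE_SIZE - leftover

print(pages_needed(10000))             # 3 frames
print(internal_fragmentation(10000))   # 2288 bytes wasted in the last frame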
Cont…
• A logical address is made up of a page number and an offset; the
corresponding physical address is made up of a frame number and an
offset.
• The offset is the same in both the logical and the physical address.
Example of Paging
Demand Paging
• A demand paging system is quite similar to a paging
system with swapping where processes reside in
secondary memory and pages are loaded only on
demand, not in advance.
• When a context switch occurs, the operating system does
not copy any of the old program's pages out to the disk or
any of the new program's pages into the main memory.
Instead, it just begins executing the new program after
loading the first page and fetches that program's pages as
they are referenced.
While executing a program, if the program references a page which is
not available in the main memory because it was swapped out a little
while ago, the processor treats this invalid memory reference as a page
fault and transfers control from the program to the operating system
to demand the page back into the memory.
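A minimal Python sketch of demand paging, assuming a made-up backing_store dictionary: a page is loaded only on the first reference that faults, and later references find it resident.

# Sketch of demand paging: pages are loaded only when first referenced.
# The backing_store contents and page numbers are illustrative.

backing_store = {0: "page0 data", 1: "page1 data", 2: "page2 data"}
resident = {}          # pages currently in main memory

def reference(page_number):
    if page_number not in resident:          # page fault
        print(f"page fault on page {page_number}, loading from disk")
        resident[page_number] = backing_store[page_number]
    return resident[page_number]

reference(1)   # faults, loads page 1 from the backing store
reference(1)   # already resident, no fault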
Advantages
• Following are the advantages of Demand Paging:
• Large virtual memory.
• More efficient use of memory.
• There is no limit on degree of multiprogramming.

Disadvantages
• Number of tables and the amount of processor overhead
for handling page interrupts are greater than in the case
of the simple paged management techniques.
Swapping
• When you load a file or program, the file is stored in the
random access memory (RAM).
• Since RAM is finite, some files cannot fit on it.
• These files are stored in a special section of the hard
drive called the "swap file". "Swapping" is the act of using
this swap file.
• Swapping is a mechanism in which a process can be
swapped temporarily out of memory to a backing store
and then brought back into memory for continued
execution.
Memory Swapping Technique
• A process can be swapped temporarily out of memory to a
backing store, and then brought back into memory for
continued execution.
• Backing store: a fast disk large enough to accommodate
copies of all memory images for all users; it must provide direct
access to these memory images.
• Roll out, roll in: a swapping variant used for priority-based
scheduling algorithms; a lower-priority process is swapped out
so a higher-priority process can be loaded and executed.
• The major part of swap time is transfer time; total transfer time is
directly proportional to the amount of memory swapped (see the
sketch after this list).
• Modified versions of swapping are found on many systems
(i.e., UNIX, Linux, and Windows).
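A short Python sketch of that proportionality, using an assumed transfer rate and latency (the 50 MB/s and 8 ms figures are made up, not measured): moving ten times as much memory takes roughly ten times as long.

# Sketch: swap time grows linearly with the amount of memory moved.
# The transfer rate and latency below are assumed figures.

TRANSFER_RATE = 50 * 1024 * 1024     # bytes per second (assumed)
LATENCY = 0.008                      # seconds per transfer (assumed)

def swap_time(bytes_to_move):
    return LATENCY + bytes_to_move / TRANSFER_RATE

print(round(swap_time(10 * 1024 * 1024), 3))    # ~0.208 s for 10 MB
print(round(swap_time(100 * 1024 * 1024), 3))   # ~2.008 s for 100 MB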
Schematic View of Swapping
Shared Virtual Memory
• When programs are written, programmers take great care
to make sure that code is not needlessly repeated.
• Subroutines to handle particular functions are written and
used wherever possible. Subroutines that are useful to
many programs are gathered together into libraries.
• These libraries are shareable so that running programs do
not load several copies of the same code into memory.
Instead only one copy of the code is loaded and all of the
programs share that copy.
• Virtual memory makes it easy for processes to share
memory, as all memory accesses are decoded using page
tables.
• For processes to share the same virtual memory, the
same physical pages are referenced by many processes.
• The page tables for each process contain Page Table
Entries that map to the same physical page frame number (PFN).
Memory Allocation
MEMORY ALLOCATION
• CONTIGUOUS MEMORY ALLOCATION
  - Fixed Partition
  - Dynamic
• NON-CONTIGUOUS MEMORY ALLOCATION
  - Segmentation
  - Paging
Contiguous Memory Allocation
• Implies that a program’s data and instructions are assured
to occupy a single contiguous memory area.
• It is further subdivided into Fixed-partition storage
allocation strategy and variable-partition/dynamic partition
storage allocation strategy.

• There are 2 techniques for contiguous allocation:
1. Fixed
2. Dynamic / Variable
1. Fixed Partition Memory Management
• Processes with a small address space use small
partitions and processes with a large address space
use large partitions. This is known as fixed-partition
contiguous storage allocation (see the sketch after the
example).
EXAMPLE
Memory is divided into the Operating System area and three fixed
partitions; each partition has its own job queue:
• Job queue for partition 1 → PARTITION 1
• Job queue for partition 2 → PARTITION 2
• Job queue for partition 3 → PARTITION 3
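A minimal Python sketch of fixed partitions with one job queue per partition, assuming the common policy of queueing a job at the smallest partition that can hold it. The partition sizes, job names and the submit() helper are made up for illustration.

# Sketch: fixed partitions, each with its own job queue. A job is
# queued at the smallest partition large enough to hold it.
# Partition and job sizes (in KB) are illustrative.

partitions = {1: 100, 2: 300, 3: 600}        # partition number -> size (KB)
queues = {n: [] for n in partitions}         # one job queue per partition

def submit(job_name, job_size):
    fitting = [n for n, size in partitions.items() if size >= job_size]
    if not fitting:
        print(f"{job_name} ({job_size} KB) does not fit in any partition")
        return
    best = min(fitting, key=lambda n: partitions[n])   # smallest that fits
    queues[best].append(job_name)

submit("JOB_A", 80)    # queued at partition 1
submit("JOB_B", 250)   # queued at partition 2
print(queues)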
2. Dynamic/Variable Partition Memory Management
• This notion is derived from the parking of vehicles on the
side of a street, where whoever manages to enter first
gets the space.
• Two vehicles can leave a space between them that
cannot be used by any other vehicle.
• This means that, whenever a process needs memory, a
search for the space it needs is done.
• If contiguous space is available to accommodate that
process, then the process is loaded into memory.
• This phenomenon of processes entering and leaving memory
can cause the formation of unusable memory holes (like
the unused space between two vehicles). This is known
as External Fragmentation.
3 Strategies to allocate memory to a dynamic/variable partition
1. First Fit
2. Best Fit
3. Worst Fit
First Fit
• Allocate the first free block that is large enough for the new
process.
• This is a fast algorithm.

Example:
Programs to be loaded (sizes): 357, 210, 460, 491
Free memory blocks available for allocation: 200, 400, 600, 500, 300, 250

- Put 357 into the 400 block; 43 is left unused.
- Next, put 210 into the 600 block; 390 is left unused.
- Next, put 460 into the 500 block; 40 is left unused.
- Lastly, 491 does not fit into any remaining block (the largest
  remaining hole is 390), so program 491 cannot be processed.
Best fit
• Allocate the smallest block among those that are large enough for the
new process.
• The OS searches the entire list, or it can keep the list sorted by size
and stop at the first entry that is large enough for the process.
• This algorithm produces the smallest leftover block.

Example:
Programs to be loaded (sizes): 357, 210, 460, 491
Free memory blocks available for allocation: 200, 400, 600, 500, 300, 250

- Put 357 into the 400 block; 43 is left unused.
- Next, put 210 into the 250 block; 40 is left unused.
- Next, put 460 into the 500 block; 40 is left unused.
- Lastly, put 491 into the 600 block; 109 is left unused. Now all
  the programs can be processed.
Worst fit
• Allocate the largest block among those that are large enough for the
new process.
• A search or sort of the entire list is needed.
• This algorithm produces the largest leftover block.

Example:
Programs to be loaded (sizes): 357, 210, 460, 491
Free memory blocks available for allocation: 200, 400, 600, 500, 300, 250

- Put 357 into the 600 block; 243 is left unused.
- Next, put 210 into the 500 block; 290 is left unused.
- Next, try 460 in the largest remaining block (400); it cannot fit.
- Lastly, 491 also cannot fit anywhere, so only 2 of the programs
  will be processed.
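A minimal Python sketch that reproduces the three worked examples above on the same program sizes and free blocks; the allocate() helper and its output format are illustrative, not any particular OS's allocator. Each allocation shrinks the chosen hole by the program size.

# Sketch: first-fit, best-fit and worst-fit placement on the free-block
# list and program sizes used in the examples above.

def allocate(holes, programs, choose):
    holes = holes[:]                      # work on a copy of the hole list
    for prog in programs:
        candidates = [i for i, h in enumerate(holes) if h >= prog]
        if not candidates:
            print(f"program {prog}: no hole large enough")
            continue
        i = choose(candidates, holes)
        print(f"program {prog}: placed in hole of {holes[i]}, "
              f"{holes[i] - prog} left over")
        holes[i] -= prog

free_blocks = [200, 400, 600, 500, 300, 250]
programs = [357, 210, 460, 491]

print("First fit")
allocate(free_blocks, programs, lambda c, h: c[0])
print("Best fit")
allocate(free_blocks, programs, lambda c, h: min(c, key=lambda i: h[i]))
print("Worst fit")
allocate(free_blocks, programs, lambda c, h: max(c, key=lambda i: h[i]))

Running it prints the same placements as the examples: first fit and worst fit leave programs unplaced, while best fit places all four.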
Compaction:
• A method to overcome the external fragmentation problem.
• All free blocks are brought together as one large block of free space.
• For example, suppose a 640K program has just finished executing.
If there are no 640K programs available, the system might load a
250K program and a 300K program, but note that 90K remains
unallocated. If there are no 90K or smaller programs available, the
space will simply not be used. The little chunks of unused space will
be spread throughout memory, creating a fragmentation problem.
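A tiny Python sketch of compaction on an assumed memory layout: allocated regions are kept in order and the scattered free fragments are merged into a single hole at the end. The segment names and sizes are made up.

# Sketch of compaction: allocated regions are slid together so all
# free space ends up as one contiguous hole. Sizes in KB, illustrative.

memory = [("A", 250), ("free", 90), ("B", 300), ("free", 43), ("C", 100)]

def compact(layout):
    used = [seg for seg in layout if seg[0] != "free"]
    free_total = sum(size for name, size in layout if name == "free")
    return used + [("free", free_total)]

print(compact(memory))   # [('A', 250), ('B', 300), ('C', 100), ('free', 133)]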
Non-Contiguous Memory Allocation
• To resolve the problem of external fragmentation and to
enhance the degree of multiprogramming to a greater
extent, it was decided to sacrifice the simplicity of
allocating contiguous memory to every process.
• It was decided to allow a non-contiguous physical address
space for a process, so that a process could be allocated
memory wherever it was available.
• There are 2 techniques for non-contiguous allocation:
1. Paging
2. Segmentation
SEGMENTATION
• Divide each program into unequal-size blocks called segments.
• A program is a collection of segments.
• A segment is a logical unit such as:
  • main program,
  • procedure/function,
  • object,
  • local variables,
  • global variables,
  • stack,
  • symbol table, arrays
• When a program is loaded into memory, the operating
system builds a segment table listing the (absolute) entry
point address of each of the program's segments.
• Segment table entries (see the sketch below):
  • base: contains the starting physical address where the
    segment resides in memory.
  • limit: specifies the length of the segment.
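A minimal Python sketch of segment-table translation using assumed (base, limit) values: the offset is checked against the limit and then added to the base. The table contents are illustrative only.

# Sketch: translating a (segment, offset) pair with a segment table
# that holds a base address and a limit for each segment.

segment_table = {0: (1400, 1000),    # segment -> (base, limit), made-up values
                 1: (6300, 400),
                 2: (4300, 1100)}

def translate_segment(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                     # trap: addressing error
        raise ValueError("offset beyond segment limit")
    return base + offset

print(translate_segment(2, 53))    # 4300 + 53 = 4353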
SEGMENTATION:
USER VIEW OF A PROGRAM
Examine Segment
Examine Paging
PART 3
3.2.3 Apply cache operation to perform in operating system
a. CPU cache
b. Disk cache
c. Web cache
What is cache?
- A cache is located on each processor in today's systems.
- The processor may reference programs and data directly from its cache.
- Cache memory is extremely expensive compared to main memory.
Cache Memory
• The memory cache is high speed memory available inside
the CPU in order to speed up access to data and
instructions stored in RAM memory.
• Cache memory is the memory space of your processor
which it uses to queue the incoming processing
requests. It is used to store temporary data that is usually
used by the processor.
• The larger the cache memory, the better the
system performance.
• Cache memory is like a librarian's basket (analogy).

Borrowing books (without a cache)
• Student A asks for a book; the librarian gets it from the book rack.
• Student A returns the book; the librarian puts it back in the rack.
• Student B borrows the same book; the librarian gets it from the book rack again.
• Student B returns the book; the librarian puts it back in the rack.
[Figure: book rack, librarian, Student A/B]

Borrowing books (with cache memory)
[Figure: book rack, librarian with basket case, Student A/B]
• Student A asks for a book; the librarian gets it from the book rack.
• Student A returns the book; the librarian puts it in the basket case.
• Student B borrows the same book; the librarian gets it from the basket case.
CPU Cache
Small memories on or close to the CPU can operate faster than the
much larger main memory.
CPU cache
Disk cache
• A portion of RAM used to speed up access to data on a
disk. The RAM can be part of the disk drive itself
(sometimes called a hard disk cache or buffer).
• The most recently accessed data from the disk is stored in
a memory buffer. When a program needs to access data
from the disk, it first checks the disk cache to see if the
data is there.
• Disk caching can improve the performance of applications
significantly, because accessing data in RAM is much
faster than accessing a byte on a hard disk.
• Its purpose is to optimize I/O performance.
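A hedged Python sketch of that check-the-cache-first behaviour, using a small least-recently-used buffer of disk blocks. read_from_disk() is a placeholder for the real, much slower disk access, and the cache capacity is arbitrary.

# Sketch of a disk cache: recently accessed blocks are kept in a small
# in-memory buffer and checked before going to the disk.

from collections import OrderedDict

CACHE_CAPACITY = 4
cache = OrderedDict()          # block number -> block data (LRU order)

def read_from_disk(block):
    return f"data of block {block}"      # stands in for a slow disk read

def read_block(block):
    if block in cache:                   # cache hit: no disk access
        cache.move_to_end(block)
        return cache[block]
    data = read_from_disk(block)         # cache miss: go to the disk
    cache[block] = data
    if len(cache) > CACHE_CAPACITY:      # evict the least recently used block
        cache.popitem(last=False)
    return data

read_block(7)    # miss, read from disk
read_block(7)    # hit, served from the cache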
Web Cache
• A web cache is a mechanism for the temporary storage
(caching) of web documents, such as HTML pages and
images, to reduce bandwidth usage, server load, and
perceived lag.
• A web cache stores copies of documents passing through
it; subsequent requests may be satisfied from the cache if
certain conditions are met.
• Google's cache link in its search results provides a way of
retrieving information from websites that have recently
gone down and a way of retrieving data more quickly than
by clicking the direct link.
• Some search engines cache links in their search results
to serve up results pages faster. Clearing your cache
differs by browser. Caching remains a popular feature
because it remembers your login information and your
events and progress in certain Flash and web applications.
Cached versions of sites can be viewed by a browser when
offline; however, new site uploads will not load properly.
Web cache
Self Review
1. Define the swapping technique which is usually used in
memory management. (4 marks)

2. The operating system is a collection of software
routines. Describe the following:
i. Resident routines
ii. Transient routines
(4 marks)
Self Review
3. Briefly explain Paging in memory management.
(4 marks)
4. Define the following
i. Logical memory
ii. Physical memory (4 marks)

5. In dynamic memory management, a process is
loaded into a free partition by using the first fit, best
fit or worst fit allocation algorithm. Briefly
explain any TWO (2) of these algorithms.
(4 marks)
Self Review
6. Explain the concept of virtual memory in an operating
system. (4 marks)

7. Cache memory is a high speed memory kept
between the processor and RAM to increase the data
execution speed. Differentiate between Level 1 cache
and Level 2 cache in a CPU cache.
(4 marks)
