
CHAPTER 7

LARGE AND FAST: EXPLOITING MEMORY HIERARCHY

Topics to be covered
– Principle of locality

– Memory hierarchy

– Cache concepts and cache organization

– Virtual memory concepts


PRINCIPLE OF LOCALITY
The principle of locality states that programs access a relatively
small portion of their address space at any instant in time.
Two types of locality inherent in programs are:
• Temporal locality
– Items accessed recently are likely to be accessed again
soon.
– e.g., most programs contain loops, so instructions and
data are likely to be accessed repeatedly, showing high
amounts of temporal locality.
• Spatial locality
– If an item is referenced, items whose addresses are close
by will tend to be referenced soon.
– e.g., sequential instruction access, array data (illustrated
in the sketch below).
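As an illustrative sketch (a hypothetical array-summing loop, not part of the original slides), the C fragment below shows both kinds of locality: the loop instructions and the variable sum are reused on every iteration (temporal locality), while the array elements are accessed at consecutive addresses (spatial locality).

/* Illustrative sketch: summing an array exhibits both kinds of locality. */
#include <stdio.h>

#define N 1024

int main(void) {
    int a[N];
    int sum = 0;                    /* sum is reused every iteration: temporal locality   */

    for (int i = 0; i < N; i++)     /* loop instructions are re-executed: temporal locality */
        a[i] = i;

    for (int i = 0; i < N; i++)
        sum += a[i];                /* consecutive addresses a[0], a[1], ...: spatial locality */

    printf("%d\n", sum);
    return 0;
}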
MEMORY HIERARCHY
• Consists of multiple levels of memory with different
speeds and sizes.
• The goal is to provide the user with as much memory as
possible at a low cost, while providing access at the speed
offered by the fastest memory.
MEMORY HIERARCHY (Continued)

The memory hierarchy consists of multiple levels, but data is
copied between only two adjacent levels at a time:
1. Upper level (the smaller, faster level, closer to the CPU)
2. Lower level (the larger, slower level, farther from the CPU)

Level                  Speed     Size       Cost/bit   Implemented using
Closest to the CPU     Fastest   Smallest   Highest    SRAM memory
Intermediate           -         -          -          DRAM memory
Farthest from the CPU  Slowest   Biggest    Lowest     Magnetic disk

Figure: Memory hierarchy in a computer


CACHE MEMORY
Cache is the level of the memory hierarchy between the main
memory and the CPU.
– It is the upper level of the memory hierarchy.

Terms associated with cache

• Hit: The item/data requested by the processor is found in some
block in the cache.

• Miss: The item requested by the processor is not found in the
cache.
Terms associated with cache (Continued)

Hit rate: The fraction of memory accesses found in the cache.
Used as a measure of the performance of the cache.

Hit rate = (Number of hits) / (Number of accesses)
         = (Number of hits) / (# hits + # misses)

Miss rate: The fraction of memory accesses not found in the cache.

Miss rate = 1.0 - Hit rate
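As a small worked example (with assumed counts, not taken from the slides): if a program makes 1000 memory accesses and 950 of them are hits, then Hit rate = 950 / 1000 = 0.95 and Miss rate = 1.0 - 0.95 = 0.05.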
Terms associated with cache (Continued)

Hit time: The time to access the cache memory, which
includes the time needed to determine whether the access
is a hit or a miss.

Miss penalty: The time to replace a cache block with the
corresponding block from the lower level, plus the time to
deliver this block to the processor.
Accessing a Cache Location and Identifying a Hit

We need to know:
1. Whether a cache block has valid information
   - done using the valid bit
and
2. Whether the cache block corresponds to the requested word
   - done using tags
CONTENTS OF A CACHE MEMORY BLOCK

A cache memory block consists of the data bits, tag bits and
a valid (V) bit.
The index of a cache block and the tag contents of that
block uniquely specify the memory address of the word
contained in the cache block.
The V bit is set only if the cache block has valid information.

Cache index | V | Tag | Data
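As a rough sketch in C (the struct layout and field widths are assumptions, not from the slides), a direct-mapped cache line and its hit check might look like this; NUM_BLOCKS is set to 8 to match the example at the end of the chapter.

/* Illustrative sketch of a direct-mapped cache with 8 one-word blocks.
   Index = address mod 8, tag = address / 8 (field sizes are assumptions). */
#include <stdbool.h>
#include <stdint.h>

#define NUM_BLOCKS 8

struct cache_block {
    bool     valid;   /* V bit: set only if the block holds valid information */
    uint32_t tag;     /* upper address bits identifying which word is cached  */
    uint32_t data;    /* one data word                                        */
};

struct cache_block cache[NUM_BLOCKS];

/* Returns true on a hit: the indexed block is valid and its tag matches. */
bool is_hit(uint32_t word_address) {
    uint32_t index = word_address % NUM_BLOCKS;
    uint32_t tag   = word_address / NUM_BLOCKS;
    return cache[index].valid && cache[index].tag == tag;
}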
CACHE AND MAIN MEMORY STRUCTURE

Figure: Cache and main memory structure. The cache is a table of
lines, indexed 0, 1, 2, ..., each holding a valid (V) bit, a tag and
a data block. Main memory is organized as blocks identified by a
block address; each block is K words long and each entry is one
word wide.
CACHE CONCEPT

Figure: CPU <-> Cache <-> Main Memory. The processor exchanges
individual words with the cache (word transfer), while the cache
exchanges whole blocks with main memory (block transfer).
A Cache Example
Q. The series of memory address references given as word addresses are
22, 26, 22, 26, 16, 3, 16, and 18. Assume a direct-mapped cache with 8
one-word blocks that is initially empty. Label each reference in the list as a
hit or miss and show the contents of the cache after each reference.
Answer:
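A sketch of the working, assuming index = address mod 8 and tag = address / 8 for the 8-block direct-mapped cache:

Word address   Binary   Index   Tag   Hit or miss
22             10110    110     10    Miss
26             11010    010     11    Miss
22             10110    110     10    Hit
26             11010    010     11    Hit
16             10000    000     10    Miss
3              00011    011     00    Miss
16             10000    000     10    Hit
18             10010    010     10    Miss (replaces Mem[26] in block 010)

Final cache contents: block 000 holds Mem[16], block 010 holds Mem[18],
block 011 holds Mem[3], block 110 holds Mem[22]; all other blocks remain
invalid. The trace produces 4 hits and 4 misses.

The small C sketch below (assumed names, not from the slides) reproduces the same labelling:

/* Simulate the trace on an 8-block direct-mapped cache, initially empty. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_BLOCKS 8

int main(void) {
    unsigned trace[] = {22, 26, 22, 26, 16, 3, 16, 18};
    bool     valid[NUM_BLOCKS] = {false};
    unsigned tag[NUM_BLOCKS]   = {0};

    for (int i = 0; i < 8; i++) {
        unsigned index = trace[i] % NUM_BLOCKS;   /* low-order 3 address bits */
        unsigned t     = trace[i] / NUM_BLOCKS;   /* remaining address bits   */
        bool hit = valid[index] && tag[index] == t;
        printf("address %2u -> index %u, tag %u: %s\n",
               trace[i], index, t, hit ? "hit" : "miss");
        valid[index] = true;                      /* after a miss, the block is fetched */
        tag[index]   = t;
    }
    return 0;
}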
