Introduction To Cache Mapping & Numericals
Presented by Group 19
Group Members:
Saransh Prajapati 22BCE10795
Nathan Joseph Savio Pereira 22BCE11187
Sahil Mall 22BCE11363
Prakhar Swarnkar 22BCE10282
What is Cache?
Cache memory, also known as CPU cache
memory, is a temporary storage area in
a computer that stores frequently
accessed data and instructions.
Cache Analogy for Common Terms
1. Memory Size: Think of this as the total capacity of the entire library building.
2. Cache Size: Here the L1 & L2 capacity is the cache size; the size determines how
many "quick access" books you can keep ready. The quickest and easiest data
retrieval comes from the smallest level.
3. Access Time: How quickly a book can be fetched; the nearby "quick access" shelves (cache) are far faster to reach than the main library stacks (main memory).
4. Tag Bits: Each book has a unique identifier/address label, precisely numbered
tags that instantly tell you:
a. Which section the book belongs to
b. Exact shelf location
c. Book's specific category
5. Number of Lines: Think of lines as individual shelves in the library. More
shelves mean more places to store books.
6. Line/Block Offset: The precise book location within a particular shelf. Imagine
each standardized bookshelf is like a folder; the line offset is the exact position of
a specific page within that folder.
Cache Mapping
Techniques
Direct Mapping
What is Direct Cache Mapping?
Direct Cache Mapping (or Direct-Mapped Cache) is a
technique used in computer architecture to organize cache
memory.
How Mapping is done?
Memory Address
Breakdown
A memory address is divided into three parts when using
direct cache mapping: 1) Block offset, 2) Index, 3) Tag.
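As a rough sketch of how these fields are extracted, the split can be done with shifts and masks; the field widths below (2-bit offset, 2-bit index) are illustrative assumptions, not values from the slides.

```python
# Minimal sketch: splitting an address into tag / index / block offset for a
# direct-mapped cache. The widths here are assumed for illustration only.
OFFSET_BITS = 2   # block size = 2^2 = 4 words
INDEX_BITS  = 2   # cache has 2^2 = 4 lines

def split_address(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)                   # lowest bits: word within block
    index  = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)   # middle bits: cache line
    tag    = addr >> (OFFSET_BITS + INDEX_BITS)                # remaining bits: tag
    return tag, index, offset

print(split_address(0b10110))   # -> (1, 1, 2)
```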
Advantages of Direct Cache Mapping:
Simplicity: It is relatively simple to implement because each block of
memory has exactly one possible location in the cache.
Fast Access: Cache lookup is fast because the cache index is directly
computed from the memory address.
Disadvantages of Direct Cache Mapping:
Cache Misses: Since multiple memory blocks can map to the same
cache line, there can be more cache misses, especially if memory
accesses are not well distributed.
Limited Flexibility: A memory block can only map to one cache line,
so the cache may not be used as efficiently as in other mapping
techniques (like associative or set-associative mapping).
Example
Let's say
we have a main memory of 32 words, a block size of 4 words, and a
cache memory of 16 words. Find the physical address bits and the
other address fields.
P.A. Bits: size = 32. Therefore, log2(32) = 5 bits.
Block Offset Bits: size of block = 4. Therefore, log2(4) = 2 bits.
Block Number Bits: No. of blocks = size of main memory / size of
each block = 32/4 = 8. Therefore, log2(8) = 3 bits.
Line Number Bits: No. of lines = size of cache memory / size of each line = 16/4
= 4.
Therefore, log2(4) = 2 bits.
Tag Bits: Block Number Bits - Line Number Bits = 3 - 2 = 1 bit.
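The same arithmetic can be cross-checked with a small Python sketch using the numbers from this example.

```python
import math

main_memory_words = 32
block_size_words  = 4
cache_size_words  = 16

pa_bits     = int(math.log2(main_memory_words))                        # 5
offset_bits = int(math.log2(block_size_words))                         # 2
block_bits  = int(math.log2(main_memory_words // block_size_words))    # 3
line_bits   = int(math.log2(cache_size_words // block_size_words))     # 2
tag_bits    = block_bits - line_bits                                   # 1

print(pa_bits, offset_bits, block_bits, line_bits, tag_bits)           # 5 2 3 2 1
```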
Fully Associative
Mapping
Fully Associative Mapping is a cache mapping
technique where any memory block can be placed in
any available cache line.
Key Features:
-Flexibility: Data is not restricted to specific cache
lines, enabling storage in any unused block.
-Efficient Utilization: Memory blocks can occupy any
cache line, optimizing cache space usage
Advantages:
• No Conflict Misses: Any block can occupy any
cache line.
• Higher Hit Rate: Flexible placement improves
cache utilization.
Drawbacks:
• Longer Comparison Time: All tags must be
compared for every access.
• Complex Hardware: Requires simultaneous comparison of all tags.
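The comparison-time drawback can be illustrated with a small Python sketch; real hardware compares all tags in parallel, so the sequential loop below is only a software stand-in, and the cache size and tags are made up.

```python
# Illustrative only: a fully associative lookup must check every line's tag.
class FullyAssociativeCache:
    def __init__(self, num_lines):
        self.lines = [None] * num_lines       # each entry stores a tag (or None if empty)

    def lookup(self, tag):
        # Every stored tag is examined -- the "longer comparison time" drawback.
        return any(stored == tag for stored in self.lines)

    def insert(self, tag):
        # Naive placement: first free line; no replacement policy shown.
        for i, stored in enumerate(self.lines):
            if stored is None:
                self.lines[i] = tag
                return

cache = FullyAssociativeCache(num_lines=4)
cache.insert(tag=13)
print(cache.lookup(13), cache.lookup(7))      # True False
```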
Example
A fully associative cache memory system is designed with
the following characteristics:
• Cache memory: Contains 64 blocks.
• Main memory: Contains 8,192 blocks, and each block
consists of 128 words.
• Word size: 16 bits (each word is 16 bits wide).
Questions:
1. How many bits are required to address the main
memory?
2. How many bits are needed to represent the Tag and
Word fields in this system? (Ignore the "block field" since
it's fully associative mapping.)
Example Contd..
1. Addressing the Main Memory
Total words in the main memory =
8,192 × 128 = 1,048,576 words.
The number of bits required to address 1,048,576 words
is:
Bits required = log2(1,048,576) = 20 bits.
2. Breaking the Address into Fields
Word field: Determines the specific word within a block.
Since each block has 128 words, the number of
bits required for the Word field is:
log2(128)=7 bits.
Tag field: The remaining bits in the address are used as
the Tag field. In fully associative mapping, there’s no
need for a Set field because any block from the main
memory can map to any block in the cache. Thus, the Tag
field is:
Tag bits=Total address bits−Word bits=20−7=13 bits.
Bits required to address the main memory: 20 bits.
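A short Python sketch, using only the numbers given in the example, to cross-check these results.

```python
import math

blocks_in_main_memory = 8192
words_per_block       = 128

total_words  = blocks_in_main_memory * words_per_block    # 1,048,576
address_bits = int(math.log2(total_words))                # 20
word_bits    = int(math.log2(words_per_block))            # 7
tag_bits     = address_bits - word_bits                   # 13

print(address_bits, word_bits, tag_bits)                  # 20 7 13
```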
Set Associative Mapping
Set-associative mapping is a cache memory mapping
technique that combines the benefits of both direct
mapping and fully associative mapping.
In this approach:
• The cache is divided into a fixed number of sets, and
each set contains a specific number of lines (called K-way
associative, where K is the number of blocks in each
set).
• A main memory block maps to a specific set but can
occupy any line within that set.
• To locate data in the cache, the set index is used to select a set, and the tags of all lines within that set are compared.
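A rough Python sketch of a K-way set-associative lookup; the field widths, the 2-way size, and the simple FIFO eviction below are assumptions made for illustration.

```python
# Sketch of a 2-way set-associative cache: the set index picks one set,
# and only the tags inside that set are compared.
OFFSET_BITS = 2
SET_BITS    = 2
K           = 2                               # 2-way set associative

cache = [[] for _ in range(1 << SET_BITS)]    # cache[set_index] = list of stored tags

def fields(addr):
    set_index = (addr >> OFFSET_BITS) & ((1 << SET_BITS) - 1)
    tag       = addr >> (OFFSET_BITS + SET_BITS)
    return set_index, tag

def lookup(addr):
    set_index, tag = fields(addr)
    return tag in cache[set_index]            # hit if any line in the set matches

def insert(addr):
    set_index, tag = fields(addr)
    lines = cache[set_index]
    if tag not in lines:
        if len(lines) == K:
            lines.pop(0)                      # simple FIFO eviction, illustration only
        lines.append(tag)

insert(0b1011010)
print(lookup(0b1011010), lookup(0b0000000))   # True False
```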
Example
E.g.: Main Memory Size: 128 B, Cache Size: 32 B, Block
Size: 4 B
2-Way Set Associative
Solution: Physical Address Space: 128 B = 2^7 B
This means that log2(2^7) = 7 bits are required for
addressing the main memory.
Thus, 7 bits of physical address are needed.
Block Size = 4 B = 2^2 B
This forms 2 bits of block offset.
Number of main memory blocks = 2^7 / 2^2 = 2^5 = 32
Cache Size = 2^5 B. Line Size = Block Size = 2^2 B
Thus, Number of lines in cache = 2^5 / 2^2 = 2^3 = 8 lines.
Since the cache is 2-way set associative, Number of sets = Number of lines / 2 = 8 / 2 = 4 sets.
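These sizes can be cross-checked with a small Python sketch; the set-bit and tag-bit lines at the end simply continue the same calculation and are not stated on the slide.

```python
import math

main_memory_bytes = 128
cache_bytes       = 32
block_bytes       = 4
ways              = 2                                   # K = 2 (2-way set associative)

pa_bits     = int(math.log2(main_memory_bytes))         # 7
offset_bits = int(math.log2(block_bytes))               # 2
num_blocks  = main_memory_bytes // block_bytes          # 32
num_lines   = cache_bytes // block_bytes                # 8
num_sets    = num_lines // ways                         # 4
set_bits    = int(math.log2(num_sets))                  # 2
tag_bits    = pa_bits - set_bits - offset_bits          # 3 (continuation, not on slide)

print(pa_bits, offset_bits, num_blocks, num_lines, num_sets, set_bits, tag_bits)
```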
Example Contd....
So we have 4 sets in cache and 32
blocks in the main memory. Now the
task is to decide which block will be
placed in which set.
To do so we apply a simple formula -
Block Number % Total Number of sets
The result of this gives the set in
which the block is to be mapped.
For example, if block number 0 is
considered, then 0 % 4 = 0, so block 0
is mapped to the 0th set; similarly, block
1 will be mapped to the 1st set since 1 % 4 = 1, and so on.
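A tiny Python sketch of the Block Number % Total Number of Sets rule for this example's 32 blocks and 4 sets.

```python
num_sets = 4

for block in range(8):                        # only the first 8 of the 32 blocks shown
    print(f"block {block} -> set {block % num_sets}")
# block 0 -> set 0, block 1 -> set 1, block 2 -> set 2, block 3 -> set 3,
# block 4 -> set 0, ...
```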
Thank you very much!