Cache Memory

Cache Memory in Assembly language

Uploaded by

muzammilsohail76

Cache Memory

Outline
• Cache Memory Principles
• Elements of Cache Design
• Cache Addresses
• Cache Size
• Mapping Function
• Direct
• Associative
• Set Associative
Cache Memory Principles
• Cache memory is intended to give memory speed approaching that of
the fastest memories available, and at the same time provide a large
memory size at the price of less expensive types of semiconductor
memories.
• There is a relatively large and slow main memory together with a
smaller, faster cache memory. The cache contains a copy of portions
of main memory.
• When the processor attempts to read a word of memory, a check is
made to determine if the word is in the cache. If so, the word is
delivered to the processor.
• If not, a block of main memory, consisting of some fixed number of
words, is read into the cache and then the word is delivered to the
processor. Because of the phenomenon of locality of reference, when
a block of data is fetched into the cache to satisfy a single memory
reference, it is likely that there will be future references to that same
memory location or to other words in the block.
Cache and Main Memory
• Figure depicts the use of multiple levels of cache. The L2 cache is
slower and typically larger than the L1 cache, and the L3 cache is
slower and typically larger than the L2 cache.
Structure of cache and main
memory
Figure depicts the structure of a cache/main-memory system.
• Main memory consists of up to 2^n addressable words, with each
word having a unique n-bit address.
• For mapping purposes, this memory is considered to consist of a
number of fixed-length blocks of K words each.
• That is, there are M = 2^n / K blocks in main memory.
• The cache consists of m blocks, called lines.
• Each line contains K words, plus a tag of a few bits.
• Each line also includes control bits (not shown),
such as a bit to indicate whether the line has been modified since being loaded into the cache.
• The length of a line, not including tag and control bits, is the line size.
• The line size may be as small as 32 bits, with each “word” being a single
byte; in this case the line size is 4 bytes.
• The number of lines is considerably less than the number of main memory
blocks (m<<M). At any time, some subset of the blocks of memory resides
in lines in the cache. If a word in a block of memory is read, that block is
transferred to one of the lines of the cache. Because there are more blocks
than lines, an individual line cannot be uniquely and permanently
dedicated to a particular block. Thus, each line includes a tag that
identifies which particular block is currently being stored. The tag is usually
a portion of the main memory address, as described later in this section.
Cache Read Operation

• The processor generates the read address (RA) of a word to be read.
If the word is contained in the cache, it is delivered to the processor.
Otherwise, the block containing that word is loaded into the cache,
and the word is delivered to the processor.
ELEMENTS OF CACHE DESIGN
Cache Addresses
• A logical cache, also known as a virtual cache, stores data using
virtual addresses. The processor accesses the cache directly, without
going through the MMU. A physical cache stores data using main
memory physical addresses.
Mapping Function
• Direct Mapping
• Associative Mapping
• Set Associative Mapping/ K-way set associative Mapping
DIRECT MAPPING
• The simplest technique, known as direct mapping, maps each block of
main memory into only one possible cache line. The mapping is
expressed as
• i = j modulo m
where
• i = cache line number
• j = main memory block number
• m = number of lines in the cache
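A quick arithmetic check of the mapping rule, using a hypothetical cache of m = 4 lines:

```python
m = 4                        # hypothetical number of cache lines
for j in [0, 1, 4, 5, 9]:    # main memory block numbers
    print(j, '->', j % m)    # blocks 0 and 4 share line 0; 1, 5, 9 share line 1
```

Because blocks 4 apart contend for the same line, a program alternating between such blocks would swap them continually.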
• For purposes of cache access, each main memory address can be
viewed as consisting of three fields. The least significant w bits
identify a unique word or byte within a block of main memory; in
most contemporary machines, the address is at the byte level.
• The remaining s bits specify one of the 2^s blocks of main memory.
The cache logic interprets these s bits as a tag of s - r bits (most
significant portion) and a line field of r bits. This latter field identifies
one of the m = 2^r lines of the cache.
• The direct mapping technique is simple and inexpensive to implement. Its main
disadvantage is that there is a fixed cache location for any given block. Thus, if a
program happens to reference words repeatedly from two different blocks that
map into the same line, then the blocks will be continually swapped in the cache,
and the hit ratio will be low (a phenomenon known as thrashing).
• Example 4.2 For all three cases, the example includes the following elements:
• The cache can hold 64 KBytes: cache size = 64 KB = 2^(r+w) = 2^16 => r + w = 16.
• Data are transferred between main memory and the cache in blocks of 4
bytes each: 2^w = 2^2 => w = 2.
This means that the cache is organized as 16K = 2^14 lines (r = 14) of 4 bytes each.
• The main memory consists of 16 MBytes, with each byte directly addressable
by an (s + w) = 24-bit address (2^24 = 16M). Thus, for mapping purposes, we can
consider main memory to consist of 4M blocks of 4 bytes each:
s = 24 - 2 = 22.
• Address format: Tag = s - r = 8 | Line = r = 14 | Word = w = 2
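The field widths derived above can be rechecked with integer logarithms, using only the sizes stated in the example:

```python
from math import log2

cache_size = 64 * 1024          # 64 KB cache
block_size = 4                  # 4-byte blocks (lines)
mem_size   = 16 * 1024 * 1024   # 16 MB main memory

w = int(log2(block_size))                  # word field bits
r = int(log2(cache_size // block_size))    # line field bits
s = int(log2(mem_size)) - w                # block-number bits
print(w, r, s, s - r)                      # 2 14 22 8 (tag = s - r = 8)
```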

Figure: direct-mapped cache organization for Example 4.2, with cache lines
numbered 0 to 16383 (= 2^14 - 1), main memory addresses 0 to 16777215
(= 2^24 - 1), and tag values 0 to 255 (= 2^8 - 1).


• 4.3: For the hexadecimal main memory addresses 111111, 666666, BBBBBB, show the following information, in hexadecimal format:
a. Tag, Line, and Word values for a direct-mapped cache, using the format of Figure 4.10
b. Tag and Word values for an associative cache, using the format of Figure 4.12
c. Tag, Set, and Word values for a two-way set-associative cache, using the format of Figure 4.15
a. For address 111111, with Tag = s - r = 8 bits, Line = r = 14 bits, Word = w = 2 bits:
1 1 1 1 1 1
0001 0001 0001 0001 0001 0001
• Tag = 11
• Line = 0444
• Word = 1
• (other parts and questions 4.8, 4.11,4.12 All are done in class)
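The field splits of Example 4.3 can be reproduced with plain bit manipulation. This sketch assumes the Example 4.2 parameters (24-bit addresses, w = 2, r = 14) and, for the two-way set-associative case, d = 13 set bits (2^14 lines / 2 lines per set = 2^13 sets):

```python
def split(addr, *widths):
    """Split a 24-bit address into fields of the given bit widths, MSB first."""
    fields, pos = [], 24
    for nbits in widths:
        pos -= nbits
        fields.append((addr >> pos) & ((1 << nbits) - 1))
    return fields

addr = 0x111111
tag, line, word = split(addr, 8, 14, 2)   # direct-mapped: Tag | Line | Word
print(hex(tag), hex(line), word)          # 0x11 0x444 1

tag, word = split(addr, 22, 2)            # associative: Tag | Word
print(hex(tag), word)                     # 0x44444 1

tag, set_, word = split(addr, 9, 13, 2)   # two-way set-assoc: Tag | Set | Word
print(hex(tag), hex(set_), word)          # 0x22 0x444 1
```

Part (a) matches the values worked above (Tag = 11, Line = 0444, Word = 1).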
ASSOCIATIVE MAPPING
• Associative mapping overcomes the disadvantage of direct mapping
by permitting each main memory block to be loaded into any line of
the cache. In this case, the cache control logic interprets a memory
address simply as a Tag and a Word field. The Tag field uniquely
identifies a block of main memory. To determine whether a block is in
the cache, the cache control logic must simultaneously examine every
line's tag for a match.
Two Fields Only

Tag = s Word = w
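An associative lookup can be sketched as a tag search over all lines; the loop below is only an illustration, since real hardware compares every line's tag simultaneously (the stored block contents here are made-up toy data):

```python
W = 2                                       # word-field bits (4-byte blocks)
lines = {0x044444: ['a', 'b', 'c', 'd']}    # tag -> cached block (toy data)

def assoc_read(addr):
    tag, word = addr >> W, addr & ((1 << W) - 1)
    if tag in lines:            # hardware checks all tags in parallel
        return lines[tag][word]
    return None                 # miss: the block would be fetched from memory

print(assoc_read(0x111111))     # tag 0x44444 hits, word 1 -> prints 'b'
```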
SET-ASSOCIATIVE MAPPING
• Set-associative mapping is a compromise that exhibits the strengths
of both the direct and associative approaches while reducing their
disadvantages.
• In this case, the cache consists of a number of sets, each of which
consists of a number of lines. The relationships are
• m = v × k
• i = j modulo v
where i = cache set number, j = main memory block number, m = number of
lines in the cache, v = number of sets, and k = number of lines in each set.
• Main memory format:
Tag = s - d | Set = d | Word = w
4.1. A set-associative cache consists of 64 lines, or slots, divided into four-line sets. Main memory
contains 4K blocks of 128 words each. Show the format of main memory addresses.
• k = 4, m = 64
• m = v × k => v = m / k = 64 / 4 = 16
• v = 2^d => 16 = 2^4 = 2^d => d = 4
• Block size = 128 words = 2^w = 2^7 => w = 7
• Number of blocks in memory = 2^s = 4K = 2^12 => s = 12
• Address format: Tag = s - d = 8 | Set = d = 4 | Word = w = 7
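The Problem 4.1 arithmetic can be rechecked directly, using only the numbers given in the problem:

```python
from math import log2

k, m = 4, 64                  # lines per set, total cache lines
v = m // k                    # number of sets = 16
d = int(log2(v))              # set field bits
w = int(log2(128))            # 128 words per block
s = int(log2(4 * 1024))       # 4K blocks in main memory
print(d, w, s - d)            # 4 7 8 -> Tag = 8 | Set = 4 | Word = 7
```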
(Questions 4.2,4.3,4.4(c) ,4.5,4.6 done in class)
