Cache Memory

 A special very-high-speed memory called a cache is sometimes used to increase the speed of processing by making current programs and data available to the CPU at a rapid rate.

 Cache memory is employed in computer systems to compensate for the speed differential between main memory access time and processor logic.

 The CPU logic is usually faster than main memory, with the result that processing speed is limited primarily by the speed of main memory.

 A technique used to compensate for the mismatch in operating speeds is to employ an extremely fast, small cache between the CPU and main memory, whose access time is close to that of the processor logic.

 The cache is used for storing segments of the program currently being executed in the CPU and temporary data frequently needed in the present calculations.
 If the active portions of the program and data are placed in a fast small memory,
the average memory access time can be reduced, thus reducing the total
execution time of the program. Such a fast small memory is referred to as a cache
memory.
 The basic operation of the cache is as follows. When the CPU needs to access
memory, the cache is examined.

 If the word is found in the cache, it is read from the fast memory.

 If the word addressed by the CPU is not found in the cache, the main memory
is accessed to read the word. A block of words containing the one just accessed
is then transferred from main memory to cache memory.

 Hit/Miss: When the CPU refers to memory and finds the word in cache, it is said to produce a hit. If the word is not found in cache, it is in main memory and the reference counts as a miss.

 Hit Ratio: The performance of cache memory is frequently measured in terms of a quantity called the hit ratio: the number of hits divided by the total number of CPU references to memory (hits plus misses).
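 For example, 950 hits out of 1,000 total CPU references to memory give a hit ratio of 950 / 1,000 = 0.95.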
Mapping:

 The transformation of data from main memory to cache memory is referred to as a mapping process.

 Three types of mapping procedures are of practical interest when considering the organization of cache memory:

1. Associative mapping

2. Direct mapping

3. Set-associative mapping

 Consider a main memory that can store 32K words of 12 bits each. The cache is capable of storing 512 of these words at any given time.

 For every word stored in cache, there is a duplicate copy in main memory.
 The CPU communicates with both memories. It first sends a 15-bit address to the cache (32K = 2^15, so 15 bits are needed to address main memory). If there is a hit, the CPU accepts the 12-bit data from the cache. If there is a miss, the CPU reads the word from main memory, and the word is then transferred to the cache.
Associative Mapping:

 The fastest and most flexible cache organization uses an associative memory.
 The associative memory stores both the address and content (data) of the
memory word. This permits any location in cache to store any word from main
memory.

 The diagram shows three words presently stored in the cache. The address value of 15 bits is shown as a five-digit octal number and its corresponding 12-bit word is shown as a four-digit octal number.

 A CPU address of 15 bits is placed in the argument register and the associative
memory is searched for a matching address.

 If the address is found, the corresponding 12-bit data is read and sent to the CPU.

 If no match occurs, the main memory is accessed for the word. The address-data
pair is then transferred to the associative cache memory.

 If the cache is full, an address-data pair must be displaced to make room for a pair
that is needed and not presently in the cache.

 The decision as to which pair is replaced is determined by the replacement algorithm that the designer chooses for the cache.
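
This lookup-and-displace behavior can be captured in a short Python sketch. It is an illustrative model rather than hardware: the 32K main memory contents, the FIFO replacement choice, and all names are assumptions for the example.

    from collections import OrderedDict

    CACHE_SIZE = 512                    # words the cache can hold at any time
    main_memory = [0] * (32 * 1024)     # 32K words (assumed contents)
    cache = OrderedDict()               # address -> data: any word can go in any slot

    def read(address):
        if address in cache:            # match on the stored address: a hit
            return cache[address]
        data = main_memory[address]     # no match: access main memory
        if len(cache) >= CACHE_SIZE:    # cache full: displace an address-data pair
            cache.popitem(last=False)   # FIFO here; the real policy is the designer's choice
        cache[address] = data           # transfer the address-data pair into the cache
        return data

The OrderedDict stands in for the associative memory: the lookup is keyed on the full address, so any word from main memory can occupy any cache location.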
Direct Mapping:

 The CPU address of 15 bits is divided into two fields.

 The nine least significant bits constitute the index field, and the remaining six bits form the tag field.
 The number of bits in the index field is equal to the number of address bits
required to access the cache memory.

 In the general case, there are 2^k words in cache memory and 2^n words in main memory. The n-bit memory address is divided into two fields:

 k bits for the index field, and

 n - k bits for the tag field.

 The direct mapping cache organization uses the n-bit address to access the main
memory and the k-bit index to access the cache.

 Each word in cache consists of the data word and its associated tag.

 When a new word is first brought into the cache, the tag bits are stored alongside
the data bits.

 When the CPU generates a memory request, the index field is used for the
address to access the cache.
 The tag field of the CPU address is compared with the tag in the word read from
the cache. If the two tags match, there is a hit and the desired data word is in
cache.

 If there is no match, there is a miss and the required word is read from main
memory. It is then stored in the cache together with the new tag, replacing the
previous value.

 The disadvantage of direct mapping is that the hit ratio can drop considerably if
two or more words whose addresses have the same index but different tags are
accessed repeatedly.
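
Before the worked example, here is a minimal Python sketch of the direct-mapped lookup, using the 15-bit address and nine-bit index from above (the memory contents and names are assumptions):

    INDEX_BITS = 9                                  # 512-word cache -> 9-bit index
    cache = [None] * (1 << INDEX_BITS)              # each entry is a (tag, data) pair
    main_memory = [0] * (1 << 15)                   # 32K words (assumed contents)

    def read(address):
        index = address & ((1 << INDEX_BITS) - 1)   # nine least significant bits
        tag = address >> INDEX_BITS                 # remaining six bits
        entry = cache[index]
        if entry is not None and entry[0] == tag:   # tags match: a hit
            return entry[1]
        data = main_memory[address]                 # a miss: read from main memory
        cache[index] = (tag, data)                  # store the word with its new tag
        return data

Note that the index always selects exactly one cache entry; only the tags are compared, which is why two addresses sharing an index evict each other.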

Example of direct mapping organization:

 As shown in previous figure, the word at address zero is presently stored in the
cache (index = 000, tag = 00, data = 1220). Suppose that the CPU now wants to
access the word at address 02000. The index address is 000, so it is used to access
the cache. The two tags are then compared. The cache tag is 00 but the address
tag is 02, which does not produce a match.
 Therefore, the main memory is accessed and the data word 5670 is transferred to
the CPU. The cache word at index address 000 is then replaced with a tag of 02 and
data of 5670.

Block size concept:

 The index field is now divided into two parts: the block field and the word field.
 In a 512-word cache there are 64 blocks of 8 words each, since 64 x 8 = 512.

 The block number is specified with a 6-bit field and the word within the block is
specified with a 3-bit field.

 The tag field stored within the cache is common to all eight words of the same
block.

 Every time a miss occurs, an entire block of eight words must be transferred from
main memory to cache memory.
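
As a sketch, the three-field split of the 15-bit address under this organization can be written directly in Python (the function name and the tag | block | word layout are assumptions):

    def split_address(address):
        word  = address & 0x7            # 3 bits: word within the block
        block = (address >> 3) & 0x3F    # 6 bits: block number (64 blocks)
        tag   = address >> 9             # remaining 6 bits
        return tag, block, word

    # e.g. split_address(0o02000) returns (2, 0, 0): tag 02, block 00, word 0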
Set-Associative Mapping:

 It was mentioned previously that the disadvantage of direct mapping is that two words with the same index in their address but different tag values cannot reside in cache memory at the same time.

 Set-associative mapping is an improvement over the direct mapping organization in that each word of cache can store two or more words of memory under the same index address.
 Each data word is stored together with its tag and the number of tag-data items in
one word of cache is said to form a set.

 In Figure, each index address refers to two data words and their associated tags.
Each tag requires six bits and each data word has 12 bits, so the word length is 2(6
+ 12) = 36 bits.

 An index address of nine bits can accommodate 512 words. Thus the size of the cache memory is 512 x 36. It can accommodate 1024 words of main memory since each word of cache contains two data words.

 In the figure, the words stored at addresses 01000 and 02000 of main memory are stored in cache memory at index address 000. Similarly, the words at addresses 02777 and 00777 are stored in cache at index address 777.

 When the CPU generates a memory request, the index value of the address is used
to access the cache. The tag field of the CPU address is then compared with both
tags in the cache to determine if a match occurs.

 The hit ratio will improve as the set size increases because more words with the
same index but different tags can reside in cache.
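
A short Python sketch of a two-way set lookup (the set structure, the FIFO displacement, and the names are assumptions; the organization itself does not fix a replacement policy):

    SETS = 512                          # nine-bit index -> 512 sets
    cache = [[] for _ in range(SETS)]   # each set holds up to two (tag, data) pairs
    main_memory = [0] * (1 << 15)       # 32K words (assumed contents)

    def read(address):
        index = address & 0x1FF         # nine-bit index selects the set
        tag = address >> 9              # six-bit tag
        for t, d in cache[index]:       # compare with both tags in the set
            if t == tag:
                return d                # a hit
        data = main_memory[address]     # a miss: fetch from main memory
        if len(cache[index]) == 2:      # set full: displace one pair (FIFO here)
            cache[index].pop(0)
        cache[index].append((tag, data))
        return data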
Problem 1:

A digital computer has a memory unit of 64K x 16 and a cache memory of 1K words. The cache uses direct mapping with a block size of 4 words.

a. How many bits are there in the tag, index, block and word fields of the
address format?

b. How many bits are there in each word of the cache, and how are they
divided into functions? Include the valid bit.

c. How many blocks can the cache accommodate?
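
A possible working, treating 64K as 2^16 and 1K as 2^10, and assuming one valid bit per cache word as the only control bit:

a. A 64K-word main memory needs a 16-bit address. The 1K-word cache needs a 10-bit index; with 4 = 2^2 words per block, the word field is 2 bits, the block field is 10 - 2 = 8 bits, and the tag field is 16 - 10 = 6 bits.

b. Each cache word holds 16 data bits + 6 tag bits + 1 valid bit = 23 bits.

c. The cache can accommodate 1K / 4 = 256 blocks.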


Cache Coherence:

 The primary advantage of cache is its ability to reduce the average access
time in uniprocessors. When the processor finds a word in cache during a
read operation, the main memory is not involved in the transfer.

 If the operation is to write, there are two commonly used procedures to update memory.

 Write-through policy: Both cache and main memory are updated with every
write operation.

 Write-back policy: Only the cache is updated and the location is marked so
that it can be copied later into main memory.
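
The two policies can be contrasted in a short Python sketch (the dictionary-based memories, the dirty set, and the names are assumptions for illustration):

    dirty = set()   # addresses written in cache but not yet copied to memory

    def write_through(cache, main_memory, address, value):
        cache[address] = value          # update the cache ...
        main_memory[address] = value    # ... and main memory on every write

    def write_back(cache, main_memory, address, value):
        cache[address] = value          # update only the cache
        dirty.add(address)              # mark the location for a later copy-back

    def evict(cache, main_memory, address):
        if address in dirty:            # copy the modified word back before discarding it
            main_memory[address] = cache[address]
            dirty.discard(address)
        del cache[address]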
 In a shared memory multiprocessor system, all the processors share a
common memory. In addition, each processor may have a local memory, part
or all of which may be a cache.

 The compelling reason for having separate caches for each processor is to
reduce the average access time in each processor.

 The same information may reside in a number of copies in some caches and
main memory. To ensure the ability of the system to execute memory
operations correctly, the multiple copies must be kept identical. This
requirement imposes a cache coherence problem.

 A memory scheme is coherent if the value returned on a load instruction is always the value given by the latest store instruction with the same address. Without a proper solution to the cache coherence problem, caching cannot be used in bus-oriented multiprocessors with two or more processors.
Conditions for Incoherence:

 Cache coherence problems exist in multiprocessors with private caches because of the need to share writable data. Read-only data can safely be replicated without cache coherence enforcement mechanisms.
 To illustrate the problem, consider the three-processor configuration with private caches shown in the previous slide. Sometime during the operation, an element X from main memory is loaded into the three processors, P1, P2, and P3. As a consequence, it is also copied into the private caches of the three processors.

 For simplicity, we assume that X contains the value of 52. The load on X to
the three processors results in consistent copies in the caches and main
memory.

 If one of the processors performs a store to X, the copies of X in the caches become inconsistent. A load by the other processors will not return the latest value. Depending on the memory update policy used in the cache, the main memory may also be inconsistent with respect to the cache. This is shown in the next slide.
 A store to X (of the value of 120) into the cache of processor P1 updates
memory to the new value in a write-through policy. A write-through policy
maintains consistency between memory and the originating cache, but the
other two caches are inconsistent since they still hold the old value. In a
write-back policy, main memory is not updated at the time of the store. The
copies in the other two caches and main memory are inconsistent. Memory
is updated eventually when the modified data in the cache are copied back
into memory.

Solutions to the Cache Coherence Problem:

 Various schemes have been proposed to solve the cache coherence problem in shared memory multiprocessors. We discuss some of these schemes briefly here.

 A simple scheme is to disallow private caches for each processor and have a shared cache memory associated with main memory. Every data access is made to the shared cache. This method violates the principle of keeping the cache close to the CPU and increases the average memory access time. In effect, this scheme solves the problem by avoiding it.
 For performance considerations it is desirable to attach a private cache to each processor. One scheme that has been used allows only nonshared and read-only data to be stored in caches. Such items are called cachable. Shared writable data are noncachable. The compiler must tag data as either cachable or noncachable, and the system hardware makes sure that only cachable data are stored in caches. The noncachable data remain in main memory. This method restricts the type of data stored in caches and introduces extra software overhead that may degrade performance.
