
2.2.5 Cache Memory with Associative Memory
Associative Cache
A type of CACHE designed to solve the problem of cache CONTENTION that plagues the
DIRECT MAPPED CACHE. In a fully associative cache, a data block from any memory
address may be stored in any CACHE LINE, and the whole address is used as the cache
TAG. Hence, when looking for a match, all the tags must be compared simultaneously with
the requested address, which demands expensive extra hardware. However, contention is
avoided completely: no block ever needs to be flushed unless the whole cache is full, and
then the least recently used block may be chosen.
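
In software, this lookup can be modeled with a small sketch. The following Python fragment is a minimal illustration only (the class name and parameters are invented for this example): a dictionary stands in for the parallel tag comparison across all lines, and the least recently used block is evicted when the cache is full.

```python
from collections import OrderedDict

# A minimal sketch of a fully associative cache (illustrative, not a
# hardware-accurate model). The whole block address acts as the tag, and
# the dict lookup stands in for comparing every line's tag at once.
class FullyAssociativeCache:
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = OrderedDict()  # tag -> block data, ordered by recency

    def access(self, block_address):
        if block_address in self.lines:            # some line's tag matched
            self.lines.move_to_end(block_address)  # mark most recently used
            return "hit"
        if len(self.lines) == self.num_lines:      # full: evict the LRU block
            self.lines.popitem(last=False)
        self.lines[block_address] = object()       # placeholder block data
        return "miss"
```
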
A set-associative cache is a compromise solution in which the cache lines are divided into
sets, and the middle bits of a block's address determine which set it will be stored in; within
each set the cache remains fully associative. A cache that has two lines per set is called two-
way set-associative and requires only two tag comparisons per access, which greatly reduces
the extra hardware required. A DIRECT MAPPED CACHE can be thought of as one-way
set-associative, while a fully associative cache is n-way set-associative, where n is the total
number of cache lines. Finding the right balance between associativity and total cache
capacity for a particular processor is a fine art; various current CPUs employ 2-way, 4-way
and 8-way designs.
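
To make the address split concrete, here is a short Python sketch; the block size and set count chosen below are illustrative assumptions, not values from the text.

```python
BLOCK_SIZE = 32  # bytes per cache line -> 5 offset bits (assumed for illustration)
NUM_SETS = 64    # sets in the cache   -> 6 set-index bits (assumed)

def split_address(address):
    offset = address % BLOCK_SIZE                   # low bits: byte within the block
    set_index = (address // BLOCK_SIZE) % NUM_SETS  # middle bits: which set
    tag = address // (BLOCK_SIZE * NUM_SETS)        # remaining high bits: the tag
    return tag, set_index, offset

# In a 2-way design, only the two lines of set `set_index` need their
# stored tags compared against `tag`, rather than every line in the cache.
```
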
REPLACEMENT ALGORITHMS OF CACHE MEMORY

Replacement algorithms are used when there is no available space in the cache in which to
place a new block. Four of the most common cache replacement algorithms are described below:

Least Recently Used (LRU):

The LRU algorithm selects for replacement the item that has been least recently used by the
CPU.

First-In-First-Out (FIFO):

The FIFO algorithm selects for replacement the item that has been in the cache for the
longest time.

Least Frequently Used (LFU):

The LFU algorithm selects for replacement the item that has been least frequently used by
the CPU.

Random:

The random algorithm selects an item for replacement at random.
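
The four policies differ only in how a victim line is chosen. The Python sketch below makes that comparison concrete; the per-line bookkeeping fields are assumptions made for this illustration.

```python
import random

# Illustrative victim selection under the four policies. Each cache line
# is modeled as a dict of bookkeeping fields (names assumed for this sketch).
def choose_victim(lines, policy):
    if policy == "LRU":   # evict the line with the oldest last access
        return min(lines, key=lambda line: line["last_used"])
    if policy == "FIFO":  # evict the line that was loaded earliest
        return min(lines, key=lambda line: line["loaded_at"])
    if policy == "LFU":   # evict the line with the smallest access count
        return min(lines, key=lambda line: line["use_count"])
    return random.choice(lines)  # random replacement
```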

NEED FOR A REPLACEMENT ALGORITHM

In direct mapping,

• There is no need for any replacement algorithm.
• This is because a main memory block can map only to one particular line of the cache.
• Thus, a new incoming block always replaces the existing block (if any) in that
particular line, as the sketch below illustrates.
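
A minimal Python sketch of this behavior (the cache size and names are assumptions for illustration): the line index is fixed by the block address, so no policy decision is ever needed.

```python
NUM_LINES = 8  # illustrative cache size

cache = [None] * NUM_LINES  # one stored tag (or None) per line

def access_direct_mapped(block_address):
    line = block_address % NUM_LINES  # the only line this block can occupy
    tag = block_address // NUM_LINES
    if cache[line] == tag:
        return "hit"
    cache[line] = tag  # always overwrite whatever is resident in that line
    return "miss"
```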

In set-associative mapping,

• Set-associative mapping is a combination of direct mapping and fully associative mapping.
• It uses fully associative mapping within each set.
• Thus, set-associative mapping requires a replacement algorithm.

In fully associative mapping,

• A replacement algorithm is required.
• The replacement algorithm suggests the block to be replaced when all the cache lines are
occupied.
• Thus, a replacement algorithm such as FIFO or LRU is employed.

LRU (LEAST RECENTLY USED)


• The block which was not used for the longest period of time in the past will be
replaced first.
• We can think of this strategy as the optimal cache-replacement algorithm looking
backward in time, rather than forward.
• LRU generally performs much better than FIFO replacement.
• LRU is also called a stack algorithm and can never exhibit Belady's anomaly.
• The most important problem is how to implement LRU replacement: an LRU
replacement algorithm may require substantial hardware support.

Example: Suppose we have the reference sequence 7, 0, 1, 2, 0, 3, 0, 4, 2, 3 and the cache has 3 lines.

There are a total of 8 misses under the LRU replacement policy: only the two repeated accesses to 0 hit, while the accesses to 7, 0, 1, 2, 3, 4, 2 and 3 miss.
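
This count can be checked with a short simulation. The Python sketch below (the function name is ours) replays the sequence against a 3-line LRU cache:

```python
from collections import OrderedDict

# Replay a reference sequence against an LRU cache and count the misses.
def lru_misses(sequence, num_lines):
    cache, misses = OrderedDict(), 0
    for block in sequence:
        if block in cache:
            cache.move_to_end(block)       # refresh recency on a hit
        else:
            misses += 1
            if len(cache) == num_lines:
                cache.popitem(last=False)  # evict the least recently used
            cache[block] = True
    return misses

print(lru_misses([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3))  # -> 8
```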

FIRST IN FIRST OUT POLICY


• The block which entered the cache first will be replaced first.
• This can lead to a problem known as "Belady's Anomaly": it states that if we increase
the number of lines in the cache memory, the number of cache misses may increase.

Belady's Anomaly: for some cache replacement algorithms, the page fault or miss rate
increases as the number of allocated frames increases.

Example: Suppose we have the reference sequence 7, 0, 1, 2, 0, 3, 0, 4, 2, 3 and the cache has 4 lines.

There are a total of 6 misses under the FIFO replacement policy: the accesses to 7, 0, 1, 2, 3 and 4 miss, while the later accesses to 0, 0, 2 and 3 hit.
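
The same kind of check works for FIFO; in this sketch the queue records insertion order only, and hits do not change it.

```python
from collections import deque

# Replay a reference sequence against a FIFO cache and count the misses.
def fifo_misses(sequence, num_lines):
    queue, resident, misses = deque(), set(), 0
    for block in sequence:
        if block not in resident:
            misses += 1
            if len(queue) == num_lines:           # evict the oldest resident block
                resident.discard(queue.popleft())
            queue.append(block)
            resident.add(block)
    return misses

print(fifo_misses([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 4))  # -> 6
```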

LEAST FREQUENTLY USED (LFU):

This cache algorithm uses a counter to keep track of how often an entry is accessed. With the
LFU cache algorithm, the entry with the lowest count is removed first. This method isn't used
that often, as it does not account for an item that had an initially high access rate and then was
not accessed for a long time.
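
As a minimal sketch of that bookkeeping (the class and variable names are assumptions for illustration), one counter per resident block is enough, and it also exposes the weakness just described: a block with a large historical count survives even after it stops being accessed.

```python
from collections import Counter

# A minimal LFU cache sketch: one access counter per resident block; the
# block with the lowest count is evicted first.
class LFUCache:
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.counts = Counter()  # access count per resident block

    def access(self, block):
        if block in self.counts:
            self.counts[block] += 1
            return "hit"
        if len(self.counts) == self.num_lines:
            victim, _ = min(self.counts.items(), key=lambda kv: kv[1])
            del self.counts[victim]  # evict the least frequently used block
        self.counts[block] = 1
        return "miss"
```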

