CacheNotes by Vansh Neggi
When the processor core writes to memory, the cache controller has two alternatives for
its write policy:
The controller can write to both the cache and main memory, updating the values in both
locations; this approach is known as writethrough. It ensures that the data in the cache
and the backing store are always consistent. Because of the extra write to main memory,
a writethrough policy is slower than a writeback policy.
Alternatively, the cache controller can write only to cache memory and not update main
memory; this is known as writeback or copyback.
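The difference between the two policies can be sketched in a few lines. This is an illustrative model, not a real controller: the `TinyCache` class, its fields, and its methods are invented for this example.

```python
# Minimal sketch of the two write policies. "main_memory" plays the role
# of the backing store; "dirty" tracks lines modified only in the cache.

class TinyCache:
    def __init__(self, policy):
        self.policy = policy          # "writethrough" or "writeback"
        self.main_memory = {}         # backing store: address -> value
        self.line = {}                # cached copies: address -> value
        self.dirty = set()            # addresses not yet written to memory

    def write(self, addr, value):
        self.line[addr] = value
        if self.policy == "writethrough":
            self.main_memory[addr] = value   # update both locations at once
        else:                                # writeback: defer the memory write
            self.dirty.add(addr)

    def flush(self):
        """Copy dirty lines back to main memory (writeback only)."""
        for addr in self.dirty:
            self.main_memory[addr] = self.line[addr]
        self.dirty.clear()

wt = TinyCache("writethrough")
wt.write(0x100, 42)
assert wt.main_memory[0x100] == 42   # cache and memory agree immediately

wb = TinyCache("writeback")
wb.write(0x100, 42)
assert 0x100 not in wb.main_memory   # memory is stale until written back
wb.flush()
assert wb.main_memory[0x100] == 42
```

Note how the writeback cache leaves main memory stale until `flush()` runs; in a real controller that copy-back happens when a dirty line is evicted.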
Writeback
When a cache controller uses a writeback policy, it writes valid data to cache memory
and not to main memory.
Consequently, valid cache lines and main memory may contain different data:
the cache line holds the most recent data, while main memory contains older data that has
not yet been updated.
Caches configured as writeback caches must use one or more dirty bits in the
cache line status information block. When a cache controller operating in writeback mode
writes a value to cache memory, it sets the line's dirty bit to true.
Advantages:
• Performance: Write operations are faster as they are initially done only on the cache.
• Reduced Memory Traffic: Fewer write operations are sent to the main memory,
reducing bandwidth usage.
In detail, show how cache efficiency is measured
There are two terms used to characterize the cache efficiency of a program:
the cache hit rate
and the cache miss rate.
HIT RATE
The hit rate is the number of cache hits divided by the
total number of memory requests over a given time interval. The value is expressed as
a percentage:
Hit Rate = (Cache Hits / Total Memory Requests) × 100%
Significance: A higher hit rate indicates better cache performance as more memory accesses are serviced
quickly from the cache.
MISS RATE
Definition: The percentage of memory accesses that are not found in the cache and must be
fetched from main memory:
Miss Rate = (Cache Misses / Total Memory Requests) × 100% = 100% − Hit Rate
Significance: A lower miss rate indicates better cache efficiency, as fewer accesses result in
time-consuming memory fetches.
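The two rates can be measured by counting hits and misses over a run of accesses. The sketch below assumes a small fully associative cache with FIFO eviction purely for illustration; the function name and parameters are invented.

```python
# Minimal sketch of measuring cache efficiency: replay a sequence of
# memory accesses against a tiny cache and compute both rates.

def cache_efficiency(accesses, cache_size=4):
    cache = []                       # cached addresses, oldest first
    hits = misses = 0
    for addr in accesses:
        if addr in cache:
            hits += 1                # serviced quickly from the cache
        else:
            misses += 1              # must be fetched from main memory
            if len(cache) == cache_size:
                cache.pop(0)         # evict the oldest line (FIFO, illustrative)
            cache.append(addr)
    total = hits + misses
    hit_rate = 100.0 * hits / total
    miss_rate = 100.0 * misses / total   # always 100% minus the hit rate
    return hit_rate, miss_rate

hit, miss = cache_efficiency([1, 2, 1, 3, 1, 2, 4, 1])
print(f"hit rate = {hit:.1f}%, miss rate = {miss:.1f}%")
# hit rate = 50.0%, miss rate = 50.0%
```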
How cache replacement policy selects a line during a cache miss
• On a cache miss, the cache controller selects a cache line from the set to
replace with new data from main memory; the line selected for replacement is
known as the victim.
• If the victim contains valid, dirty data, it must first be written back to main memory
before being replaced.
• Replacement Policy:
▪ Round-Robin: Sequentially selects the next cache line in the set for
replacement. Uses a victim counter that increments with each allocation.
Resets when reaching a maximum value.
▪ Pseudorandom: Randomly selects the next cache line in the set for
replacement. Uses a non-sequential incrementing victim counter. Resets
when reaching a maximum value.
o Fetch the requested data from main memory into the newly selected cache line:
▪ Update the cache line with the new data.
▪ Mark the cache line as valid and potentially dirty depending on the write
policy.
o Once the cache line is updated with the new data, resume processing the instruction
or data access that triggered the cache miss.
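The round-robin case above can be sketched as follows. This is a toy model under assumed names (`CacheSet`, `allocate`, a 4-line set), not a real controller: a victim counter picks the next line on each allocation, wraps at the set size, and dirty victims are copied back before reuse.

```python
# Sketch of round-robin victim selection within one cache set.

LINES_PER_SET = 4

class CacheSet:
    def __init__(self):
        self.lines = [None] * LINES_PER_SET   # each entry: line dict or None
        self.victim_counter = 0               # round-robin pointer
        self.writebacks = []                  # addresses copied back to memory

    def allocate(self, addr, data):
        victim = self.lines[self.victim_counter]
        if victim is not None and victim["valid"] and victim["dirty"]:
            # valid, dirty victim: write it back before replacing it
            self.writebacks.append(victim["addr"])
        self.lines[self.victim_counter] = {
            "addr": addr, "data": data, "valid": True, "dirty": False,
        }
        # increment the victim counter, resetting at its maximum value
        self.victim_counter = (self.victim_counter + 1) % LINES_PER_SET

s = CacheSet()
for a in range(4):
    s.allocate(a, f"data{a}")      # fill the set; counter wraps back to 0
s.lines[0]["dirty"] = True         # pretend line 0 was written to
s.allocate(99, "new")              # evicts line 0, triggering a writeback
print(s.writebacks)                # [0]
```

A pseudorandom policy would differ only in how the counter advances: non-sequentially rather than by one.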
IMP Write the steps for transferring the data between processor and main memory
Steps for Transferring Data Between the Processor and Main Memory
1. Processor Core Requests Data:
o The processor core needs data or instructions to execute.
o It first checks if the data is available in the cache.
2. Cache Check:
o Cache Hit: If the data is found in the cache (called a cache hit), the
processor uses it directly from the cache, which is very fast.
o Cache Miss: If the data is not in the cache (called a cache miss), the
processor needs to fetch it from the main memory.
3. Fetching Data from Main Memory:
o Data Transfer: The cache controller requests the data from the slower
main memory.
o Cache Line Transfer: Main memory sends a block of data called a cache
line to the cache. This transfer is done in blocks rather than individual
pieces to speed up the process.
4. Storing Data in Cache:
o The cache stores the fetched data temporarily.
o This allows the processor to access it quickly for future requests.
5. Write Buffer (if used):
o Write Operation: If the processor writes data to memory, the data is first
stored in a small, fast write buffer.
o Delayed Write: The write buffer gradually transfers this data to the main
memory at a slower pace, freeing up space in the cache for new data.
6. Accessing Data:
o The processor core accesses the data from the cache or waits for it to be
fetched and stored in the cache.
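The numbered steps above can be sketched as one read/write path. This is a simplified model with invented names: reads check the cache, fetch a whole cache line on a miss, and are then served from the cache; writes go through a small write buffer that drains to main memory later.

```python
# Sketch of the processor <-> cache <-> main memory transfer steps.

LINE_SIZE = 4   # words per cache line

main_memory = {addr: addr * 10 for addr in range(32)}   # fake backing store
cache = {}          # line base address -> list of LINE_SIZE words
write_buffer = []   # pending (addr, value) writes headed for main memory

def read(addr):
    base = addr - (addr % LINE_SIZE)        # start of the containing line
    if base not in cache:                   # steps 2-3: miss -> fetch a block
        cache[base] = [main_memory[base + i] for i in range(LINE_SIZE)]
    return cache[base][addr % LINE_SIZE]    # steps 4, 6: serve from the cache

def write(addr, value):
    write_buffer.append((addr, value))      # step 5: buffer the write

def drain_write_buffer():
    while write_buffer:                     # step 5: delayed write to memory
        addr, value = write_buffer.pop(0)
        main_memory[addr] = value

print(read(5))          # miss: fetches the line covering 4..7, returns 50
print(read(6))          # hit in the same cache line, returns 60
write(5, 999)
drain_write_buffer()
print(main_memory[5])   # 999
```

Fetching the whole line covering addresses 4..7 is why the second read hits: block transfers exploit the locality of nearby accesses.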