OS Project Replacement Policies
A cache replacement policy is a method for deciding which items to evict (remove) from a cache
when the cache reaches its capacity. This decision is crucial because it affects the efficiency and hit
rate of the cache. There are several replacement policies, each with different advantages and
implementations, depending on the use case and workload.
Least Recently Used (LRU)
This policy removes the item that has gone the longest without being accessed, based on the observation that “if a block has not been used recently, it is often less likely to be accessed in the near future.” LRU is typically implemented with a hashmap for fast lookup combined with a doubly linked list that keeps entries ordered by recency; on each access the entry is moved to the front, and the entry at the back is evicted when the cache is full.
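As an illustration, here is a minimal Python sketch of an LRU cache. It uses collections.OrderedDict as a stand-in for the hashmap-plus-linked-list combination; the class and method names (LRUCache, get, put) are only illustrative, not part of the project.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: OrderedDict keeps entries ordered by recency."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        # Mark the entry as most recently used.
        self.entries.move_to_end(key)
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            # Evict the least recently used entry (front of the ordering).
            self.entries.popitem(last=False)
```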
Least Frequently Used (LFU)
This policy removes the item that has been accessed the fewest times. A hashmap provides O(1) access to cache entries, while a min-heap ordered by access frequency identifies the eviction candidate. Each time an item is accessed, its frequency count is incremented, and the min-heap is updated.
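One possible Python sketch of this structure is shown below. Instead of updating heap entries in place, it uses the common lazy-deletion variation: every frequency bump pushes a fresh heap entry, and stale entries are skipped at eviction time. The names LFUCache, get, and put are illustrative.

```python
import heapq
import itertools

class LFUCache:
    """LFU sketch: dict for O(1) lookup + min-heap keyed on access frequency."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}            # key -> [value, frequency]
        self.heap = []               # (frequency, insertion order, key)
        self.counter = itertools.count()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries[key][1] += 1    # bump the frequency count
        heapq.heappush(self.heap, (self.entries[key][1], next(self.counter), key))
        return self.entries[key][0]

    def put(self, key, value):
        if key in self.entries:
            self.entries[key][0] = value
            self.get(key)            # reuse get() to bump the frequency
            return
        if len(self.entries) >= self.capacity:
            # Pop heap entries until one matches its key's current frequency.
            while self.heap:
                freq, _, victim = heapq.heappop(self.heap)
                if victim in self.entries and self.entries[victim][1] == freq:
                    del self.entries[victim]
                    break
        self.entries[key] = [value, 1]
        heapq.heappush(self.heap, (1, next(self.counter), key))
```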
First-In, First-Out (FIFO)
This policy removes the oldest item in the cache, regardless of how recently or how often it has been accessed. FIFO can be implemented using a simple queue: when an item is added to the cache, it is placed at the end of the queue, and when the cache is full, the item at the front of the queue (the oldest) is evicted.
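A minimal Python sketch of this queue-based scheme, using collections.deque for the queue and a dict for the cached data (class and method names are illustrative):

```python
from collections import deque

class FIFOCache:
    """FIFO sketch: a deque records insertion order, a dict holds the data."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}
        self.order = deque()             # oldest key sits at the front

    def get(self, key):
        return self.entries.get(key)     # lookups do not change the order

    def put(self, key, value):
        if key not in self.entries:
            if len(self.entries) >= self.capacity:
                oldest = self.order.popleft()   # evict the front of the queue
                del self.entries[oldest]
            self.order.append(key)
        self.entries[key] = value
```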
Random Replacement
This policy removes a random item from the cache. This can be implemented by simply selecting a
random item to evict whenever the cache is full. In an array-based implementation, choosing a
random index is straightforward.
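A small Python sketch of the array-based approach described above; the victim is chosen by random index, and a swap-remove keeps the list operation O(1). Names are illustrative.

```python
import random

class RandomCache:
    """Random replacement sketch: keys kept in a list so a random index can be drawn."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}
        self.keys = []                   # array of cached keys for random selection

    def get(self, key):
        return self.entries.get(key)

    def put(self, key, value):
        if key not in self.entries:
            if len(self.entries) >= self.capacity:
                idx = random.randrange(len(self.keys))   # pick a random victim
                victim = self.keys[idx]
                # Swap-remove: move the last key into the vacated slot.
                self.keys[idx] = self.keys[-1]
                self.keys.pop()
                del self.entries[victim]
            self.keys.append(key)
        self.entries[key] = value
```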
Most Recently Used (MRU)
This policy removes the most recently accessed item, under the assumption that the least recently
accessed items are more likely to be reused. MRU can also be implemented with a stack or doubly
linked list and hashmap. The most recently used item is kept at the top of the stack, and when the
cache is full, this item is removed.
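For comparison with the LRU sketch, here is a minimal Python sketch of MRU: the structure is the same, but eviction happens at the "recent" end of the ordering. Names are illustrative.

```python
from collections import OrderedDict

class MRUCache:
    """MRU sketch: like LRU, but evict the most recently used entry."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)    # most recently used sits at the back
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
            self.entries[key] = value
            return
        if len(self.entries) >= self.capacity:
            # Evict the most recently used entry (back of the ordering).
            self.entries.popitem(last=True)
        self.entries[key] = value
```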
Write-Back Caching
With write-back caching, modified data is written only to the cache; the underlying storage is updated later, when the modified entry is evicted (a sketch follows after this list).
Performance: Write-back caching reduces the frequency of writes to the underlying storage, which can improve performance, especially in systems with high write loads.
Risk of Data Loss: Since changes are kept in the cache until eviction, there is a risk of data loss if the system crashes before the modified data is written back to the main storage.
Dirty Bit: Each cache entry has a "dirty" bit that indicates whether the data in the cache differs from the underlying storage. If the dirty bit is set, the cache entry has been modified but not yet written back.
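The Python sketch below shows the dirty-bit mechanism under simplifying assumptions: backing_store is a hypothetical dict standing in for main memory, eviction simply removes the first-inserted entry (a real cache would combine this with one of the replacement policies above), and the class and method names are illustrative.

```python
class WriteBackCache:
    """Write-back sketch: writes only touch the cache and set a dirty bit;
    the backing store is updated when a dirty entry is evicted."""
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store     # hypothetical dict acting as main memory
        self.entries = {}                # key -> [value, dirty_bit]

    def read(self, key):
        if key not in self.entries:
            # Miss: load from the backing store (None if the key is absent).
            self._insert(key, self.backing.get(key), dirty=False)
        return self.entries[key][0]

    def write(self, key, value):
        self._insert(key, value, dirty=True)   # update cache only; mark dirty

    def _insert(self, key, value, dirty):
        if key not in self.entries and len(self.entries) >= self.capacity:
            victim, (v, d) = next(iter(self.entries.items()))
            if d:
                self.backing[victim] = v       # write back only if modified
            del self.entries[victim]
        self.entries[key] = [value, dirty]
```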
Write-Through Caching
With write-through caching, every write updates the cache and main memory at the same time (a sketch follows after this list).
1. Data Consistency: Since data is written to both the cache and main memory at the same time, main memory is always up-to-date. This minimizes the risk of data loss if the system crashes, because all changes are immediately saved to memory.
2. Increased Write Latency: Every write operation is sent to main memory, which can increase latency compared to write-back caching. This can be mitigated by using write buffers, which temporarily hold data to be written to main memory, allowing the CPU to proceed without waiting for the write to complete.
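A minimal Python sketch of the write-through path, under the same assumptions as before (backing_store is a hypothetical dict standing in for main memory; eviction is simplified; names are illustrative). Note how no write-back is needed at eviction time, since memory is always current.

```python
class WriteThroughCache:
    """Write-through sketch: every write updates the cache and the backing
    store together, so the backing store is always up to date."""
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store     # hypothetical dict acting as main memory
        self.entries = {}

    def read(self, key):
        if key not in self.entries:
            self._insert(key, self.backing.get(key))
        return self.entries[key]

    def write(self, key, value):
        self.backing[key] = value        # write to main memory immediately
        self._insert(key, value)         # keep the cache copy in sync

    def _insert(self, key, value):
        if key not in self.entries and len(self.entries) >= self.capacity:
            # Evict an arbitrary entry; no write-back needed, memory is current.
            self.entries.pop(next(iter(self.entries)))
        self.entries[key] = value
```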
Write-Behind Caching
Write Queue: Write-behind often uses a queue to hold the "dirty" data (data that has been modified in the cache) until it is time to write it to main memory.
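A short Python sketch of the write-queue idea, assuming a hypothetical backing_store dict and an explicit flush() call; in a real system the queue would typically be drained on a timer or by a background thread. Names are illustrative.

```python
from collections import deque

class WriteBehindCache:
    """Write-behind sketch: writes land in the cache and in a queue of dirty
    keys; flush() later drains the queue to the backing store."""
    def __init__(self, backing_store):
        self.backing = backing_store     # hypothetical dict acting as main memory
        self.entries = {}
        self.write_queue = deque()       # keys waiting to be written back

    def write(self, key, value):
        self.entries[key] = value
        self.write_queue.append(key)     # defer the write to main memory

    def flush(self):
        # Drain the queue; repeated keys simply rewrite the latest value.
        while self.write_queue:
            key = self.write_queue.popleft()
            self.backing[key] = self.entries[key]
```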