OS Project Replacement Policies


Cache replacement is necessary because caches have limited storage capacity.

A good replacement policy allows the cache to:

Stay within storage limits.

Maintain high performance by retaining relevant data.

Adapt to changing patterns in data access.

Maximize cost-efficiency without requiring excessive cache memory.

Reduce cache pollution by evicting rarely accessed or outdated data.

A cache replacement policy is a method for deciding which items to evict (remove) from a cache
when the cache reaches its capacity. This decision is crucial because it affects the efficiency and hit
rate of the cache. There are several replacement policies, each with different advantages and
implementations, depending on the use case and workload.

Optimal policy (Belady’s MIN algorithm)


Replace the block whose next access lies furthest in the future. This policy is provably optimal, but it requires knowing the future access sequence, so it cannot be implemented directly and is used mainly as a benchmark. Practical policies approximate it by looking at recent accesses and applying the principle of locality:

“If a block has not been used recently, it is often less likely to be accessed in the near future.”
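
Because MIN needs the full future reference string, it is normally evaluated offline in simulations. A minimal Python sketch (illustrative only; the function name and example reference string are assumptions, not part of the original text) replays a known reference string and evicts the block whose next use is furthest away:

# Belady's MIN over a known reference string (offline simulation).
def belady_min(references, capacity):
    cache, misses = set(), 0
    for i, block in enumerate(references):
        if block in cache:
            continue
        misses += 1
        if len(cache) < capacity:
            cache.add(block)
            continue

        def next_use(b):
            # Distance to the block's next access; blocks never used again
            # are the best victims.
            try:
                return references.index(b, i + 1)
            except ValueError:
                return float("inf")

        victim = max(cache, key=next_use)   # evict the block used furthest away
        cache.remove(victim)
        cache.add(block)
    return misses

# Example: 3-slot cache, classic reference string -> 7 misses.
print(belady_min([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))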

Least Recently Used (LRU)


This policy removes the item that has not been accessed for the longest time. An efficient way to
implement LRU is using a doubly linked list and a hashmap. The hashmap stores references to nodes
in the linked list, so we can update and move elements to the front (most recent) in O(1) time. When
the cache reaches capacity, the least recently used item (at the end of the linked list) is removed.
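
A minimal sketch in Python, assuming collections.OrderedDict (which internally pairs a hash map with a doubly linked list, matching the structure described above):

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the least recently used entry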

Least Frequently Used (LFU)


This policy evicts the item that has been used the least number of times. This can be implemented using a min-heap to keep track of items by their access frequency, along with a hashmap for O(1) access to cache entries. Each time an item is accessed, its frequency count is incremented, and the min-heap is updated.
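
A sketch in Python using heapq for the min-heap and dicts for O(1) access. One common way to handle frequency updates, assumed here, is lazy deletion: every frequency change pushes a fresh heap entry, and stale entries are skipped at eviction time:

import heapq
import itertools

class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}                 # key -> value
        self.freq = {}                 # key -> access count
        self.heap = []                 # (freq, age, key) entries, possibly stale
        self.age = itertools.count()   # tie-breaker: older entries evicted first

    def _push(self, key):
        heapq.heappush(self.heap, (self.freq[key], next(self.age), key))

    def get(self, key):
        if key not in self.data:
            return None
        self.freq[key] += 1
        self._push(key)                # the previous heap entry becomes stale
        return self.data[key]

    def put(self, key, value):
        if self.capacity <= 0:
            return
        if key in self.data:
            self.data[key] = value
            self.freq[key] += 1
            self._push(key)
            return
        while len(self.data) >= self.capacity:
            f, _, victim = heapq.heappop(self.heap)
            # Skip stale entries whose frequency has since increased.
            if victim in self.data and self.freq[victim] == f:
                del self.data[victim]
                del self.freq[victim]
        self.data[key] = value
        self.freq[key] = 1
        self._push(key)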

First In, First Out (FIFO)


This policy removes the oldest item in the cache, regardless of how often or recently it was accessed.

FIFO can be implemented using a simple queue. When an item is added to the cache, it is placed at
the end of the queue. When the cache is full, the item at the front of the queue (oldest) is evicted.
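
A minimal sketch in Python with collections.deque as the queue:

from collections import deque

class FIFOCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.order = deque()           # keys in insertion order

    def get(self, key):
        return self.data.get(key)      # lookups do not change the order

    def put(self, key, value):
        if key not in self.data:
            if len(self.data) >= self.capacity:
                oldest = self.order.popleft()   # front of the queue = oldest
                del self.data[oldest]
            self.order.append(key)
        self.data[key] = value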

Random Replacement


This policy removes a random item from the cache. This can be implemented by simply selecting a random item to evict whenever the cache is full. In an array-based implementation, choosing a random index is straightforward.
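
A minimal sketch in Python; here the victim is drawn from the dictionary's keys rather than by array index, but the idea is the same:

import random

class RandomCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = random.choice(list(self.data))   # pick any cached key
            del self.data[victim]
        self.data[key] = value
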
Most Recently Used (MRU)


This policy removes the most recently accessed item, under the assumption that the item just used is the least likely to be needed again soon (for example, during a one-pass sequential scan), while older items are kept. MRU can be implemented with a stack or a doubly linked list plus a hashmap: the most recently used item sits at the top, and when the cache is full, that item is removed.
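
A minimal sketch in Python, again assuming OrderedDict; it mirrors the LRU sketch above with the eviction end reversed:

from collections import OrderedDict

class MRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # most recent entry sits at the end
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=True)    # evict the most recently used entry
        self.data[key] = value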

Adaptive Replacement Cache (ARC)


ARC dynamically adjusts between LRU and LFU to optimize for workloads that change over time. ARC
is complex, involving multiple lists to track recently and frequently accessed items and dynamically
adapting the balance between them. It's typically implemented in specialized caching systems.

Write-Back Cache Policy


In a write-back policy, when data in the cache is modified, the changes are not immediately written
to the underlying storage. Instead, they are written back (or flushed) to the storage only when the
data is evicted from the cache or under specific conditions (like when a checkpoint is reached or
when the system needs to ensure data consistency).

Key Characteristics of Write-Back Caching

 Performance: Write-back caching reduces the frequency of writes to the underlying storage,
which can improve performance, especially in systems with high write loads.

 Risk of Data Loss: Since changes are kept in the cache until eviction, there’s a risk of data loss
if the system crashes before the modified data is written back to the main storage.

 Dirty Bit: Each cache entry has a "dirty" bit that indicates whether the data in the cache differs from the underlying storage. If the dirty bit is set, the entry has been modified but not yet written back; a small sketch follows this list.
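
A minimal write-back sketch in Python. backing_store is a hypothetical dict standing in for the slower underlying storage, and the eviction victim is chosen arbitrarily for brevity (a real cache would use one of the policies above):

class WriteBackCache:
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store
        self.data = {}                       # key -> (value, dirty_bit)

    def write(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            self._evict()
        self.data[key] = (value, True)       # dirty: newer than the backing store

    def read(self, key):
        if key not in self.data:
            if len(self.data) >= self.capacity:
                self._evict()
            self.data[key] = (self.backing[key], False)   # clean copy
        return self.data[key][0]

    def _evict(self):
        key, (value, dirty) = self.data.popitem()   # arbitrary victim
        if dirty:
            self.backing[key] = value               # flush only modified entries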

Write-Through Cache Policy


Write-through caching writes changes to both cache and memory immediately, keeping them in sync
but increasing write operations.

Key Characteristics of Write-Through Caching

1. Data Consistency: Since data is written to both the cache and main memory at the same
time, main memory is always up-to-date. This minimizes the risk of data loss if the system
crashes because all changes are immediately saved to memory.

2. Increased Write Latency: Every write operation is sent to main memory, which can increase
latency compared to write-back caching. This can be mitigated by using write buffers, which
temporarily hold data to be written to main memory, allowing the CPU to proceed without
waiting for the write to complete.

3. Simplicity: Write-through caching is simpler to implement since it doesn’t require "dirty" flags or tracking modified entries in the cache (see the sketch after this list).
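
A minimal write-through sketch in Python, with the same hypothetical backing_store dict standing in for main memory; every write updates both places synchronously:

class WriteThroughCache:
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store
        self.data = {}

    def write(self, key, value):
        self.backing[key] = value            # synchronous write to main memory
        if key not in self.data and len(self.data) >= self.capacity:
            self.data.popitem()              # arbitrary victim for brevity
        self.data[key] = value               # keep the cache in sync

    def read(self, key):
        if key not in self.data:
            if len(self.data) >= self.capacity:
                self.data.popitem()
            self.data[key] = self.backing[key]
        return self.data[key]
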
Write-Behind Cache Policy


The write-behind policy delays writing to main memory, updating it periodically instead of immediately or on eviction.
 Delayed Write: Instead of writing to main memory immediately, write-behind
caches delay the write operation, often consolidating multiple changes into a single
batch write.
 Improved Performance: By reducing the frequency of writes to main memory,
write-behind caches can improve system performance, especially for applications that
perform frequent, repetitive writes.
 Data Loss Risk: The delay in writing to main memory introduces a risk of data
loss if the system crashes before the data is written back.

 Write Queue: Write-behind often uses a queue to hold the "dirty" data (data that has been modified in the cache) until it is time to write it to main memory, as in the sketch below.
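
A minimal write-behind sketch in Python. backing_store is again a hypothetical dict, capacity handling is omitted for brevity, and flush() stands in for the periodic batch write described above:

from collections import deque

class WriteBehindCache:
    def __init__(self, backing_store):
        self.backing = backing_store
        self.data = {}
        self.queue = deque()                 # pending (key, value) "dirty" writes

    def write(self, key, value):
        self.data[key] = value
        self.queue.append((key, value))      # defer the write to main memory

    def read(self, key):
        return self.data.get(key, self.backing.get(key))

    def flush(self):
        # Called periodically (e.g. by a timer); drains the queue in one batch.
        while self.queue:
            key, value = self.queue.popleft()
            self.backing[key] = value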
