
AL-QALAM UNIVERSITY KATSINA, FACULTY OF COMPUTING AND INFORMATION TECH.,
DEPARTMENT OF COMPUTER SCIENCE, CSC331 (OPERATING SYSTEM II), 2023/2024

LECTURE NOTE PREPARED BY: MAL. USMAN MAHMUD (WARU) & MAL. MARDIYYA A. BAGIWA

Introduction:
Placement and replacement policies are fundamental concepts in the field of
computer science, particularly in the realm of caching and memory
management. These policies dictate how data is stored, accessed, and replaced
in structures such as CPU caches, translation lookaside buffers (TLBs), and virtual memory
systems. Understanding these policies is crucial for designing efficient and
high-performance computer systems.
1. Placement Policies:
Placement policies determine where newly arrived data should be stored
within a given data structure. The goal of placement policies is to optimize
data placement to improve access times and reduce latency. Common
placement policies include:
a. Direct Mapping:
 Direct mapping assigns each data item to exactly one location in
the cache or memory, computed from its address (for example, the
block number modulo the number of cache lines).
 It is simple and fast, but items that map to the same location
conflict with one another, which can result in poor performance.
b. Fully Associative Mapping:
 Fully associative mapping allows data to be placed anywhere in
the cache or memory.
 It provides flexibility but requires hardware to compare a lookup
against every entry, leading to higher hardware complexity and access times.
c. Set Associative Mapping:
 Set associative mapping divides the cache or memory into
multiple sets, with each set containing multiple lines or blocks.
 Data items are mapped to a specific set using a hashing function,
and a replacement policy chooses which line within the set
receives the data.
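To make the three mappings concrete, here is a minimal Python sketch that
computes the candidate cache line(s) for a memory address under each scheme.
The cache size, block size, and address are arbitrary assumptions chosen for
illustration, not values from any particular hardware:

NUM_LINES = 8        # total cache lines (illustrative assumption)
BLOCK_SIZE = 64      # bytes per block (illustrative assumption)

def block_number(address):
    """Memory block containing this byte address."""
    return address // BLOCK_SIZE

def direct_mapped_line(address):
    """Direct mapping: exactly one candidate line (block number mod lines)."""
    return block_number(address) % NUM_LINES

def set_associative_lines(address, ways):
    """Set-associative mapping: any line within one selected set."""
    num_sets = NUM_LINES // ways
    set_index = block_number(address) % num_sets
    return [set_index * ways + way for way in range(ways)]

def fully_associative_lines(address):
    """Fully associative mapping: any line in the cache."""
    return list(range(NUM_LINES))

addr = 0x1A40
print("direct:", direct_mapped_line(addr))            # one candidate line
print("2-way set:", set_associative_lines(addr, 2))   # two candidate lines
print("fully assoc:", fully_associative_lines(addr))  # all lines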
2. Replacement Policies:
Replacement policies dictate which data item should be evicted or
replaced when new data needs to be stored in a location that is already
occupied. The primary goal of replacement policies is to maximize cache
or memory utilization while minimizing the frequency of cache misses.
Common replacement policies include:
a. Least Recently Used (LRU):
 LRU replaces the least recently accessed data item in the cache
or memory.
 It leverages temporal locality by assuming that recently accessed
data is more likely to be accessed again in the near future.
b. First-In-First-Out (FIFO):
 FIFO replaces the oldest data item in the cache or memory.
 It is simple to implement but may not always provide optimal
performance, especially in scenarios where access patterns do
not align with arrival times.
c. Least Frequently Used (LFU):
 LFU replaces the least frequently accessed data item in the cache
or memory.
 It evicts infrequently accessed items first, on the assumption
that data accessed rarely in the past is less likely to be needed again.
d. Random Replacement:
 Random replacement selects a random data item for eviction.
 While simple to implement, it does not consider access patterns
or frequencies, potentially leading to suboptimal performance.
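In practice, LRU is often implemented with an ordered structure that moves an
entry to the "most recent" end on every access. Below is a minimal Python
sketch using collections.OrderedDict, assuming a fixed capacity; the worked
examples that follow trace each policy by hand.

from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache sketch: evicts the least recently used key."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # keys ordered oldest -> most recent

    def get(self, key):
        if key not in self.data:
            return None                 # miss
        self.data.move_to_end(key)      # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)   # evict least recently used
        self.data[key] = value

cache = LRUCache(2)
cache.put("A", 1)
cache.put("B", 2)
cache.get("A")           # A becomes most recently used
cache.put("C", 3)        # cache full, so B (the LRU key) is evicted
print(list(cache.data))  # ['A', 'C']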
a. Least Recently Used (LRU): - Evicts the least recently accessed data item.
- Example: Suppose a cache has 4 blocks and items A, B, C, D are accessed in
sequence with no re-accesses. When a new item E is accessed, LRU replaces the
block containing A, because A was accessed longest ago.
b. First-In-First-Out (FIFO): - Evicts the oldest data item. - Example: Using
the same cache, FIFO also replaces the block containing A, because it was
stored first. The two policies diverge once items are re-accessed: a second
access to A would protect it under LRU but not under FIFO.
c. Least Frequently Used (LFU): - Evicts the least frequently accessed data
item. - Example: If item A is accessed more times than B, C, or D, LFU
retains A and evicts whichever of the others has the lowest access count.
d. Random Replacement: - Selects a random data item for eviction. - Example:
A block is chosen at random for replacement, regardless of its access history.
Conclusion:
Placement and replacement policies play a critical role in optimizing the
performance of computer systems by efficiently managing data storage and
access. By understanding these policies and their implications, students can
design and implement more efficient caching and memory management
strategies in various computing environments.
Working Example on Replacement Policies:
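The Python sketch below simulates FIFO and LRU page replacement on a small
reference string and counts page faults for each policy. The reference string
and frame count are illustrative assumptions; they extend the A, B, C, D, E
examples above with one re-access so the two policies give different results.

def count_faults(reference_string, frames, policy):
    """Simulate a replacement policy ('FIFO' or 'LRU') and count page faults."""
    memory = []   # resident pages; the front of the list is evicted first
    faults = 0
    for page in reference_string:
        if page in memory:
            if policy == "LRU":
                # A hit refreshes recency under LRU (but not under FIFO).
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) >= frames:
            memory.pop(0)  # evict front: oldest (FIFO) or least recent (LRU)
        memory.append(page)
    return faults

refs = ["A", "B", "C", "D", "A", "E"]   # A is re-accessed before E arrives
for policy in ("FIFO", "LRU"):
    print(policy, "faults:", count_faults(refs, frames=4, policy=policy))

With 4 frames, both policies take five faults here, but they evict different
pages when E arrives: FIFO evicts A (loaded first), while LRU evicts B,
because the re-access to A made B the least recently used page.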
Understanding Working Sets and Thrashing in Computer Systems
Introduction:
Working sets and thrashing are critical concepts in computer systems,
especially in memory management. They play a crucial role in optimizing
system performance and resource utilization.
1. Working Sets:
The working set of a process refers to the collection of memory pages that the
process actively uses during a particular time interval. Here's a more detailed
exploration:
a. Temporal Locality: - Processes often exhibit temporal locality, meaning
they repeatedly access a subset of memory pages over short periods. - The
working set captures this behavior by identifying the pages accessed within a
specific timeframe.
b. Dynamic Size: - The size of a process's working set can vary dynamically
based on its execution phase, input data, and resource demands. - Processes
may have different working set sizes depending on their computational
requirements.
c. Working Set Model: - The Working Set Model, proposed by Peter
Denning, formalizes the concept of working sets. - It defines the working set
of a process as the set of pages accessed within a recent time window
(typically denoted Δ).
d. Importance in Memory Management: - Efficiently managing working
sets is crucial for reducing page faults and optimizing memory utilization. -
By keeping frequently accessed pages in memory, systems can minimize the
need for costly disk accesses.
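Following Denning's model, the working set W(t, Δ) is simply the set of
distinct pages referenced in the last Δ references ending at virtual time t.
A minimal Python sketch, assuming the reference string is indexed by virtual
time:

def working_set(reference_string, t, delta):
    """Denning's W(t, Δ): distinct pages in the window of the last
    `delta` references ending at virtual time t."""
    start = max(0, t - delta + 1)
    return set(reference_string[start:t + 1])

refs = [1, 2, 1, 3, 1, 2, 4, 4, 4, 5]
print(working_set(refs, t=5, delta=4))  # times 2..5 -> {1, 2, 3}
print(working_set(refs, t=9, delta=4))  # times 6..9 -> {4, 5}

Note how the working set shrinks at t = 9, when the process settles into a
tighter locality; a memory manager can use such estimates to decide how many
frames each process needs.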
2. Thrashing:
Thrashing occurs when a computer system spends a significant amount of
time swapping pages between main memory (RAM) and secondary storage
(e.g., disk) due to excessive paging activity. Let's explore this phenomenon in
more detail:
a. Causes: - Thrashing often results from overcommitting physical memory,
where the total memory demand of active processes exceeds available
physical memory capacity. - It can also occur when system resources are
overutilized or when memory is managed inefficiently.
b. Symptoms: - Symptoms of thrashing include a noticeable slowdown in
system performance, increased disk I/O activity, and high page fault rates. -
Users may experience delayed response times and sluggishness in
applications.
c. Mitigation Strategies: - To mitigate thrashing, system administrators can
adjust paging parameters, such as increasing the size of the page file or
allocating more physical memory. - Prioritizing processes based on their
working set sizes can help ensure that critical processes receive sufficient
memory resources. - Optimizing memory allocation algorithms and resource
scheduling policies can also alleviate thrashing.
d. Detection: - Thrashing can be detected by monitoring system performance
metrics, including page fault rates, disk utilization, memory usage, and system
responsiveness. - An increase in paging activity coupled with a decrease in
overall system performance is indicative of thrashing.
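As a rough illustration of detection, the sketch below samples the kernel's
swap counters from /proc/vmstat (Linux-specific) and flags sustained swap
traffic. The threshold and sampling interval are arbitrary assumptions for
this example; real monitors combine several metrics before concluding that a
system is thrashing.

import time

def read_vmstat():
    """Read kernel paging counters from /proc/vmstat (Linux-specific)."""
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            name, value = line.split()
            counters[name] = int(value)
    return counters

SWAP_OPS_PER_SEC_THRESHOLD = 100  # arbitrary assumption for this sketch

def check_thrashing(interval=5):
    before = read_vmstat()
    time.sleep(interval)
    after = read_vmstat()
    # pswpin/pswpout count pages swapped in from and out to swap space.
    swap_rate = ((after["pswpin"] - before["pswpin"]) +
                 (after["pswpout"] - before["pswpout"])) / interval
    if swap_rate > SWAP_OPS_PER_SEC_THRESHOLD:
        print(f"Possible thrashing: {swap_rate:.0f} swap ops/sec")
    else:
        print(f"Swap activity looks normal: {swap_rate:.0f} swap ops/sec")

check_thrashing()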
Conclusion:
Working sets and thrashing are essential concepts in memory management,
impacting system performance and responsiveness. By understanding the
principles behind working sets and the causes/effects of thrashing, students
can develop strategies to optimize memory usage and mitigate performance
degradation in computing environments.
