Problem Bank 06: Assignment I
Assignment Description:
The assignment aims to provide a deeper understanding of caches by analysing their
behaviour using the cache implementation of the CPU-OS Simulator. The assignment has
three parts.
• Part I deals with Cache Memory Management with Direct Mapping
• Part II deals with Cache Memory Management with Associative Mapping
• Part III deals with Cache Memory Management with Set Associative Mapping
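Before the three parts, it may help to recall how each mapping scheme decomposes a memory address. The following is a minimal sketch, assuming byte addresses and power-of-two block/cache sizes; the function names and parameter values are illustrative, not taken from the CPU-OS Simulator:

```python
# Illustrative address breakdown for the three mapping schemes.
# Assumes byte addressing and power-of-two sizes (example values,
# not the simulator's internals).

def split_direct(addr, block_size, num_lines):
    """Direct mapping: offset | line index | tag."""
    offset = addr % block_size
    block = addr // block_size
    index = block % num_lines
    tag = block // num_lines
    return tag, index, offset

def split_associative(addr, block_size):
    """Fully associative: offset | tag (no index field)."""
    return addr // block_size, addr % block_size

def split_set_associative(addr, block_size, num_sets):
    """Set associative: offset | set index | tag."""
    offset = addr % block_size
    block = addr // block_size
    return block // num_sets, block % num_sets, offset

print(split_direct(100, 4, 8))           # → (3, 1, 0)
print(split_set_associative(100, 4, 4))  # → (6, 1, 0)
```

Direct mapping fixes the line a block can occupy, fully associative mapping lets it go anywhere, and set-associative mapping is the compromise between the two.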
Submission: You will have to submit this documentation file and the name of the file should
be GROUP-NUMBER.pdf. For Example, if your group number is 1, then the file name
should be GROUP-1.pdf.
Submit the assignment by 7th June 2020, through Canvas only. Files submitted by any means
outside Canvas will not be accepted or marked.
In case of any issues, please drop an email to the course TA, Ms. Michelle Gonsalves
([email protected]).
Caution!!!
Assignments are designed per group; they may look similar, but you may not notice the
minor differences between them. Hence, refrain from copying or sharing documents with
others. Any evidence of such practice will attract a severe penalty.
Evaluation:
Contribution Table:
Contribution (This table should contain the list of all the students in the group. Clearly
mention each student's contribution towards the assignment. Mention "No Contribution"
where applicable.)
a) Execute the above program by setting block size to 2, 4, 8, 16 and 32 for cache size =
8, 16 and 32. Record the observation in the following table.
Block Size    Cache size    Miss    Hit    Miss ratio (%)    Hit ratio (%)
2             8             171     171    50                50
From the graph, we can say that as the cache size increases, the hit ratio also increases.
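The trend can be reproduced outside the simulator with a toy direct-mapped model that only counts hits and misses. The access trace and sizes below are illustrative, so the figures will not match the table above:

```python
# Toy direct-mapped cache model (hit/miss counting only; this is a
# sketch, not the CPU-OS Simulator's implementation).

def direct_mapped_hits(trace, block_size, cache_size):
    num_lines = cache_size // block_size
    lines = [None] * num_lines           # stored tag per line
    hits = 0
    for addr in trace:
        block = addr // block_size
        index, tag = block % num_lines, block // num_lines
        if lines[index] == tag:
            hits += 1
        else:
            lines[index] = tag           # replace on miss
    return hits, len(trace) - hits

# Two sequential sweeps over a 32-byte working set: once the cache
# holds the whole working set, the second sweep hits every block.
trace = list(range(32)) * 2
for cache_size in (8, 16, 32):
    print(cache_size, direct_mapped_hits(trace, block_size=4, cache_size=cache_size))
```

With this trace the hit count jumps once the cache is large enough to keep all blocks resident, mirroring the hit-ratio growth observed in the simulator.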
Part II: Associative Mapped Cache
a) Execute the above program by setting block size to 2, 4, 8, 16 and 32 for cache size = 8,
16 and 32. Record the observation in the following table.
b) Plot a single graph of cache hit ratio Vs Block size with respect to cache size = 8, 16, and
32. Comment on the graph that is obtained.
Comment: From the graph shown below, we can draw the following observations:
1. Cache size = 8: block sizes 2, 4, and 8 perform almost identically, with block size 2
slightly outperforming block sizes 4 and 8 by 1.5% (first bar plot).
2. Cache size = 16: block sizes 4 and 16 perform best, with block size 4 slightly
outperforming block size 16 by 0.3% (second bar plot).
3. Cache size = 32: block sizes 16 and 32 perform best (third bar plot).
Summary: We can summarize the findings in the following two points:
1. To improve the performance of the cache (i.e., reduce the number of misses), we can
increase the cache size. This is evident from the plot: as the cache size increased
(8 => 16 => 32), the hit ratio also increased. The trade-off is that the access time
also grows as the cache size increases.
2. Another point to note is that within each cache size, as the block size increases,
the hit ratio also increases. This is because a larger block size takes advantage of
spatial locality. However, be careful: a larger block size can also increase the
miss penalty.
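The spatial-locality effect described above can be reproduced with a toy fully associative LRU cache replaying a sequential access pattern. This is a sketch only; the block and cache sizes are illustrative and the numbers will not match the simulator's tables:

```python
from collections import OrderedDict

# Toy fully associative LRU cache: hit ratio vs block size for a
# sequential sweep (illustrative sketch, not the simulator).

def hit_ratio(trace, block_size, cache_size):
    num_lines = max(1, cache_size // block_size)
    cache = OrderedDict()                  # block number -> None, in LRU order
    hits = 0
    for addr in trace:
        block = addr // block_size
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            if len(cache) >= num_lines:
                cache.popitem(last=False)  # evict least recently used
            cache[block] = None
    return round(100 * hits / len(trace), 2)

trace = list(range(256))                   # purely sequential sweep
for bs in (2, 4, 8, 16, 32):
    print(f"block size {bs:2}: hit ratio {hit_ratio(trace, bs, 32)}%")
```

For a sequential sweep, each block costs one compulsory miss followed by `block_size - 1` hits, so the hit ratio climbs with block size exactly as point 2 above describes.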
c) Fill up the following table for three different replacement algorithms and state
which replacement algorithm is better and why?
From the table we can conclude that, on average, LRU has a higher hit ratio than the Random
and FIFO algorithms. The result makes sense, since LRU takes advantage of temporal locality:
every binary search touches the same top-level midpoints first, so recently used blocks are
likely to be reused soon, and LRU keeps exactly those blocks in the cache. In addition, once
the search narrows to a small subarray, its elements lie in the block the algorithm is
already looking at, which further increases the hit ratio.
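The argument above can be made concrete with a toy experiment: replay the index trace of repeated binary searches through a small fully associative cache under each replacement policy. Everything here (trace shape, cache parameters, helper names) is an illustrative sketch, not the simulator's implementation:

```python
import random
from collections import OrderedDict, deque

# Toy fully associative cache with pluggable replacement policy
# (illustrative sketch; parameters are examples).

def simulate(trace, num_lines, policy, block_size=2):
    cache, fifo, lru, hits = set(), deque(), OrderedDict(), 0
    for addr in trace:
        b = addr // block_size
        if b in cache:
            hits += 1
            if policy == "LRU":
                lru.move_to_end(b)             # refresh recency
            continue
        if len(cache) >= num_lines:            # evict one block
            if policy == "FIFO":
                victim = fifo.popleft()        # oldest insertion
            elif policy == "LRU":
                victim = lru.popitem(last=False)[0]
            else:                              # Random
                victim = random.choice(sorted(cache))
            cache.discard(victim)
        cache.add(b)
        fifo.append(b)
        lru[b] = None
    return hits

def binary_search_trace(data, keys):
    """Record every array index a sequence of binary searches touches."""
    trace = []
    for key in keys:
        lo, hi = 0, len(data) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            trace.append(mid)
            if data[mid] == key:
                break
            if data[mid] < key:
                lo = mid + 1
            else:
                hi = mid - 1
    return trace

random.seed(0)
data = list(range(1024))
trace = binary_search_trace(data, random.sample(data, 200))
for p in ("LRU", "FIFO", "Random"):
    print(p, simulate(trace, num_lines=16, policy=p))
```

Because every search revisits the same root and near-root midpoints, LRU tends to pin those hot blocks while FIFO and Random can evict them.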
c) Plot the graph of Cache Hit Ratio Vs Cache size with respect to different replacement
algorithms. Comment on the graph that is obtained.
Summary: From the plot shown below, we can draw the following points:
1. As the cache size increases, the hit ratio also increases, irrespective of the algorithm used.
2. The Random replacement algorithm performed better with cache sizes 4, 16, and 64, while
LRU performed better with cache sizes 8 and 32.
Part III: Set Associative Mapped Cache
Execute the above program by setting the following Parameters:
• Number of sets (Set Blocks): 2-way
• Cache Type : Set Associative
• Replacement: LRU/FIFO/Random
a) Fill up the following table for three different replacement algorithms and state which
replacement algorithm is better and why?
Replacement Algorithm : Random
Block Size    Cache size    Miss    Hit    Hit ratio (%)
2 4 191 151 44.15
2 8 125 217 63.45
2 16 83 259 75.73
2 32 54 288 84.21
2 64 43 299 87.43
Replacement Algorithm: FIFO
Block Size    Cache size    Miss    Hit    Hit ratio (%)
2 4 196 146 42.69
2 8 108 234 68.42
2 16 86 256 74.85
2 32 58 284 83.04
2 64 47 295 86.26
Replacement Algorithm: LRU
Block Size    Cache size    Miss    Hit    Hit ratio (%)
2 4 196 146 42.69
2 8 109 233 68.13
2 16 85 257 75.15
2 32 56 286 83.63
2 64 47 295 86.26
If we compare the total number of misses made by each algorithm (Random: 496, FIFO: 495,
LRU: 493), LRU performed slightly better than FIFO and Random, which is consistent with
LRU also having the marginally higher average hit ratio of the three.
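The totals quoted above can be checked directly against the three tables:

```python
# Miss columns copied from the three tables above
# (block size 2, cache sizes 4, 8, 16, 32, 64).
random_misses = [191, 125, 83, 54, 43]
fifo_misses   = [196, 108, 86, 58, 47]
lru_misses    = [196, 109, 85, 56, 47]

print(sum(random_misses), sum(fifo_misses), sum(lru_misses))  # → 496 495 493
```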
b) Plot the graph of Cache Hit Ratio Vs Cache size with respect to different replacement
algorithms. Comment on the graph that is obtained.
Summary: From the plot shown below, we can draw the following points:
1. As the cache size increases, the hit ratio also increases, irrespective of the algorithm used.
2. The Random replacement algorithm performed better with cache sizes 4, 16, 32, and 64,
while FIFO performed better with cache size 8.
c) Fill in the following table and analyse the behaviour of the Set Associative Cache.
Which one is better and why?
Replacement Algorithm : LRU
Block Size, Cache size    Set Blocks    Miss    Hit    Hit ratio (%)
2, 64 2 – Way 47 295 86.26
2, 64 4 – Way 45 297 86.84
2, 64 8 – Way 47 297 86.84
An N-way set-associative cache provides N blocks in each set, giving a high degree of
conflict reduction, which in turn lowers the miss rate. However, increasing the
associativity makes the cache slower and more expensive to build.
Looking closely at the hit ratios, the 2-way, 4-way, and 8-way configurations give very
similar figures, so a 2-way set-associative cache would be the more economical option for
our execution scenario.
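The set-associative behaviour analysed in this part can be sketched as a toy N-way cache with LRU replacement. This is an illustrative model, not the simulator; the collision-heavy trace is chosen purely to show how extra ways reduce conflict misses:

```python
from collections import OrderedDict

# Toy N-way set-associative cache with LRU replacement
# (sketch only; parameters mirror part (c): block size 2, cache size 64).

class SetAssociativeCache:
    def __init__(self, block_size, cache_size, ways):
        self.block_size = block_size
        self.ways = ways
        self.num_sets = max(1, cache_size // block_size // ways)
        self.sets = [OrderedDict() for _ in range(self.num_sets)]
        self.hits = self.misses = 0

    def access(self, addr):
        block = addr // self.block_size
        s = self.sets[block % self.num_sets]
        tag = block // self.num_sets
        if tag in s:
            self.hits += 1
            s.move_to_end(tag)             # most recently used
        else:
            self.misses += 1
            if len(s) >= self.ways:
                s.popitem(last=False)      # evict LRU way in this set
            s[tag] = None

# Addresses chosen so their blocks collide in the same set.
trace = [0, 64, 0, 64, 128, 0]
for ways in (1, 2, 4):
    c = SetAssociativeCache(block_size=2, cache_size=64, ways=ways)
    for a in trace:
        c.access(a)
    print(f"{ways}-way: {c.hits} hits, {c.misses} misses")
```

On this trace the 1-way (direct-mapped) configuration thrashes on the conflicting blocks, while each added way converts more of those conflict misses into hits, matching the conflict-reduction argument above.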