
Assignment I

Problem Bank 06
Assignment Description:
The assignment aims to provide a deeper understanding of cache memory by analysing its
behaviour using the cache implementation of the CPU-OS Simulator. The assignment has three
parts.
• Part I deals with Cache Memory Management with Direct Mapping
• Part II deals with Cache Memory Management with Associative Mapping
• Part III deals with Cache Memory Management with Set Associative Mapping

Submission: You will have to submit this documentation file, and it should be named
GROUP-NUMBER.pdf. For example, if your group number is 1, the file name
should be GROUP-1.pdf.

Submit the assignment by 7th June 2020, through Canvas only. Files submitted by any means
outside Canvas will not be accepted or marked.

In case of any issues, please drop an email to the course TA, Ms. Michelle Gonsalves
([email protected]).

Caution!!!

Assignments are designed separately for each group; they may look similar, but contain minor
variations you may not notice. Hence, refrain from copying or sharing documents
with others. Any evidence of such practice will attract a severe penalty.

Evaluation:

• The assignment carries 13 marks


• Grading will depend on
o Contribution of each student in the implementation of the assignment
o Plagiarism or copying will result in -13 marks
************************FILL IN THE DETAILS GIVEN BELOW**************

Assignment Set Number: Problem Bank 06

Group Name: 146

Contribution Table:

Contribution (This table should contain the list of all the students in the group. Clearly
mention each student's contribution towards the assignment. Mention "No Contribution" where
applicable.)

Sl. No. | Name (as appears in Canvas) | ID No.      | Contribution
1       | Shaksham Kapoor             | 2019HC04176 | Solved Part II and provided the explanation for Part III (a) & (b)
2       | Guru Prasad Singh Gaurav    | 2019HC04177 | Solved Part I
3       | Noushad Thottiyil Ali       | 2019HC04185 | Solved Part III

Resource for Parts I, II and III:

• Use the following link to log in to the "eLearn" portal:
o https://elearn.bits-pilani.ac.in
• Click on "My Virtual Lab – CSIS".
• Log in to the Virtual Lab using your Canvas credentials.
• In the "BITS Pilani" Virtual Lab, click on "Resources", then click on the "Computer
Organization and Software Systems" course.
o Use the resources within "LabCapsule3: Cache Memory".
Code to be used:
The following program, written in the simulator's STL language, searches for an element
(key) in an array using the binary search technique.
program BinarySearch
VAR a array(10) INTEGER
for n = 0 to 9
    a(n) = n
    writeln(a(n))
next
VAR key INTEGER
VAR first INTEGER
VAR last INTEGER
VAR middle INTEGER
VAR temp INTEGER
key = 3
writeln("Key to be searched", key)
first = 0
last = 9
middle = (first + last) / 2
while first <= last
    temp = a(middle)
    if temp = key then
        writeln("Key Found", middle)
        break
    end if
    if key > temp then
        first = middle + 1
    else
        last = middle - 1
    end if
    middle = (first + last) / 2
wend
if first > last then
    writeln("Key Not Found")
end if
end
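For readers more comfortable with a mainstream language, the following Python rendering of the same program (our own sketch, not part of the assignment material) mirrors the STL logic line for line:

# Python rendering of the STL binary search above (illustrative sketch).
a = list(range(10))                # a(n) = n for n = 0..9
for n in a:
    print(n)

key = 3
print("Key to be searched", key)
first, last = 0, 9
middle = (first + last) // 2       # integer division, as in the STL code
while first <= last:
    temp = a[middle]
    if temp == key:
        print("Key Found", middle)
        break
    if key > temp:
        first = middle + 1         # key lies in the upper half
    else:
        last = middle - 1          # key lies in the lower half
    middle = (first + last) // 2
if first > last:
    print("Key Not Found")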
General procedure to convert the given STL program into ALP (assembly language program):
• Open the CPU-OS Simulator. Go to the Advanced tab and press the Compiler button.
• Copy the above program into the Program Source window.
• Open the Compile tab and press the Compile button.
• In Assembly Code, enter the start address and press the Load in Memory button.
• The assembly language program is now available in the CPU simulator.
• Set the speed of execution to FAST.
• Open the I/O console.
• To run the program, press the RUN button.
General procedure to use the cache set-up in the CPU-OS Simulator:
• After compiling and loading the assembly language code in the CPU simulator, press the
"Cache-Pipeline" tab and select the cache type as "data cache". Press the "SHOW CACHE.."
button.
• In the newly opened cache window, choose the appropriate cache type, cache size, set
blocks, replacement algorithm and write policy.

Part I: Direct Mapped Cache

a) Execute the above program, setting the block size to 2, 4, 8, 16 and 32, for cache sizes of
8, 16 and 32. Record the observations in the following table.

Block Size | Cache Size | # Hits | # Misses | % Miss Ratio | % Hit Ratio
2          | 8          | 171    | 171      | 50.0         | 50.0
4          | 8          | 189    | 153      | 44.7         | 55.3
8          | 8          | 178    | 164      | 47.9         | 52.1
2          | 16         | 226    | 116      | 33.9         | 66.1
4          | 16         | 237    | 105      | 30.7         | 69.3
8          | 16         | 241    | 101      | 29.5         | 70.5
16         | 16         | 251    | 91       | 26.6         | 73.4
2          | 32         | 242    | 100      | 29.2         | 70.8
4          | 32         | 274    | 68       | 19.8         | 80.2
8          | 32         | 284    | 58       | 16.9         | 83.1
16         | 32         | 285    | 57       | 16.6         | 83.4
32         | 32         | 300    | 42       | 12.2         | 87.8


b) Plot a single graph of cache hit ratio vs. block size for cache sizes 8, 16
and 32. Comment on the graph obtained.

From the graph, we can see that the hit ratio increases as the cache size increases; within each cache size, larger blocks also generally raise the hit ratio.
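To make the direct-mapped lookup concrete, here is a minimal Python sketch of how such a cache decides hits and misses (our own illustration, assuming word addressing and sizes in words; it is not the simulator's implementation, so it will not reproduce the exact counts above):

# Direct-mapped cache: each memory block can live in exactly one line.
def simulate_direct_mapped(trace, cache_size, block_size):
    num_lines = cache_size // block_size      # number of cache lines
    tags = [None] * num_lines                 # stored tag per line
    hits = misses = 0
    for addr in trace:
        block = addr // block_size            # memory block number
        index = block % num_lines             # the one line this block maps to
        tag = block // num_lines              # distinguishes blocks sharing a line
        if tags[index] == tag:
            hits += 1
        else:
            misses += 1
            tags[index] = tag                 # no replacement choice in direct mapping
    return hits, misses

# Example: a sequential scan, like the array-fill loop in the program.
hits, misses = simulate_direct_mapped(range(20), cache_size=8, block_size=2)
print(hits, misses)                           # larger blocks turn neighbours into hits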
Part II: Associative Mapped Cache

a) Execute the above program, setting the block size to 2, 4, 8, 16 and 32, for cache sizes of 8,
16 and 32. Record the observations in the following table.

LRU Replacement Algorithm

Block Size | Cache Size | # Hits | # Misses | % Miss Ratio | % Hit Ratio
2          | 8          | 183    | 159      | 46.4         | 53.6
4          | 8          | 178    | 164      | 47.9         | 52.1
8          | 8          | 178    | 164      | 47.9         | 52.1
2          | 16         | 200    | 142      | 41.5         | 58.5
4          | 16         | 252    | 90       | 26.3         | 73.7
8          | 16         | 234    | 108      | 31.5         | 68.5
16         | 16         | 251    | 91       | 26.6         | 73.4
2          | 32         | 251    | 91       | 26.6         | 73.4
4          | 32         | 274    | 68       | 19.8         | 80.2
8          | 32         | 295    | 47       | 13.7         | 86.3
16         | 32         | 300    | 42       | 12.2         | 87.8
32         | 32         | 300    | 42       | 12.2         | 87.8

b) Plot a single graph of cache hit ratio vs. block size for cache sizes 8, 16 and
32. Comment on the graph obtained.

Comment: From the graph shown below, we can draw the following observations:

1. Cache size = 8: block sizes 2, 4 and 8 perform almost identically, with block size 2
slightly outperforming block sizes 4 and 8 by about 1.5 percentage points (first bar plot).

2. Cache size = 16: block sizes 4 and 16 perform best, with block size 4 slightly
outperforming block size 16 by 0.3 percentage points (second bar plot).

3. Cache size = 32: block sizes 16 and 32 perform best (third bar plot).
Summary: We can summarize the findings in the following two points:

1. To improve the performance of the cache (i.e., reduce the number of misses),
we can increase the cache size. This is evident from the plot: as the cache size
increased (8 => 16 => 32), the hit ratio also increased. The trade-off is that the
access time also grows with cache size.

2. Within each cache size, the hit ratio generally rises as the block size increases,
because a larger block takes better advantage of spatial locality. However, a larger
block size can also increase the miss penalty, since more words must be fetched on
each miss.

c) Fill up the following table for three different replacement algorithms, and state
which replacement algorithm is better and why.

Replacement Algorithm: Random

Block Size | Cache Size | # Misses | # Hits | % Hit Ratio
2          | 4          | 206      | 136    | 39.8
2          | 8          | 188      | 154    | 45.1
2          | 16         | 139      | 203    | 59.4
2          | 32         | 93       | 249    | 72.9
2          | 64         | 80       | 262    | 76.7

Replacement Algorithm: FIFO

Block Size | Cache Size | # Misses | # Hits | % Hit Ratio
2          | 4          | 210      | 132    | 38.6
2          | 8          | 160      | 182    | 53.3
2          | 16         | 144      | 198    | 57.9
2          | 32         | 95       | 247    | 72.3
2          | 64         | 85       | 257    | 75.2

Replacement Algorithm: LRU

Block Size | Cache Size | # Misses | # Hits | % Hit Ratio
2          | 4          | 210      | 132    | 38.6
2          | 8          | 159      | 183    | 53.6
2          | 16         | 142      | 200    | 58.5
2          | 32         | 91       | 251    | 73.4
2          | 64         | 85       | 257    | 75.2

From the table we can conclude that, on average, LRU has a higher hit ratio than the Random
and FIFO algorithms. This result makes sense, since LRU takes advantage of temporal locality:
binary search repeatedly compares the key against elements of an ever-smaller subarray, so the
blocks holding that subarray are touched again shortly after they were last used, and LRU keeps
exactly those recently used blocks resident, which in turn increases the hit ratio.
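The difference between the two policies can be seen in a minimal fully associative sketch (our own illustration; the trace and parameters are made up, not taken from the simulator):

from collections import OrderedDict

# Fully associative cache; insertion order of the dict doubles as age order.
def simulate(trace, num_blocks, block_size, policy):
    cache = OrderedDict()
    hits = misses = 0
    for addr in trace:
        block = addr // block_size
        if block in cache:
            hits += 1
            if policy == "LRU":
                cache.move_to_end(block)   # refresh recency; FIFO leaves order alone
        else:
            misses += 1
            if len(cache) == num_blocks:
                cache.popitem(last=False)  # evict oldest entry in the order
            cache[block] = True
    return hits, misses

# A reuse-heavy trace favours LRU, because the re-touched block stays resident.
trace = [0, 2, 4, 0, 6, 0, 8, 0]
for policy in ("LRU", "FIFO"):
    print(policy, simulate(trace, num_blocks=3, block_size=2, policy=policy))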

d) Plot the graph of cache hit ratio vs. cache size for the different replacement
algorithms. Comment on the graph obtained.
Summary: From the plot shown below, we can conclude the following points:

1. As the cache size increases, the hit ratio also increases, irrespective of the algorithm used.

2. The Random replacement algorithm performed best at cache sizes 4, 16 and 64, while
LRU performed best at cache sizes 8 and 32.
Part III: Set Associative Mapped Cache
Execute the above program by setting the following parameters:
• Number of sets (Set Blocks): 2-way
• Cache Type: Set Associative
• Replacement: LRU/FIFO/Random

a) Fill up the following table for three different replacement algorithms, and state which
replacement algorithm is better and why.

Replacement Algorithm: Random

Block Size | Cache Size | # Misses | # Hits | % Hit Ratio
2          | 4          | 191      | 151    | 44.15
2          | 8          | 125      | 217    | 63.45
2          | 16         | 83       | 259    | 75.73
2          | 32         | 54       | 288    | 84.21
2          | 64         | 43       | 299    | 87.43

Replacement Algorithm: FIFO

Block Size | Cache Size | # Misses | # Hits | % Hit Ratio
2          | 4          | 196      | 146    | 42.69
2          | 8          | 108      | 234    | 68.42
2          | 16         | 86       | 256    | 74.85
2          | 32         | 58       | 284    | 83.04
2          | 64         | 47       | 295    | 86.26

Replacement Algorithm: LRU

Block Size | Cache Size | # Misses | # Hits | % Hit Ratio
2          | 4          | 196      | 146    | 42.69
2          | 8          | 109      | 233    | 68.13
2          | 16         | 85       | 257    | 75.15
2          | 32         | 56       | 286    | 83.63
2          | 64         | 47       | 295    | 86.26

Looking at the individual cache sizes, Random achieves the fewest misses at four of the five
sizes, but its poor result at cache size 8 gives it the highest total miss count (496, against
495 for FIFO and 493 for LRU). On average, therefore, LRU slightly outperforms the other two
algorithms in terms of hit ratio.

b) Plot the graph of cache hit ratio vs. cache size for the different replacement
algorithms. Comment on the graph obtained.
Summary: From the plot shown below, we can conclude the following points:

1. As the cache size increases, the hit ratio also increases, irrespective of the algorithm used.
2. The Random replacement algorithm performed best at cache sizes 4, 16, 32 and 64,
while FIFO performed best at cache size 8.
c) Fill in the following table and analyse the behaviour of the set-associative cache. Which
configuration is better and why?
Replacement Algorithm: LRU

Block Size, Cache Size | Set Blocks | # Misses | # Hits | % Hit Ratio
2, 64                  | 2-way      | 47       | 295    | 86.26
2, 64                  | 4-way      | 45       | 297    | 86.84
2, 64                  | 8-way      | 47       | 297    | 86.84

An N-way set-associative cache provides N blocks in each set, giving a high degree of
conflict reduction, which in turn lowers the miss rate. Although it reduces the miss rate, a
cache with higher associativity is slower and more expensive to build.
By closely looking at the hit ratios, we get similar figures for the 2-way, 4-way and 8-way
associations. So we can deduce that a 2-way set-associative cache would be the more
economical option for our execution scenario.
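Operationally, "N-way" means the index selects a set rather than a single line, and replacement happens only within that set. A minimal Python sketch with LRU per set (our own illustration with assumed word addressing; not the simulator's code):

from collections import OrderedDict

def simulate_set_assoc(trace, cache_size, block_size, ways):
    num_sets = cache_size // (block_size * ways)
    sets = [OrderedDict() for _ in range(num_sets)]  # one LRU-ordered dict per set
    hits = misses = 0
    for addr in trace:
        block = addr // block_size
        s = sets[block % num_sets]                   # set index; the block number acts as the tag
        if block in s:
            hits += 1
            s.move_to_end(block)                     # mark most recently used
        else:
            misses += 1
            if len(s) == ways:
                s.popitem(last=False)                # evict the LRU block of this set only
            s[block] = True
    return hits, misses

# With the cache size fixed, raising 'ways' trades fewer sets for more choice per set,
# which is exactly the 2-way / 4-way / 8-way comparison in the table above.
print(simulate_set_assoc(range(64), cache_size=64, block_size=2, ways=2))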
