
Control Questions - lw4

This document discusses cache memory organization and line replacement algorithms in computer architecture. It explains that cache memory sits in the first level of the memory hierarchy. The document describes Princeton and Harvard cache memory organizations, and outlines key cache parameters like size, lines, and associativity that define its "geometry". It then discusses the three main line replacement algorithms - LRU, FIFO, and Random - and how each is implemented. The purpose of line replacement is to make space for new data blocks being loaded from main memory into the cache.


Simulation and Investigation of Data Replacement Algorithms in Cache

Memory

Computer Architecture Laboratory Work Number 4

Control Questions

Student: Mantescu Mihai
5.1. Draw a diagram of the memory hierarchy. On which level is cache memory?

Cache memory is located on level 1 of the memory hierarchy: the CPU registers sit at the top, followed by cache memory (L1, then L2), main memory, and secondary storage at the bottom. Each level down is larger but slower than the one above it.

5.2. Explain cache memory organization according to Princeton architecture.

The Princeton (von Neumann) architecture stipulates that instructions and data are stored together in a common cache section. In the SimpleScalar simulation environment such unified L1 and L2 caches are denoted ul1 and ul2 (unified cache).
5.3. Explain cache memory organization according to Harvard architecture.

According to the Harvard architecture, instructions and data are stored in separate sections (an instruction cache and a data cache). In the SimpleScalar simulation environment these sections are realized as separate cache memories (for example, the L1 instruction cache il1, or the L2 data cache dl2).

5.4. Which cache memory parameters form its "geometry"?

The cache "geometry" is defined by the number of lines (sets), the line size, and the degree of associativity; the total cache size is calculated by multiplying the number of lines by the line size and the associativity degree. To model separate data and instruction caches, one simply selects the needed sim-cache simulator parameters and specifies the cache size. When a unified cache model is designed, the data cache parameters are selected as usual, the memory is marked as unified with the needed cache size, and next to the instruction cache parameter the data cache abbreviation is written instead (dlx, where x is the cache memory level, 1 or 2).

Parameters:

-cache:dl1 configuration parameters of the first level (L1) data cache


-cache:dl2 configuration parameters of the second level (L2) data cache
-cache:il1 configuration parameters of the first level (L1) instruction cache
-cache:il2 configuration parameters of the second level (L2) instruction cache

After setting up these parameters we know whether we are working with a unified cache (Princeton) or with separate cache memories (Harvard).
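The size calculation described above can be sketched as follows. This is a minimal illustration, not part of the lab assignment; the parameter names mirror the fields of a SimpleScalar cache configuration (number of sets, block size, associativity), and the concrete values are assumptions chosen only for the example.

```python
def cache_size(nsets: int, bsize: int, assoc: int) -> int:
    """Total cache capacity in bytes: number of lines (sets)
    multiplied by line (block) size and by the associativity degree."""
    return nsets * bsize * assoc

# A hypothetical dl1 cache with 128 sets, 32-byte lines, 4-way associative:
print(cache_size(128, 32, 4))  # prints 16384, i.e. a 16 KB cache
```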

5.5. Which line replacement algorithms can be used in cache memory? Discuss them.

Three algorithms can be used to replace cache memory lines:

- LRU (Least Recently Used): replaces the block in the set that has been in the cache longest with no reference to it. Its advantage over the other algorithms is most noticeable for smaller caches.
- FIFO (First In, First Out): replaces the block that has been in the cache longest; it is easily implemented as a round-robin or circular buffer technique.
- Random: picks a victim line at random; the choice is not based on cache usage at all.

5.6. Why should we replace a cache memory line's content?

While the computer operates, relatively large blocks (lines) of information are transferred from main memory to cache memory, so over time the cache fills up. When a new block of data must then be loaded from main memory into the cache, there is no space left for it, and some line in the cache must be "sacrificed", i.e. replaced by the new one taken from main memory.

5.7. How is the LRU line replacement algorithm realized?

For a 2-way set-associative cache, LRU is easily implemented. Each slot includes a USE bit. When a slot is referenced, its USE bit is set to 1 and the USE bit of the other slot in that set is set to 0. When a block is to be read into the set, the slot whose USE bit is 0 is used. Since we assume that more recently used memory locations are more likely to be referenced again, LRU should give the best hit ratio. Because true LRU would require many additional bits in the tag catalog, a pseudo-LRU method is used instead: only one additional bit is associated with each line, and all bits are periodically cleared. On a reference to a line, its bit is set to 1. If, while choosing a line to replace, any candidate line is found with the value 0, that line is considered "least recently used".
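The pseudo-LRU scheme above can be sketched for one cache set as follows. The class name and structure are hypothetical, chosen only to illustrate the one-bit-per-line idea; a real cache would store tags and data as well.

```python
class PseudoLRUSet:
    """One cache set with a single 'use' bit per line (illustrative sketch)."""

    def __init__(self, assoc: int):
        self.use = [0] * assoc          # one additional bit per line

    def reference(self, way: int):
        self.use[way] = 1               # mark the line as recently used

    def clear_bits(self):
        self.use = [0] * len(self.use)  # the periodic clearing step

    def victim(self) -> int:
        # any line whose bit is 0 counts as "least recently used"
        for way, bit in enumerate(self.use):
            if bit == 0:
                return way
        return 0                        # all bits set: fall back to way 0

s = PseudoLRUSet(2)
s.reference(0)
print(s.victim())  # prints 1: way 1's use bit is still 0
```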

5.8. How is the FIFO line replacement algorithm realized?

When the FIFO algorithm is used, the cache line that has been in cache memory the longest is replaced, regardless of how often or how recently it was referenced. A round-robin pointer per set is enough to track which line is oldest.
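The round-robin (circular buffer) implementation mentioned in 5.5 can be sketched like this; the class name is hypothetical and only the replacement pointer is modeled:

```python
class FIFOSet:
    """Round-robin replacement pointer for one cache set (illustrative sketch)."""

    def __init__(self, assoc: int):
        self.lines = [None] * assoc
        self.next = 0                   # points at the oldest line

    def replace(self, tag) -> int:
        victim = self.next
        self.lines[victim] = tag        # overwrite the oldest line
        self.next = (self.next + 1) % len(self.lines)
        return victim

s = FIFOSet(4)
print([s.replace(t) for t in "ABCDE"])  # prints [0, 1, 2, 3, 0]: the pointer wraps
```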
5.9. How is the Random line replacement algorithm realized?

A slot is simply picked at random from among the candidate slots in the set. Simulations have shown that random replacement provides only slightly inferior performance to the algorithms based on cache usage.
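Random replacement needs no per-line state at all, which a sketch makes obvious; the function below is a hypothetical illustration using a seeded generator so the run is reproducible:

```python
import random

def random_victim(assoc: int, rng: random.Random) -> int:
    """Pick a victim way uniformly at random; usage history is ignored."""
    return rng.randrange(assoc)

rng = random.Random(0)                  # fixed seed for reproducibility
ways = [random_victim(4, rng) for _ in range(8)]
print(all(0 <= w < 4 for w in ways))    # prints True: every pick is a valid way
```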
