Control Questions - lw4
Memory
Control Questions
Student: Mantescu Mihai
5.1. Draw a diagram of the memory hierarchy. In which level is cache memory?

5.2. Explain cache memory organization according to Princeton architecture.
According to the Princeton architecture, both instructions and data are stored together in a common cache section. In this case the L1 and L2 cache memories in the SimpleScalar simulation environment are denoted ul1 and ul2, respectively (unified cache).
5.3. Explain cache memory organization according to Harvard architecture.
According to the Harvard architecture, instructions and data are stored in separate sections (an instruction cache and a data cache). In the SimpleScalar simulation environment these sections are realized as separate cache memories (for example, the L1 instruction cache is denoted il1, and the L2 data cache dl2).
To design a model with separate data and instruction caches, the required parameter of the sim-cache simulator is simply selected and the cache memory size is specified; the size is calculated by multiplying the number of lines by the line size and the degree of associativity.
When a unified cache memory model is designed, alongside the selected data cache parameters it must be denoted that this memory is unified, together with the required cache size, while for the instruction cache parameter the data cache abbreviation is written instead (dlx, where x is the cache memory level, 1 or 2).
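The size calculation above (lines × line size × associativity) can be sketched as follows. The configuration string in the comment uses SimpleScalar's cache parameter format (name:nsets:bsize:assoc:repl); the specific values shown are an illustrative example, not a claim about a particular experiment.

```python
def cache_size(nsets, line_size, assoc):
    """Total cache capacity in bytes: number of lines (sets)
    times line size times degree of associativity."""
    return nsets * line_size * assoc

# Example: a configuration string such as dl1:128:32:4:l describes
# 128 sets, 32-byte lines, 4-way set associative, LRU replacement.
print(cache_size(128, 32, 4))  # 16384 bytes = 16 KB
```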
5.5. Which line-replacement algorithms can be used in cache memory? Discuss them.
When the computer operates, relatively large blocks (lines) of information are transferred from main memory to the cache, so over time the cache fills up. When a new block of data must be loaded from main memory and there is no free space for it, some line in the cache must be "sacrificed", i.e. replaced by the new one taken from main memory. The main replacement algorithms are:
- LRU (Least Recently Used). For a 2-way set-associative cache this is easily implemented: each slot includes a USE bit. When a slot is referenced, its USE bit is set to 1 and the USE bit of the other slot in that set is set to 0. When a block is to be read into the set, the slot whose USE bit is 0 is replaced. Since we assume that more recently used memory locations are more likely to be referenced again, LRU should give the best hit ratio. Because true LRU would require many additional bits in the tag directory, a pseudo-LRU method is often used: only one additional bit is associated with each line, and it is periodically cleared. On a reference to a line, its bit is set to 1. When choosing a line to replace, a candidate line whose bit is 0 is considered "least recently used".
- FIFO (First In, First Out). The cache line that has been in the cache memory the longest is replaced.
- Random. The victim line is chosen at random; the technique is not based on cache usage.
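The 2-way USE-bit scheme described above can be sketched as follows (a minimal illustration with hypothetical names, not the SimpleScalar implementation):

```python
class TwoWaySet:
    """One set of a 2-way set-associative cache with USE-bit LRU."""

    def __init__(self):
        self.tags = [None, None]  # the two slots of the set
        self.use = [0, 0]         # one USE bit per slot

    def access(self, tag):
        """Return True on a hit; on a miss, replace the slot whose
        USE bit is 0 (the not-recently-used slot)."""
        if tag in self.tags:
            slot = self.tags.index(tag)
            hit = True
        else:
            slot = self.use.index(0)  # victim slot
            self.tags[slot] = tag
            hit = False
        # Mark this slot as recently used, the other as not.
        self.use[slot] = 1
        self.use[1 - slot] = 0
        return hit
```

For example, after accessing A, B, A in sequence, a subsequent access to C evicts B, because A's slot holds the USE bit.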
5.9. How is the Random line-replacement algorithm realized?
A slot is simply picked at random from among the candidate slots. Simulations have shown that random replacement provides only slightly inferior performance to algorithms based on cache usage.
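A minimal sketch of such a selection (assuming the candidate slots of a set are identified by index; the function name is illustrative):

```python
import random

def pick_victim(candidate_slots):
    """Random replacement: choose a victim slot with no regard
    to past cache usage."""
    return random.choice(candidate_slots)

# Every slot in a 4-way set is equally likely to be evicted:
victim = pick_victim([0, 1, 2, 3])
```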