CA Lecture 4
Computer Architecture (Computer Science, Semester 7), Lecture No. (4)
Lecturer: Ibtisam Abkar
Cache Memory
Contents of Lecture:
Elements of Cache Design
Cache Addresses
Cache Size
Mapping Function
Replacement Algorithms
Summary Exercise (1)
Cache Addresses
A logical (virtual) cache stores data using logical/virtual addresses.
Virtual memory is a facility that allows programs to address memory from a
logical point of view, without regard to the amount of main memory
physically available.
A memory management unit (MMU) translates each virtual address into a
physical address in main memory.
A physical cache stores data using main memory physical addresses.
One advantage of the logical cache is faster cache access than with a
physical cache, because the cache can respond before the MMU performs the address
translation.
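As an illustration of that difference in lookup order, here is a minimal Python sketch; the page size, page table, Cache class, and sample values are toy assumptions, not details from the lecture:

```python
# Toy model of where the MMU sits relative to the cache lookup.
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3}                  # virtual page number -> physical frame (toy data)
main_memory = {}                           # physical address -> data (starts empty)

def main_memory_read(paddr):
    return main_memory.get(paddr, 0)       # default contents of 0 for the toy model

def mmu_translate(vaddr):
    """Translate a virtual address to a physical address via the page table."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[vpn] * PAGE_SIZE + offset

class Cache:
    def __init__(self):
        self.lines = {}                    # lookup key (an address) -> cached data

    def read(self, addr, miss_handler):
        if addr not in self.lines:         # miss: fetch from the next level
            self.lines[addr] = miss_handler(addr)
        return self.lines[addr]

logical_cache = Cache()                    # indexed with virtual addresses
physical_cache = Cache()                   # indexed with physical addresses

def read_via_logical_cache(vaddr):
    # Logical cache: look up with the virtual address first;
    # the MMU is consulted only on a miss.
    return logical_cache.read(vaddr, lambda va: main_memory_read(mmu_translate(va)))

def read_via_physical_cache(vaddr):
    # Physical cache: every access pays for MMU translation before the lookup.
    paddr = mmu_translate(vaddr)
    return physical_cache.read(paddr, main_memory_read)

print(read_via_logical_cache(0x10), read_via_physical_cache(0x10))
```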
Cache Size
We would like the size of the cache to be small enough so that the overall average
cost per bit is close to that of main memory alone and large enough so that the overall
average access time is close to that of the cache alone.
The available chip and board area also limits cache size.
Because the performance of the cache is very sensitive to the nature of the workload,
it is impossible to arrive at a single “optimum” cache size.
Table 4.3 lists the cache sizes of some current and past processors.
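The two "close to" goals above can be made concrete with the usual two-level formulas for average cost per bit and average access time. The costs, sizes, times, and hit ratio below are made-up illustrative values, not figures from the lecture:

```python
# Two-level memory (cache + main memory); all numbers are illustrative.
c1, s1, t1 = 1.00, 64 * 1024 * 8, 2e-9            # cache: costly per bit, small, fast
c2, s2, t2 = 0.01, 16 * 1024 * 1024 * 8, 60e-9    # main memory: cheap, large, slow
hit_ratio = 0.95                                  # fraction of accesses satisfied by the cache

# Overall average cost per bit of the two-level memory.
avg_cost = (c1 * s1 + c2 * s2) / (s1 + s2)

# Overall average access time (a miss pays a cache check plus a memory access).
avg_time = hit_ratio * t1 + (1 - hit_ratio) * (t1 + t2)

print(f"average cost/bit ~ {avg_cost:.4f} (main memory alone: {c2})")
print(f"average access   ~ {avg_time * 1e9:.2f} ns (cache alone: {t1 * 1e9:.0f} ns)")
```

With a small cache and a high hit ratio, the average cost per bit stays close to that of main memory while the average access time stays close to that of the cache, which is exactly the trade-off described above.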
Mapping Function
Because there are fewer cache lines than main memory blocks, an algorithm is
needed for mapping main memory blocks into cache lines.
Example:
For all three cases, the example includes the following elements (a quick arithmetic check follows this list):
Cache of 64 KBytes
Cache block of 4 bytes (the unit of transfer between cache and main memory),
i.e. the cache is 16K (2^14) lines of 4 bytes (64K/4 = 2^14)
16 MBytes main memory
24-bit address (2^24 = 16M)
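The arithmetic check, straight from the parameters above:

```python
# Arithmetic check of the example parameters.
cache_size  = 64 * 1024              # 64 KByte cache
block_size  = 4                      # 4-byte blocks / lines
memory_size = 16 * 1024 * 1024       # 16 MByte main memory

num_lines  = cache_size // block_size
num_blocks = memory_size // block_size

assert num_lines  == 2**14           # 16K cache lines
assert num_blocks == 2**22           # 4M main memory blocks
assert memory_size == 2**24          # a 24-bit address covers all of main memory

print(f"{num_lines} lines (2^14), {num_blocks} blocks (2^22), 24-bit addresses")
```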
Direct Mapping:
The simplest technique, known as direct mapping, maps each block of main memory into only one possible cache line,
i.e. if a block is in the cache, it must be in one specific place.
The mapping is i = j modulo m, where i = cache line number, j = main memory block number, and m = number of lines in the cache.
The following figure (Figure 4.8a) shows the mapping for the first m blocks of main memory:
each block of main memory maps into one unique line of the cache.
With the example parameters, the address is divided as follows:
24-bit address
2-bit word identifier (4-byte block)
22-bit block identifier: 8-bit tag (= 22 - 14) and 14-bit line number
No two blocks that map into the same line have the same Tag field
Check the contents of the cache by finding the line and checking the Tag
Example:
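A minimal sketch of the direct-mapped address split for these parameters; the sample address is an arbitrary value chosen for illustration, not one taken from the lecture:

```python
# Direct mapping with the example parameters:
#   24-bit address = 8-bit tag | 14-bit line | 2-bit word (byte within the block)
TAG_BITS, LINE_BITS, WORD_BITS = 8, 14, 2

def split_direct(addr):
    """Split a 24-bit main-memory address into (tag, line, word) fields."""
    word = addr & ((1 << WORD_BITS) - 1)
    line = (addr >> WORD_BITS) & ((1 << LINE_BITS) - 1)
    tag  = addr >> (WORD_BITS + LINE_BITS)
    return tag, line, word

addr = 0x16CA47                      # arbitrary sample address
tag, line, word = split_direct(addr)
print(f"address {addr:06X}: tag={tag:02X} line={line:04X} word={word}")
# A hit means cache line `line` currently holds a block whose stored tag equals `tag`.
```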
Associative Mapping:
Associative mapping overcomes the disadvantage of direct mapping by permitting each main
memory block to be loaded into any line of the cache.
The following figure (Figure 4.8b) shows associative mapping.
In this case, the cache control logic interprets a memory address simply as a Tag and a Word
field.
The Tag field uniquely identifies a block of main memory (with the example parameters, a 22-bit tag and a 2-bit word field).
Cache searching gets expensive: to determine whether a block is in the cache, the tag of every line must be examined for a match.
The least significant 2 bits of the address identify which byte is required from the 4-byte (32-bit) data
block.
Example:
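Again a minimal sketch as a worked example; the cache contents and the sample address below are made-up assumptions for illustration:

```python
# Fully associative mapping with the example parameters:
#   24-bit address = 22-bit tag | 2-bit word; a block may occupy any cache line.
WORD_BITS = 2

def split_associative(addr):
    """Split a 24-bit address into (tag, word) fields."""
    return addr >> WORD_BITS, addr & ((1 << WORD_BITS) - 1)

# Cache modelled as line number -> (stored tag, 4-byte block); contents are made up.
cache = {
    0x0000: (0x058CE6, bytes.fromhex("FEDCBA98")),
    0x3FFF: (0x3FFFFF, bytes.fromhex("11223344")),
}

def lookup(addr):
    tag, word = split_associative(addr)
    # "Cache searching gets expensive": the tag of every line must be compared.
    for line, (stored_tag, block) in cache.items():
        if stored_tag == tag:
            return block[word]       # hit: the word field selects the byte in the block
    return None                      # miss

print(hex(lookup(0x16339A)))         # tag 058CE6, word 2 -> 0xBA in this toy cache
```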
Replacement Algorithms:
(1) Direct mapping:
No choice: each block maps to only one line, so that line is replaced.
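The same point in code, a sketch under the running 16K-line assumption: with direct mapping the victim line is fixed by the block number, so "replacement" is just an overwrite.

```python
NUM_LINES = 2**14                    # 16K lines, as in the running example

def install_block(cache, block_number, tag, data):
    """Direct mapping: block j can only go into line j mod m, evicting whatever is there."""
    line = block_number % NUM_LINES  # the victim line is fixed; no replacement choice
    cache[line] = (tag, data)        # replace that line unconditionally

cache = {}
install_block(cache, block_number=0x4ABC, tag=0x01, data=b"\x11\x22\x33\x44")
```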
Summary Exercise (1):
Summarize the following elements of cache design:
Write Policy
  Write through
  Write back
Line Size
Number of Caches
  Single or two level
  Unified or split