Session 6
William Stallings
Computer Organization and Architecture
10th Edition
My notes in red (please check for updates)
© 2016 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
Memory access takes more than 100 clock cycles (DDR delivers data in about 50 clock cycles; DDR3 and DDR4 are much faster), while the processor completes an instruction in about 1 clock cycle. In the early 1980s the speeds of the two were about the same. So, accessing main memory today is very expensive.
+
Chapter 4
Cache Memory
■ When the CPU reads, a cache hit speeds up the operation and a cache miss slows it down.
■ Capacity
■ Memory is typically expressed in terms of bytes
■ Unit of transfer
■ For internal memory the unit of transfer is equal to the number of electrical lines into and out of the memory module. If data is transferred as a word or a block, it must be further subdivided into bytes by assigning offsets.
About 4 cents/GB
■ Disk cache
■ A portion of main memory can be used as a buffer to hold data temporarily
that is to be read out to disk
■ A few large transfers of data can be used instead of many small transfers of
data
■ Data can be retrieved rapidly from the software cache rather than slowly
from the disk
■ SRAM: fast; six transistors per cell; higher power. Does not need to be refreshed and retains its contents as long as power is provided.
Therefore the two least significant bits will be used to keep track of the cache line, and the two most significant bits will be used to keep the tag.
Given any address we will know which cache line will hold the value, and at that location we can inspect the tag to see whether the right address is represented. For example, the 4-bit address 0100 falls in line 00 with tag 01.
This is the principle behind direct mapping, which we will discuss later.
Cache Memory
[Figure: toy direct-mapped cache example: the 4-bit address 0100 holds value 4 (line 00, tag 01).]
+
Cache Mapping
■ Direct mapping – Each block from memory can go to a specific line in the cache. Thus, different blocks will map to the same line and only one of them can be placed there at any given time.
■ Fully associative mapping – Any block can go into any cache line.
■ Set-associative mapping – Certain blocks from memory can go into a specific set of lines (anywhere within the set).
Given the 32-bit address 1010 1100 1011 0011 0101 1000 0101 1110, split 17/9/6:
tag (17 bits)   = 1010 1100 1011 0011 0
index (9 bits)  = 1011 0000 1
offset (6 bits) = 01 1110
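As a minimal sketch (not from the slides), the 17/9/6 split above can be computed with shifts and masks; the field widths are taken from this example, and the address constant is simply the hex form of the binary address shown:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 6
#define INDEX_BITS  9                    /* 2^9 = 512 cache lines */

int main(void) {
    uint32_t addr = 0xACB3585E;          /* 1010 1100 1011 0011 0101 1000 0101 1110 */

    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);                 /* low 6 bits  */
    uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1); /* next 9 bits */
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);               /* top 17 bits */

    printf("tag=%" PRIx32 " index=%" PRIx32 " offset=%" PRIx32 "\n", tag, index, offset);
    return 0;
}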
+
Operation of Cache
■ The CPU does not know the cache exists. It simply issues the address of the data to be loaded.
■ The cache intercepts the request, and the tag, index, and offset are extracted from the memory address as on the previous slide.
■ The cache uses the index to find the line where the data from that address ‘should’ be located. It retrieves that line along with its tag and compares the tags. If the tag in the issued address and the tag retrieved from the line are the same, we have a hit; if they differ, we have a miss. On a hit, use the offset to extract the right byte and send it to the CPU.
■ On a miss, evict that line and bring the appropriate block into it (see the sketch after this list).
If the same cache were organized as 2-way set-associative (512 lines forming 256 sets), the index would shrink to 8 bits and the tag would grow to 18 bits.
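A minimal sketch of the lookup just described, assuming a direct-mapped cache with 512 lines of 64 bytes (the 17/9/6 split above); all names are illustrative, not from the book:

#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES 512

struct cache_line {
    bool     valid;                      /* line currently holds a block */
    uint32_t tag;                        /* top 17 address bits */
    uint8_t  data[64];                   /* one 2^6-byte block */
};

static struct cache_line cache[NUM_LINES];

/* Returns true (hit) and the byte via *out; on false (miss) the caller
 * must evict this line and refill it from main memory. */
bool cache_read(uint32_t addr, uint8_t *out) {
    uint32_t offset = addr & 0x3F;         /* low 6 bits */
    uint32_t index  = (addr >> 6) & 0x1FF; /* next 9 bits pick the line */
    uint32_t tag    = addr >> 15;          /* remaining 17 bits */

    struct cache_line *line = &cache[index];
    if (line->valid && line->tag == tag) {
        *out = line->data[offset];         /* hit: extract the right byte */
        return true;
    }
    return false;                          /* miss */
}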
+
What component manages each
level of the memory hierarchy?
■ Registers – manipulated by the CPU according to the instructions produced by the compiler or programmer.
■ Virtual memory
■ Facility that allows programs to address memory from a logical point of
view, without regard to the amount of main memory physically available
■ When used, the address fields of machine instructions contain virtual
addresses
■ For reads from and writes to main memory, a hardware memory management unit (MMU) translates each virtual address into a physical address in main memory
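As a hedged illustration of what the MMU does, here is a single-level page table with 4 KiB pages; real MMUs use multi-level tables plus a TLB, and every name below is made up for the sketch:

#include <stdint.h>

#define PAGE_BITS 12                     /* 4 KiB pages */
#define PRESENT   0x80000000u            /* entry flag: page is in memory */

static uint32_t page_table[1u << 20];    /* one entry per virtual page */

/* Translate a 32-bit virtual address to a physical address. */
uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_BITS;              /* virtual page number */
    uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1); /* offset is unchanged */
    uint32_t entry  = page_table[vpn];

    /* if (!(entry & PRESENT)) a page fault would be raised here */
    uint32_t frame = entry & ~PRESENT;                 /* physical frame number */
    return (frame << PAGE_BITS) | offset;
}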
[Table: cache sizes of some processors. Note a: two values separated by a slash refer to instruction and data caches. Note b: both caches are instruction only; no data caches.]
■ Number of sets = v = 2^d, where d is the number of set (index) bits; e.g., d = 8 gives v = 256 sets
■ Once the cache has been filled, when a new block is brought into the
cache, one of the existing blocks must be replaced
■ For direct mapping there is only one possible line for any particular
block and no choice is possible
■ First-in-first-out (FIFO)
■ Replace that block in the set that has been in the cache longest
■ Easily implemented as a round-robin or circular buffer technique
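A minimal sketch of the circular-buffer idea in the last bullet, assuming a 4-way set-associative cache (the sizes and names are illustrative):

#include <stdint.h>

#define NUM_SETS 256
#define WAYS     4

static uint8_t next_victim[NUM_SETS];    /* one round-robin counter per set */

/* Pick the way to replace in `set`, then advance the counter so the
 * oldest-filled line is always the next victim (FIFO order). */
unsigned fifo_pick_victim(unsigned set) {
    unsigned way = next_victim[set];
    next_victim[set] = (uint8_t)((way + 1) % WAYS);
    return way;
}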
If at least one write operation has been performed on a word in that line of the cache, then main memory must be updated by writing the line of cache out to the block of memory before bringing in the new block.
A more complex problem occurs when multiple processors are attached to the same bus and each processor has its own local cache - if a word is altered in one cache it could conceivably invalidate a word in other caches.
■ Write back (see the sketch after this list)
■ Minimizes memory writes
■ Updates are made only in the cache
■ Portions of main memory are invalid and hence accesses by I/O modules can be
allowed only through the cache
■ This makes for complex circuitry and a potential bottleneck
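A sketch of the dirty-bit mechanics behind write back, reusing the illustrative cache-line layout from earlier; memory_write_block() is a hypothetical stub, not a real API:

#include <stdbool.h>
#include <stdint.h>

struct line {
    bool     valid, dirty;
    uint32_t tag;
    uint8_t  data[64];
};

/* Hypothetical stub: a real system would write the block back to DRAM. */
static void memory_write_block(uint32_t tag, uint32_t index, const uint8_t *data) {
    (void)tag; (void)index; (void)data;
}

/* Write hit: update only the cache and mark the line dirty. */
void cache_write_byte(struct line *l, uint32_t offset, uint8_t value) {
    l->data[offset] = value;
    l->dirty = true;                     /* main memory is now stale */
}

/* Eviction: flush the line first if it was modified. */
void evict(struct line *l, uint32_t index) {
    if (l->valid && l->dirty)
        memory_write_block(l->tag, index, l->data);
    l->valid = l->dirty = false;
}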
■ The on-chip cache reduces the processor’s external bus activity, shortens execution time, and increases overall system performance
■ When the requested instruction or data is found in the on-chip cache, the bus access is
eliminated
■ On-chip cache accesses will complete appreciably faster than would even zero-wait state
bus cycles
■ During this period the bus is free to support other transfers
■ Two-level cache:
■ Internal cache designated as level 1 (L1)
■ External cache designated as level 2 (L2)
■ Potential savings due to the use of an L2 cache depend on the hit rates in both the L1 and L2 caches (see the worked example after this list)
■ The use of multilevel caches complicates all of the design issues related to caches,
including size, replacement algorithm, and write policy
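As a worked illustration with assumed numbers (not from the book): if an L1 hit costs 1 cycle with a 95% hit rate, an L2 hit costs 10 cycles with a 90% hit rate on L1 misses, and main memory costs 100 cycles, the average access time is 1 + 0.05 × (10 + 0.10 × 100) = 2 cycles, versus 1 + 0.05 × 100 = 6 cycles with no L2. Improving either hit rate changes the result, which is why the savings depend on both.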
+
Unified Versus Split Caches
■ Has become common to split cache:
■ One dedicated to instructions
■ One dedicated to data
■ Both exist at the same level, typically as two L1 caches
■ Trend is toward split caches at the L1 level and unified caches for higher levels
Intel Cache Evolution