SUMMARY
MEMORY SYSTEM ORGANIZATION
- Memory technology
- Hierarchical memory systems
- Characteristics of the storage devices
- Main memory organization: SRAM, DRAM
- Cache memory
MEMORY TECHNOLOGY
Every computer system contains a variety of devices to store instructions and data.
The storage devices plus the algorithms (implemented in HW or SW) needed to control or manage the stored information constitute the memory system of the computer.
Desirable: processors should have immediate and uninterrupted access to memory, so that information can be transferred between a processor and memory at maximum speed.
DESIGNING A MEMORY SYSTEM
Goal: to provide adequate storage capacity with an acceptable level of performance at a reasonable cost
How: use a number of different memory devices with different cost/performance ratios, organized to provide a high average performance at a low average cost per bit
- The individual memories form a hierarchy of storage devices
- Memories that operate at speeds comparable to processor speeds are relatively costly
- It is not feasible to employ a single memory using just one type of technology; the stored information is therefore distributed over a variety of memory units with very different physical characteristics
- The development of automatic storage-allocation methods to make more efficient use of the available memory space
- The development of virtual-memory concepts to free the ordinary user from memory management, and to make programs largely independent of the physical memory configurations used
- The design of communication links to the memory system so that all processors connected to it can operate at or near their maximum rates
  - increasing the effective memory-processor bandwidth
  - providing protection mechanisms
MAIN COMPONENTS OF THE MEMORY
BLOCK DIAGRAM OF A MEMORY HIERARCHY (main components)
[Figure: the CPU accesses main memory directly; auxiliary (external) memory is reached indirectly, through I/O]
1. Internal processor memory
- A small set of high-speed registers used for temporary storage of instructions and data
2. Main memory (or primary memory)
- A relatively large and fast memory used for program and data storage during computer operation
- Locations in main memory can be accessed directly and rapidly by the CPU instruction set
- Semiconductor technology
3. Secondary memory (or auxiliary memory)
- Large memory capacity, but much slower than main memory
- Stores system programs, large data files, and the like, which are not continually required by the CPU
- Information in secondary storage is accessed indirectly via input-output programs
- Representative technologies used for secondary memory are magnetic and optical disks
[Figure: memory hierarchy pyramid — registers, cache memory, main memory, auxiliary memory; capacity S increases and specific cost c decreases while access time tA increases toward the bottom of the pyramid]
LOCALITY OF REFERENCES PRINCIPLES
Programs commonly localize their memory references
Only a small portion of the memory is likely to be accessed at a given time. This characteristic is not a natural law; it may be violated.
But it is a behavior pattern exhibited by many programs at most times, and exploiting this behavior is key to designing an effective memory hierarchy.
TWO TYPES OF LOCALITY
Temporal Locality (Locality in Time)
The location of a memory reference is likely to be the same as another recent reference. Programs tend to reference the same memory locations again at a future point in time. This is due to loops and iteration: programs spend a lot of time in one section of code.
Spatial Locality (Locality in Space)
The location of a memory reference is likely to be near another recent reference Programs tend to reference memory locations that are near other recently-referenced memory locations
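A minimal C sketch (not from the slides; the array size and loop bounds are assumed for illustration) showing both kinds of locality in an ordinary loop:

```c
#include <stdio.h>

#define N 1024

int main(void) {
    static int a[N];
    long total = 0;        /* `total` and `i` are reused every iteration: temporal locality */

    for (int i = 0; i < N; i++) {
        a[i] = i;
    }
    for (int i = 0; i < N; i++) {
        total += a[i];     /* consecutive iterations touch consecutive addresses:
                              spatial locality, so one cached block of a[]
                              serves several accesses */
    }
    printf("total = %ld\n", total);
    return 0;
}
```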
TWO TYPES OF LOCALITY
Exploit Temporal Locality
- Keep recently accessed data close to the processor
Exploit Spatial Locality
- Move contiguous blocks of data between levels

MEMORY HIERARCHY PRINCIPLES
Increasing distance from the CPU also means increasing access time
[Figure: the CPU sits at Memory Level 1; blocks A and B are copied between Memory Level i and Memory Level i+1]
At any given time, data is copied only between two adjacent levels
Upper Level (the one closer to the processor)
- Smaller, faster, and uses more expensive technology
Lower Level (the one further from the processor)
- Bigger, slower, and uses less expensive technology
MEMORY HIERARCHY PRINCIPLES
The multi-level read-write memory system must satisfy two properties:
Inclusion property: all information located on an upper memory level is also stored on a lower memory level (ILi represents the information stored in memory level i):
IL1 ⊆ IL2 ⊆ ... ⊆ ILn
HIERARCHY TERMINOLOGY
Hit: the accessed data is found in the upper level (for example, Block A in level i)
- Hit Rate = fraction of accesses found in the upper level
- Hit Time = time to access the upper level = memory access time + time to determine hit/miss
Miss: the accessed data is found only in the lower level (for example, Block B in level i+1)
- The processor waits until the data is fetched, then restarts the access
- Miss Rate = 1 - Hit Rate
- Miss Penalty = time to get the block from the lower level + time to replace it in the upper level + time to deliver the block to the processor
Coherence property: information stored at a specific address in level i must be the same on all lower memory levels
Maintaining coherence:
- propagate the modified value to the lower levels ("write-through")
- update information on the lower levels when it is replaced from level i ("write-back")
Hit Time << Miss Penalty
Memory-Device Functional Characteristics
1. Storage capacity (S). Expressed in multiples of bits or bytes.
2^10 bits = 1024 bits = 1 Kb; 1024 Kb = 1 Mb; 1024 Mb = 1 Gb; 1024 Gb = 1 Tb; 8 bits = 1 Byte = 1 B
Memory-Device Characteristics
2. Cost. The price includes not only the cost of the information storage cells themselves but also the cost of the peripheral equipment or access circuitry essential to the operation of the memory. Let C be the price in dollars of a complete memory system with S bits of storage capacity; the specific cost c of the memory is:
c = C / S [dollars/bit]
3. Access time
- The average time required to read a fixed amount of information, e.g., one word, from the memory: the read access time or, more commonly, the access time of the memory (tA)
- The write access time is defined similarly; it is typically, but not always, equal to the read access time
- Access time depends on the physical characteristics of the storage medium, and also on the type of access mechanism used
- tA is usually calculated from the time a read request is received by the memory unit to the time at which all the requested information has been made available at the memory output terminals
- The access rate bA of the memory is defined as 1/tA and is measured in words per second
Memory-Device Characteristics
4. Access modes - the order or sequence in which information can be accessed
Random-access memory (RAM) - locations may be accessed in any order, and the access time is independent of the location being accessed
- Semiconductor memories
- Each storage location has a separate access (addressing) mechanism
Serial-access memories - storage locations can be accessed only in certain predetermined sequences
- Magnetic-tape and optical memories
- The access mechanism is shared among different locations, and must be assigned to different locations at different times, by moving the stored information, the read-write head, or both
- Serial access tends to be slower than random access
Magnetic disks contain a large number of independent rotating tracks. If each track has its own read-write head, the tracks may be accessed randomly, although access within each track is serial. This access mode is sometimes called semi-random or, rather misleadingly, direct access.
Memory-Device Characteristics
5. Alterability of the stored information
RWMs - memories in which reading or writing can be done on-line
ROMs - memories whose contents cannot be altered on-line
- ROMs whose contents can be changed (usually off-line and with some difficulty) are called programmable read-only memories (PROMs)
Memory-Device Characteristics
6. Permanence of storage - the stored information may be lost over a period of time unless appropriate action is taken
Destructive readout (reading the memory destroys the stored information)
- Each read operation must be followed by a write operation that restores the original state of the memory
- The restoration is usually carried out automatically, by writing back into the originally addressed location from a buffer register
Dynamic storage
- Over a period of time, a stored charge tends to leak away, causing a loss of information unless the charge is restored
- The process of restoring the charge is called refreshing (dynamic memories)
Volatility
- A memory is said to be volatile if the stored information can be destroyed by a power failure
Memory-Device Characteristics
7. Cycle time and data transfer rate
- Cycle time (tM) is the time needed to complete any read or write operation in the memory
- This means that the minimum time that must elapse between the initiation of two successive accesses to the memory is tM, which can be greater than the access time tA
- Data transfer rate or bandwidth (bM) - the maximum amount of information that can be transferred to or from the memory every second (= 1/tM)
Cycle time vs. access time
[Figure: timing diagram over instants t1..t4 — the access time (seek time + latency time + transfer time) is contained within the memory cycle time]
bM = 1 / tM [words/sec]
bM = w / tM [bits/sec], where w = memory bus width
In cases where tA ≠ tM, both are used to measure memory speed
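A small numerical illustration in C (the cycle time and bus width are assumed values, not from the slides):

```c
#include <stdio.h>

int main(void) {
    double tM = 100e-9;   /* assumed cycle time: 100 ns            */
    int    w  = 32;       /* assumed memory bus width, in bits     */

    double bM_words = 1.0 / tM;   /* bandwidth in words per second */
    double bM_bits  = w / tM;     /* bandwidth in bits per second  */

    printf("bM = %.0f words/s = %.0f bits/s\n", bM_words, bM_bits);
    return 0;
}
```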
Memory-Device Characteristics
8. Power: specific power
p = Ptot / S [W/bit]
9. Geometry: only for semiconductor memories
- N lines x M columns

MAIN MEMORY DEVICES
SRAM, DRAM, Cache memory
MAIN MEMORY
Main memory (primary or internal memory):
- Used for program and data storage during computer operation
- Directly accessed by the CPU (microprocessor)
- Semiconductor technology
- Volatile memory (RWM)
- Randomly accessed memory
- Every cell is connected to selection lines (row/line select for RD/WR)
- Every cell is connected to data lines (column bit lines for RD/WR)
Example memory cell: MOS ROM
[Figure: MOS ROM cell — the line-select (row) signal drives the gate of transistor Q1; a column-select switch connects the cell to the output amplifier; VGG is the gate supply]
Example memory cell: static RAM (SRAM)
[Figure: six-transistor SRAM cell — two cross-coupled CMOS inverters (two pMOS and two nMOS devices) powered from VDD store the bit; access transistors gated by the word line connect the cell to the complementary bit lines]
Example memory cell: dynamic RAM (DRAM)
[Figure: one-transistor DRAM cell — a transfer gate controlled by the word line connects the storage capacitor to the bit line; a sense amplifier on the bit line reads and restores the stored charge]
Example: RAM cell selection
[Figure: RAM cell array — a 2-to-4 decoder on address inputs A1, A0 activates one of the line-select signals Line 0..Line 3; the selected cells are read or written through complementary bit lines]
Simple example: 4 x 4 bits RAM
[Figure: 4 x 4-bit RAM — four 4-bit registers (Register 1..Register 4) selected through a 2-to-4 decoder from address inputs A1, A0; data-in pins, data-out pins b3..b0, and Read/Write (R/W) and Chip Select (CS) control signals]
Static RAM (SRAM)
- Bipolar or unipolar technology
- Operating cycles: read cycle, write cycle
- Every cell contains a latch (1 bit)
- Access circuitry:
  - decodes the address word
  - controls the cell selection lines
  - reads and writes cells
  - controls the internal logic

SRAM organization
A general approach to reducing the access-circuitry cost in random-access memories is called matrix (array) organization. The storage cells are physically arranged as rectangular arrays of cells, primarily to facilitate the layout of the connections between the cells and the access circuitry.
The memory address is partitioned into d components, so that the address Ai of cell Ci becomes a d-dimensional vector (Ai,1, Ai,2, ..., Ai,d). Each of the d parts of an address word goes to a different address decoder and a different set of address drivers. A particular cell is selected by simultaneously activating all d of its address lines. A memory unit with this kind of addressing is said to be a d-dimensional memory.
SRAM: 1-D organization
The simplest array organizations have d = 1 and are called one-dimensional, or 1-D, memories. If the storage capacity of the unit is N x w bits, the access circuitry typically contains a one-out-of-N address decoder and N address drivers. For example, for an 8K x 8 RAM with this 1-D organization: a = 13, N = 2^13, and w = 8.
Principle of the 1-D addressing scheme
[Figure: 1-D addressing — an a-bit address bus feeds an address register and address decoder that selects one of the storage locations; timing and control circuits (inputs CS, R/W, OE) generate the internal control signals; data drivers and registers connect the selected location to the data bus]
Explanation
- N = 2^a
- The CS (chip select) input is used to enable the device. This is an active-low input, so the device responds to its other inputs only if the CS line is low.
- The OE (output enable) input causes the memory device to place the contents of the location specified by the address inputs onto the data pins.
- The WR (write enable) input causes the memory device to write the internal location specified by the address inputs with the data currently appearing on the data pins. This is an active-low signal, and the actual write occurs on the rising edge, when WR goes from low back to the high state.

Principle of 2-D addressing
The address field is divided into two components, called X and Y, which consist of ax and ay bits, respectively.
[Figure: 2-D addressing — the ax-bit X address decoder selects one of Nx rows and the ay-bit Y address decoder selects one of Ny columns of the array of memory storage locations]
Principle of 2-D addressing
- The cells are arranged in a rectangular array of rows and columns; the total number of cells is N = Nx * Ny
- A cell is selected by the coincidence of signals on its X and Y address lines
- The 2-D organization requires substantially less access circuitry than the 1-D organization for a fixed amount of storage; for example, if Nx = Ny = sqrt(N), the number of address drivers needed is 2*sqrt(N)
- For N >> 4 the difference between the 1-D and 2-D organizations is significant (see the sketch below)
- If the 1-bit storage cells are replaced by w-bit registers, an entire word can be accessed in each read or write cycle, but the bits within a word are not individually addressable
- For the 8K x 8 memory example, a theoretical 2-D organization with 8 bits per word can look as follows

Theoretical example of a 2-D organization for an 8K x 8 bits memory
[Figure: eight planes of 8K cells each, arranged as 128 x 64; a 7-bit X decoder (DCD X) selects one of 128 rows and a 6-bit Y decoder (DCD Y) selects one of 64 columns]
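A short C sketch of the access-circuitry saving, using the slides' 8K example (one driver per decoder output is assumed):

```c
#include <stdio.h>

int main(void) {
    long N = 8192;             /* 8K addressable words (a = 13) */

    long drivers_1d = N;       /* 1-D: one-out-of-N decoder needs N drivers */

    long nx = 128, ny = 64;    /* 2-D split from the slides: 7-bit X, 6-bit Y decoder */
    long drivers_2d = nx + ny; /* one driver per row plus one per column */

    printf("1-D: %ld drivers, 2-D (128 x 64): %ld drivers\n",
           drivers_1d, drivers_2d);
    return 0;
}
```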
Real structure example of a 2-D organization for an 8K x 8 bits memory
[Figure: a 256 x 256 cell array; 8 address bits drive the X decoder (DCD X, 256 rows) and the remaining 5 address bits drive eight 32-to-1 column multiplexers (MDX 0..7), one per data bit d0..d7, connected to the data bus through read/write amplifiers]

Read Cycle Timing Diagram
[Figure: read cycle waveforms for ADR, CS, and R/W; DATA OUT goes from Hi-Z to valid data]
tCR = read cycle time; tAA = access time (from address change); tAC = access time (from CS active)
Write Cycle Timing Diagram
[Figure: write cycle waveforms for ADR, CS, R/W, and the data bus]
tCW = write cycle time; tW = write width; tDS = data set-up time; tDH = data hold time; tDW = delay for input data to write; tAW = delay between address change and write control

DYNAMIC MEMORIES
- DRAMs are currently the primary memory device of choice because they provide the lowest cost per bit and the greatest density among solid-state memory technologies
- Every cell contains (a) MOS capacitor(s)
- DRAM is "dynamic" since it needs to be refreshed periodically (e.g., every 8 ms, consuming about 1% of the time)
- The address is divided (address pins multiplexed) into 2 halves (the memory is a 2-D matrix):
  - RAS, or Row Address Strobe
  - CAS, or Column Address Strobe
- E.g. 2^22 cells = 4 Mbit, memory array of 2048 x 2048 cells, number of address IC pins = 11
- Operating cycles: read cycle, write cycle, refresh cycle
The Conventional DRAM (asynchronous interface)
- The multiplexed address bus uses two control signals - the row and column address strobe signals (RAS and CAS, respectively)
- The row address causes a complete row in the memory array to propagate down the bit lines to the sense amps
- The column address selects the appropriate data subset from the sense amps and causes it to be driven to the output pins
- Sense amps, connected to each column, provide the read and restore operations of the chip; since the cells are capacitors that discharge on each read operation, the sense amp must restore the data before the end of the access cycle
- A refresh controller determines the time between refresh cycles, and a refresh counter ensures that the entire array (all rows) is refreshed
Conventional DRAM read cycle (typical memory access)
1. The row address bits are placed onto the address pins
2. After a period of time the /RAS signal falls, which activates the sense amps and causes the row address to be latched into the row address buffer
3. The column address bits are set up
4. The column address is latched into the column address buffer when /CAS falls, at which time the output buffer is also turned on
5. When /CAS stabilizes, the selected sense amp feeds its data onto the output buffer
/WE = 0 selects a write, /WE = 1 a read. If /WE becomes active (LOW) before /CAS (advance, or early, write cycle), the data outputs remain in the Hi-Z state; if /WE becomes active (LOW) after /CAS, the cycle is a read-write cycle. The address multiplexing is sketched below.
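A hedged C sketch of the address multiplexing described above, assuming the slides' 4-Mbit example (2^22 cells, 2048 x 2048 array, 11 address pins); the sample address is arbitrary:

```c
#include <stdio.h>
#include <stdint.h>

/* 4-Mbit DRAM: 2^22 cells as a 2048 x 2048 array. The 22-bit cell
 * address is presented on 11 pins in two steps: row bits latched on
 * the falling edge of /RAS, column bits on the falling edge of /CAS. */
#define ADDR_PINS 11

int main(void) {
    uint32_t cell = 0x2ABCDE & 0x3FFFFF;          /* some 22-bit cell address */
    uint32_t row  = (cell >> ADDR_PINS) & 0x7FF;  /* driven first, with /RAS  */
    uint32_t col  =  cell & 0x7FF;                /* driven second, with /CAS */

    printf("cell 0x%06X -> row 0x%03X, col 0x%03X\n", cell, row, col);
    return 0;
}
```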
Key DRAM Timing Parameters
- tRAC: minimum time from the RAS line falling to valid data output (quoted as the speed of a DRAM when you buy one); a typical 4-Mbit DRAM has tRAC = 60 ns
- tRC: minimum time from the start of one row access to the start of the next; tRC = 110 ns for a 4-Mbit DRAM with a tRAC of 60 ns
- tCAC: minimum time from the CAS line falling to valid data output; 15 ns for a 4-Mbit DRAM with a tRAC of 60 ns
- tPC: minimum time from the start of one column access to the start of the next; 35 ns for a 4-Mbit DRAM with a tRAC of 60 ns
A 60 ns (tRAC) DRAM can perform a row access only every 110 ns (tRC), and a column access (tCAC) in 15 ns, but the time between column accesses is at least 35 ns (tPC)

DRAM - fast operating modes
These modes change the way columns are accessed, to reduce the average access time
Fast page mode
- Page = all bits on the same row (spatial locality)
- The row address is held constant and data from multiple columns is read from the sense amplifiers
- No need to wait for the word line to recharge
- Toggle CAS with a new column address
Nibble mode
- Four successive column addresses are generated internally
Burst mode
- More successive column addresses (in the same page) are generated internally
A sketch comparing page-mode and row-by-row access times follows.
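Using the timing figures quoted above (tRC = 110 ns, tPC = 35 ns for a 4-Mbit part), a small C calculation of what fast page mode buys when several words lie in the same row — a sketch that ignores precharge details:

```c
#include <stdio.h>

int main(void) {
    double tRC = 110e-9;  /* row cycle time               */
    double tPC = 35e-9;   /* column (page) cycle time     */
    int    n   = 8;       /* words read from a single row */

    double separate = n * tRC;              /* n full row accesses              */
    double paged    = tRC + (n - 1) * tPC;  /* one row access + n-1 column hits */

    printf("separate rows: %.0f ns, fast page mode: %.0f ns\n",
           separate * 1e9, paged * 1e9);    /* 880 ns vs 355 ns */
    return 0;
}
```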
Fast Page Mode DRAM (FPM DRAM)
[Figure: fast page mode timing diagram]
Extended Data Out DRAM (EDO DRAM)
EDO DRAM adds a latch between the sense amps and the output pins. This latch holds the output pin state and permits /CAS to de-assert more rapidly, allowing the memory array to begin precharging sooner. The latch in the output path also means that the data on the outputs of the DRAM remains valid longer into the next clock phase.
[Figure: EDO DRAM timing diagram]
Nibble mode (serial four)
[Figure: nibble mode timing diagram]
Burst mode
[Figure: burst mode timing diagram]
Refresh modes
- RAS-only refresh (conventional refresh)
- CAS-before-RAS refresh (internal refresh logic)
- Hidden refresh (the refresh cycle is hidden in a read/write access cycle)
RAS-only refresh
[Figure: RAS-only refresh timing diagram]
CAS-before-RAS refresh
[Figure: CAS-before-RAS refresh timing diagram]
Hidden refresh
[Figure: hidden refresh timing diagram]
Synchronous DRAM (SDRAM)
- SDRAM exchanges data with the processor synchronized to an external clock signal
- SDRAM latches information to and from the controller based on a clock signal
- SDRAM employs a burst mode: a series of data bits can be clocked out rapidly after the first bit has been accessed
- This mode is useful when all the bits to be accessed are in sequence and in the same row of the array as the initial access
- SDRAM has a dual-bank internal architecture that improves opportunities for on-chip parallelism (interleaved memory banks)
Synchronous DRAM (SDRAM)
SDRAM contains a programmable mode register and associated control logic that provide a mechanism to customize the SDRAM to specific system needs
- The mode register specifies the burst length, which is the number of separate units of data synchronously fed onto the bus
- The register also allows the programmer to adjust the latency between the receipt of a read request and the beginning of the data transfer

SDRAM Read Operation Clock Diagram
[Figure: SDRAM read operation clock diagram]
For SDRAM, there are 5 important timings:
- the time required to switch internal banks
- the time required between /RAS and /CAS access
- the time necessary to "prepare" for the next output in burst mode
- the column access time
- the time required to make data ready by the next clock cycle in burst mode (read cycle time)
DRAM vs SDRAM
DRAM:
- no clock
- RAS controlled by level change
- one bank (array) of memory
- one transfer for every column address (or CAS pulse)
- read delay (latency) not programmable
SDRAM:
- operates at the clock frequency
- RAS controlled on the clock edge
- two interleaved memory banks
- sometimes a static cache at the interface
- programmable burst transfer (1, 2, 4, 8, or 256 transfers for a single provided column address, in the same row)
- programmable read latency
CACHE MEMORY
Cache principles
- Cache memory is an intermediate, temporary memory unit positioned between the processor registers and main memory
- Large and slow main memory + smaller and faster cache memory
- The cache contains a copy of portions of main memory
- When the processor attempts to read a word of memory, a check is made to determine whether the word is in the cache
  - If so, the word is delivered to the processor
  - If not, a block of main memory, consisting of some fixed number of words, is read into the cache and then the word is delivered to the processor
Cache and Main Memory
[Figure: the cache positioned between the CPU and main memory]
Because of the phenomenon of locality of reference, when a block of data is fetched into the cache to satisfy a single memory reference, it is likely that future references will be to other words in the same block.
STRUCTURE OF A CACHE/MAIN-MEMORY SYSTEM; CACHE READ OPERATION
[Figure: cache/main-memory structure and cache read operation flow chart]
M = 2^a / K blocks of main memory; m cache lines, with m << M (a = address bits, K = words per block)
HIT RATIO
- The performance of cache memory can be given by a synthetic parameter called the hit ratio (HR)
- The hit ratio is determined by running benchmarks
- HR is the ratio between the total number of hits in the cache and the total number of memory accesses (hits + misses)
- The value of HR should be greater than 0.9
- From the HR value we can compute the average memory access time. For example, if HR = 0.9, the access time to main memory (on misses) is 100 ns, and the access time to the cache is 10 ns, the average memory access time is:
t_acm = (9 × 10 ns + 1 × 100 ns) / 10 = 19 ns
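The same computation in C, so the 19 ns figure can be reproduced (values taken from the slide):

```c
#include <stdio.h>

int main(void) {
    double hr  = 0.9;     /* hit ratio                        */
    double t_c = 10e-9;   /* cache access time: 10 ns         */
    double t_m = 100e-9;  /* main memory access time: 100 ns  */

    /* out of 10 accesses: 9 hits at 10 ns, 1 miss at 100 ns */
    double t_acm = hr * t_c + (1.0 - hr) * t_m;

    printf("average access time = %.0f ns\n", t_acm * 1e9);  /* 19 ns */
    return 0;
}
```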
Mapping Function
Because there are fewer cache lines than main memory blocks, the following are needed:
- an algorithm for mapping main memory blocks into cache lines
- a means for determining which main memory block currently occupies a cache line
The choice of the mapping function dictates how the cache is organized. Three techniques can be used:
- direct mapping
- associative mapping
- set-associative mapping
Direct mapping
- Direct mapping is the simplest technique
- It maps each block of main memory into only one possible cache line
- The mapping is expressed as: i = j modulo m
  where i = cache line number; j = main memory block number; m = number of lines in the cache
- For purposes of cache access, each main memory address can be viewed as consisting of three fields:
  - The least significant w bits identify a unique word or byte within a block of main memory (in most contemporary machines, the address is at the byte level)
  - The remaining a-w bits specify one of the 2^(a-w) blocks of main memory
  - The cache logic interprets these a-w bits as a tag of t bits (the most significant portion) plus a cache line field of r bits (a = t + r + w)
- Sometimes the word and line fields are together called the index field, because the index is used to address data in the cache
- The use of a portion of the address as a line number provides a unique mapping of each block of main memory into the cache
- When a block is actually read into its assigned line, it is necessary to tag the data to distinguish it from other blocks that can fit into that line
Direct-Mapping Cache Organization
[Figure: direct-mapped cache organization]
Example for address 1D00002E hex:
32-bit memory address = 8-bit tag + 20-bit line (block) address + 4-bit word address (decoded in the sketch below)
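A C sketch decoding the slide's example address with the 8/20/4 field split (a = 32, t = 8, r = 20, w = 4):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t addr = 0x1D00002E;

    uint32_t word = addr & 0xF;            /* low 4 bits: word within the block */
    uint32_t line = (addr >> 4) & 0xFFFFF; /* next 20 bits: cache line number   */
    uint32_t tag  = addr >> 24;            /* top 8 bits: tag                   */

    printf("tag=0x%02X line=0x%05X word=0x%X\n", tag, line, word);
    return 0;   /* prints: tag=0x1D line=0x00002 word=0xE */
}
```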
Direct mapping
Advantages of direct mapping:
- simple and cheap
- the tag field is short; only those bits have to be stored which are not used to address the cache
- access is very fast
Disadvantage:
- there is a fixed cache location for any given block; if a program happens to reference words repeatedly from two different blocks that map into the same line, the blocks will be continually swapped in the cache, and the hit ratio will be low

Associative mapping
- Each main memory block can be loaded into any line of the cache
- The cache control logic interprets a memory address simply as a tag and a word field
- The cache stores the data together with the corresponding address
- Associative mapping is implemented with associative (content-addressable) memories as cache memories; these are more expensive than a random-access memory, since each cell must have storage capacity as well as comparison logic circuits for matching its content against an external argument
Fully associative mapping
[Figure: fully associative cache organization and mapping example]
Fully associative mapping
Advantages:
- provides the highest flexibility concerning the line to be replaced when a new block is read into the cache
- i = j modulo 1, where i = cache line number and j = main memory block number (any block can go into any line)
Disadvantages:
- complex
- the tag field is long
- fast access can be achieved only by using high-performance associative memories for the cache, which is difficult and expensive

Set-associative mapping
- Set-associative mapping is a compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
- The cache is divided into v sets, each of which consists of k lines; this is referred to as k-way set-associative mapping
- With set-associative mapping, block Bj can be mapped into any of the lines of set i
- The cache control logic interprets a memory address as three fields: tag, set, and word
- With fully associative mapping, the tag in a memory address is quite large and must be compared to the tag of every line in the cache; with k-way set-associative mapping, the tag is much smaller and is compared only to the k tags within a single set
Set-associative mapping
The mapping relationships are:
i = j modulo v
m = v × k
where i = cache set number; j = main memory block number; m = number of lines in the cache; v = number of sets in the cache; k = number of cache lines per set
- In the extreme case v = m, k = 1, the set-associative technique reduces to direct mapping
- In the extreme case v = 1, k = m, it reduces to associative mapping
- The use of two lines per set (v = m/2, k = 2) is the most common set-associative organization (two-way set-associative mapping)
- Four-way set-associative mapping (v = m/4, k = 4) makes a modest additional improvement for a relatively small additional cost
- Further increases in the number of lines per set have little effect

Example: two-way set-associative cache
[Figure: two-way set-associative cache organization]
EXAMPLE
Assume that:
- main memory is divided into 32 blocks
- the cache has 8 block frames (lines)
- the set-associative organization has 4 sets with 2 blocks per set (two-way set associative)
Where in the cache can main memory block 12 be placed, for the three categories of cache organization?
- Fully associative: block 12 can go into any of the eight block frames of the cache
- Direct mapped: block 12 can only be placed into block frame 4 (12 modulo 8)
- Two-way set associative: block 12 can be placed anywhere in set 0 (12 modulo 4), i.e., in line 0 or line 1 of that set
These placements are checked in the sketch below.
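The three placements can be verified with a few lines of C (m = 8 lines, v = 4 sets):

```c
#include <stdio.h>

int main(void) {
    int j = 12;  /* main memory block number                */
    int m = 8;   /* cache lines                             */
    int v = 4;   /* sets in the two-way set-associative case */

    printf("direct mapped:      line %d\n", j % m);  /* 12 mod 8 = 4 */
    printf("two-way set assoc.: set  %d\n", j % v);  /* 12 mod 4 = 0 */
    printf("fully associative:  any of the %d lines\n", m); /* i = j mod 1 */
    return 0;
}
```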
Replacement Algorithms
- When a new block is brought into the cache, one of the existing blocks must be replaced
- For direct mapping, there is only one possible line for any particular block, and no choice is possible
- For the associative and set-associative techniques, a replacement algorithm is needed
- To achieve high speed, such an algorithm must be implemented in hardware
Replacement Algorithms
LRU (least recently used): replace the block in the set that has been in the cache longest with no reference to it
- For a two-way set-associative cache, each line includes a USE bit
- When a line is referenced, its USE bit is set to 1 and the USE bit of the other line in that set is set to 0
- When a block is to be read into the set, the line whose USE bit is 0 is used (see the sketch below)
- Because we are assuming that more recently used memory locations are more likely to be referenced, LRU should give the best hit ratio
FIFO (first-in-first-out): replace the block in the set that has been in the cache longest
- FIFO is easily implemented with a round-robin or circular-buffer technique
LFU (least frequently used): replace the block in the set that has experienced the fewest references
- LFU can be implemented by associating a counter with each line
COMPUTER ARCHITECTURE 81
Random replacement: the simplest to implement, and the results are surprisingly good.
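A minimal C sketch of the USE-bit scheme described above for one two-way set (victim choice only; tags and data handling are elided, and the structure names are illustrative):

```c
#include <stdio.h>

/* One two-way set with a USE bit per line. On a reference to line `hit`,
 * set its USE bit and clear the other's; on a miss, replace the line
 * whose USE bit is 0 (the least recently used of the two). */
struct set2 { int use[2]; };

static void touch(struct set2 *s, int hit) {
    s->use[hit]     = 1;
    s->use[1 - hit] = 0;
}

static int victim(const struct set2 *s) {
    return s->use[0] == 0 ? 0 : 1;
}

int main(void) {
    struct set2 s = { {0, 0} };
    touch(&s, 0);                             /* line 0 was just referenced */
    printf("replace line %d\n", victim(&s));  /* line 1: its USE bit is 0   */
    return 0;
}
```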
Write Policy
Write through: all write operations are made to main memory as well as to the cache
- Any other processor-cache module can monitor traffic to main memory to maintain consistency within its own cache
- The main disadvantage of this technique is that it generates substantial memory traffic and may create a bottleneck
Write-through with buffered write
- The same as write through, but instead of slowing the processor down by writing directly to main memory, the write address and data are stored in a high-speed write buffer; the write buffer transfers the data to main memory while the processor continues its task
Write back: updates are made only in the cache
- Minimizes memory writes
- When an update occurs, an UPDATE (dirty) bit associated with the line is set; this status bit indicates whether the block is dirty (modified while in the cache) or clean (not modified)
- When a block is replaced, it is written back to main memory if and only if the UPDATE bit is set; if it is clean, the block is not written back, since the lower level already holds identical information
- The problem with write back is that portions of main memory are invalid, and hence accesses by I/O modules can be allowed only through the cache
Both write back and write through have their advantages
- With write back, writes occur at the speed of the cache memory, and multiple writes within a block require only one write to the lower-level memory
- Write back therefore uses less memory bandwidth, which makes it attractive in multiprocessors
- With write through, read misses never result in writes to the lower level, and write through is easier to implement than write back
- Write through also has the advantage that the next lower level always has the most current copy of the data; this is important for I/O and for multiprocessors
A sketch contrasting the two policies follows.
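A hedged C sketch contrasting the two policies on a single cached line; the helper names and the word-addressed toy memory are assumptions for illustration (real caches do this in hardware):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_WORDS 4

struct line {
    uint32_t data[BLOCK_WORDS];
    int      update;            /* UPDATE (dirty) bit, used by write back only */
};

static uint32_t memory[1 << 16];  /* toy word-addressed main memory */

/* write through: update both the cache line and main memory */
static void write_through(struct line *l, uint32_t addr, uint32_t value) {
    l->data[addr % BLOCK_WORDS] = value;
    memory[addr] = value;
}

/* write back: update only the cache and set the UPDATE bit;
 * main memory is written when the line is replaced */
static void write_back(struct line *l, uint32_t addr, uint32_t value) {
    l->data[addr % BLOCK_WORDS] = value;
    l->update = 1;
}

static void replace_line(struct line *l, uint32_t base_addr) {
    if (l->update) {              /* dirty: flush the whole block first */
        memcpy(&memory[base_addr], l->data, sizeof l->data);
        l->update = 0;
    }
}

int main(void) {
    struct line l = {{0}, 0};
    write_through(&l, 8, 0xAA);                   /* memory[8] updated at once */
    write_back(&l, 9, 0xBB);                      /* memory[9] still stale     */
    printf("memory[9] before replace: 0x%X\n", memory[9]);  /* 0x0  */
    replace_line(&l, 8);                          /* dirty block flushed       */
    printf("memory[9] after replace:  0x%X\n", memory[9]);  /* 0xBB */
    return 0;
}
```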
Write Policy
In a bus organization in which more than one device (typically a processor) has a cache and main memory is shared, if the data in one cache is altered:
- this invalidates the corresponding word in main memory
- this invalidates the corresponding word in other caches (if any other cache happens to hold that word)
- even if a write-through policy is used, the other caches may contain invalid data
A system that prevents this problem is said to maintain cache coherency

Possible approaches to cache coherency include the following:
Bus watching with write through: each cache controller monitors the address lines to detect write operations to memory by other bus masters
- If another master writes to a location in shared memory that also resides in the cache, the cache controller invalidates that cache entry
Hardware transparency: additional hardware is used to ensure that all updates to main memory via a cache are reflected in all caches
- Thus, if one processor modifies a word in its cache, the update is written to main memory; in addition, any matching words in other caches are similarly updated
Noncachable memory: only a portion of main memory is shared by more than one processor, and this portion is designated as noncachable
- All accesses to shared memory are cache misses, because the shared memory is never copied into the cache
- The noncachable memory can be identified using chip-select logic or high-address bits
Write Policy for a write miss
Since the data are not needed on a write, there are two common options on a write miss:
Write allocate (also called fetch on write)
- The block is loaded on a write miss, followed by the write-hit actions above; this is similar to a read miss
No-write allocate (also called write around)
- The block is modified in the lower level and not loaded into the cache
Write-back caches generally use write allocate (hoping that subsequent writes to that block will be captured by the cache); write-through caches often use no-write allocate (since subsequent writes to that block will still have to go to memory). A small decision sketch follows.
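A short C fragment (the enum and function names are illustrative, not from the slides) capturing the write-miss decision:

```c
#include <stdio.h>

enum policy { WRITE_ALLOCATE, NO_WRITE_ALLOCATE };

/* Sketch of a write miss: write allocate fetches the block and then
 * performs the normal write-hit actions; no-write allocate sends the
 * write around the cache, straight to the lower level. */
static void on_write_miss(enum policy p) {
    if (p == WRITE_ALLOCATE) {
        printf("fetch block into cache, then write the cached copy\n");
    } else {
        printf("write directly to the lower level; cache unchanged\n");
    }
}

int main(void) {
    on_write_miss(WRITE_ALLOCATE);     /* typical with write-back caches    */
    on_write_miss(NO_WRITE_ALLOCATE);  /* typical with write-through caches */
    return 0;
}
```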