Computer Architecture - Memory System
Memory technology
Hierarchical memory systems
Characteristics of the storage devices
Main memory organization
SRAM
DRAM
Cache memory
MEMORY TECHNOLOGY
Every computer system contains a variety of devices to store instructions and data. The storage devices, plus the algorithms (implemented in hardware or software) needed to control or manage the stored information, constitute the memory system of the computer.
Desirable: processors should have immediate and uninterrupted access to memory, i.e., maximum speed to transfer information between a processor and memory.
Memories that operate at speeds comparable to processor speeds are relatively costly, so it is not feasible to employ a single memory using just one type of technology. Instead, the stored information is distributed over a variety of memory units with very different physical characteristics.
Memory-system design problems include:
The development of automatic storage-allocation methods to make more efficient use of the available memory space.
The development of virtual-memory concepts to free the ordinary user from memory management, and to make programs largely independent of the physical memory configuration used.
The design of communication links to the memory system so that all processors connected to it can operate at or near their maximum rates, by increasing the effective memory-processor bandwidth and providing protection mechanisms.
[Figure: memory hierarchy: registers, cache memory, main memory, and lower levels; capacity S, specific power p, access time tA, and specific cost c vary from level to level]
At any given time, data is copied only between two adjacent levels
Upper level (the one closer to the processor): smaller, faster, and uses more expensive technology
HIERARCHY TERMINOLOGY
Hit: Accessed data is found in upper level (for example Block A in level i)
Hit Rate = fraction of accesses found in the upper level
Hit Time = time to access the upper level = memory access time + time to determine hit/miss
Miss: Accessed data found only in lower level (for example Block B in level i+1)
The processor waits until the data is fetched, then restarts the access
Miss Rate = 1 - Hit Rate
Miss Penalty = time to get the block from the lower level + time to replace it in the upper level + time to deliver the block to the processor
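These terms combine into the standard average-access-time relation (a restatement in the notation above, not from the slide itself):

$$t_{avg} = t_{hit} + \text{Miss Rate} \times \text{Miss Penalty}$$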
Coherence property: information stored at a specific address in level i must be the same at that address in all lower memory levels
Maintain coherence:
propagation of the modified value to the lower levels ("write-through")
updating the information in lower levels when the block is replaced from level i ("write-back")
Memory-Device Characteristics
2. Cost
The price includes not only the cost of the information storage cells themselves but also the cost of the peripheral equipment or access circuitry essential to the operation of the memory. Let C be the price in dollars of a complete memory system with S bits of storage capacity; the specific cost c of the memory is:
c = C / S [dollars/bit]
3. Access time
The average time required to read a fixed amount of information, e.g., one word, from the memory is the read access time or, more commonly, the access time of the memory (tA). The write access time is defined similarly; it is typically, but not always, equal to the read access time. Access time depends on the physical characteristics of the storage medium and on the type of access mechanism used. tA is usually measured from the time a read request is received by the memory unit to the time at which all the requested information has been made available at the memory output terminals. The access rate bA of the memory is defined as 1/tA and measured in words per second.
Memory-Device Characteristics
4. Access modes - the order or sequence in which information can be accessed
Random-access memory (RAM) - if locations may be accessed in any order and access time is independent of the location being accessed
Semiconductor memories
Each storage location has a separate access (addressing) mechanism
Memory-Device Characteristics
Serial-access memories - storage locations can be accessed only in certain predetermined sequences
Magnetic-tape and optical memories. The access mechanism is shared among different locations and must be assigned to different locations at different times, by moving the stored information, the read-write head, or both. Serial access tends to be slower than random access.
Magnetic disks contain a large number of independent rotating tracks. If each track has its own read-write head, the tracks may be accessed randomly, although access within each track is serial. This access mode is sometimes called semi-random or, rather misleadingly, direct access.
Memory-Device Characteristics
5. Alterability of information contained
ROMs - memories whose contents cannot be altered on-line
ROMs whose contents can be changed (usually offline and with some difficulty) are called programmable read-only memories (PROMs).
Memory-Device Characteristics
6. Permanence of storage - the stored information may be lost over a period of time unless appropriate action is taken
Destructive readout (reading the memory destroys the stored information)
Each read operation must be followed by a write operation that restores the original state of the memory. The restoration is usually carried out automatically, by writing back into the originally addressed location from a buffer register.
Dynamic storage
Over a period of time, a stored charge tends to leak away, causing a loss of information unless the charge is restored. The process of restoring it is called refreshing (dynamic memories).
Volatility
A memory is said to be volatile if the stored information can be destroyed by a power failure
Memory-Device Characteristics
7. Cycle time and data transfer rate
Cycle time (tM) is the time needed to complete any read or write operation in the memory. The minimum time that must elapse between the initiation of two successive accesses is tM, which can be greater than the access time tA.
Data transfer rate or bandwidth (bM) - the maximum amount of information that can be transferred to or from the memory every second (= 1/tM)
Memory-Device Characteristics
Cycle time vs access time
[Figure: timing diagram at instants t1 to t4: the memory cycle time spans the seek/latency time, the access time, and the transfer time]
bM = 1/tM [words/sec]
bM = w/tM [bits/sec], for a word width of w bits
Memory-Device Characteristics
8. Power: specific power
p = Ptot / S [W/bit]
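The three figures of merit defined in this section are simple ratios; a minimal sketch in C with made-up device values (none taken from a real datasheet):

```c
#include <stdio.h>

int main(void) {
    /* Illustrative device parameters (hypothetical values) */
    double C_dollars = 4.0;         /* price of the complete memory system */
    double S_bits    = 64e6;        /* storage capacity in bits            */
    double tM_sec    = 60e-9;       /* cycle time                          */
    double w_bits    = 8.0;         /* word width                          */
    double Ptot_W    = 0.5;         /* total power dissipation             */

    double c  = C_dollars / S_bits; /* specific cost  [dollars/bit] */
    double bM = w_bits / tM_sec;    /* bandwidth      [bits/sec]    */
    double p  = Ptot_W / S_bits;    /* specific power [W/bit]       */

    printf("c = %.2e $/bit, bM = %.2e bit/s, p = %.2e W/bit\n", c, bM, p);
    return 0;
}
```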
MAIN MEMORY
Main memory (primary or internal memory):
Used for program and data storage during computer operation
Directly accessed by the CPU (microprocessor)
Semiconductor technology
Volatile read/write memory (RWM)
Random-access memory
Every cell is connected to selection lines (for RD / WR)
Every cell is connected to data lines (column bit lines for RD / WR)
[Figure: basic storage cell with transistor Q1 gated by the line-select signal]
[Figure: six-transistor SRAM cell: two cross-coupled CMOS inverters (pMOS/nMOS pairs) with transfer gates connecting the cell to the complementary bit-line pair; a sense amplifier reads the bit lines]
[Figure: 4 x 4 RAM array: a 2-to-4 decoder on address bits A1, A0 drives word lines Line 0 to Line 3; cell columns b3 to b0 connect to the bit lines; CS, R/W, and control signals gate the access; read data is captured in an output register (Data Out)]
SRAM
A general approach to reducing the access-circuitry cost in random-access memories is the matrix (array) organization. The storage cells are physically arranged as rectangular arrays of cells, primarily to facilitate layout of the connections between the cells and the access circuitry.
The memory address is partitioned into d components, so that the address Ai of cell Ci becomes a d-dimensional vector (Ai,1, Ai,2, ..., Ai,d). Each of the d parts of an address word goes to a different address decoder and a different set of address drivers. A particular cell is selected by simultaneously activating all d of its address lines. A memory unit with this kind of addressing is said to be a d-dimensional memory.
SRAM
The simplest array organizations have d = 1 and are called one-dimensional, or 1-D, memories. If the storage capacity of the unit is N x w bits, then the access circuitry typically contains a one-out-of-N address decoder and N address drivers. For example, for an 8K x 8 RAM with this 1-D organization: a = 13, N = 2^13, and w = 8.
[Figure: 1-D RAM organization: an address register feeds the address decoder, whose N outputs select the storage cells; data passes over the data bus under the CS, R/W, and OE control inputs]
Explanation
N = 2^a. The CS (chip select) input is used to enable the device.
This is an active low input, so it will only respond to its other inputs if the CS line is low.
The OE (output enable) input is used to cause the memory device to place the contents of the location specified by the address inputs onto the data pins. The WR (write enable) input is used to cause the memory device to write the internal location specified by the address inputs with the data currently appearing on the data pins.
This is an active low signal, and the actual write occurs on the rising edge when WR goes from low back to the high state
[Figure: 2-D RAM organization: the address bus is split into ax row bits and ay column bits; an X decoder (e.g., 7 bits selecting among 128 rows) and a Y decoder (e.g., 6 bits selecting among 64 columns) address a 128 x 128 cell array]
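The saving in access circuitry that motivates the 2-D organization can be estimated by counting address drivers; a sketch for the 8K-word example above, where the even 7/6-bit split matches the X and Y decoders in the figure (the code itself is purely illustrative):

```c
#include <stdio.h>

int main(void) {
    int  a = 13;                /* address bits                     */
    long N = 1L << a;           /* N = 2^13 = 8192 addressable words */

    /* 1-D: one one-out-of-N decoder needs N address drivers */
    long drivers_1d = N;

    /* 2-D: split the address into ax row bits and ay column bits;
       here an even split (7 + 6 bits, as in the figure)       */
    int  ax = (a + 1) / 2, ay = a / 2;
    long drivers_2d = (1L << ax) + (1L << ay);   /* 128 + 64 = 192 */

    printf("1-D: %ld drivers, 2-D: %ld drivers\n", drivers_1d, drivers_2d);
    return 0;
}
```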
[Figure: read path and read timing: column multiplexers (e.g., 32-to-1, 5 select bits) and read/write amplifiers drive data bits d0 to d7 onto the data bus; on a read, the outputs leave the Hi-Z state and present valid data after tAA (from the address change) or tAC (from CS going active)]
tCR = read cycle time
tAA = access time (from address change)
tAC = access time (from CS active)
[Figure: write timing: CS, R/W, and data-bus waveforms during a write cycle]
tCW = write cycle time
tW = write pulse width
tAW = delay between address change and write control
tDW = delay for input data to write
tDS = data set-up time
tDH = data hold time
DYNAMIC MEMORIES
DRAMs are currently the primary memory device of choice because they provide the lowest cost per bit and the greatest density among solid-state memory technologies.
Every cell contains a MOS capacitor
DRAM is dynamic since it needs to be refreshed periodically (every ~8 ms, taking about 1% of the time)
Addresses are divided into two halves (the address pins are multiplexed; the memory is organized as a 2-D matrix):
RAS or Row Address Strobe
CAS or Column Address Strobe
E.g., 2^22 cells = 4 Mbit, memory array of 2048 x 2048 cells, number of address IC pins = 11
Operating cycles:
Read cycle
Write cycle
Refresh cycle
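A sketch of this address multiplexing for the 4-Mbit example: the 22-bit cell address is presented as an 11-bit row half latched by RAS, then an 11-bit column half latched by CAS (the sample address is arbitrary):

```c
#include <stdio.h>
#include <stdint.h>

/* 4-Mbit DRAM example: 2^22 cells in a 2048 x 2048 array, 11 address pins */
#define ADDR_PIN_BITS 11
#define PIN_MASK ((1u << ADDR_PIN_BITS) - 1u)

int main(void) {
    uint32_t cell_addr = 0x2A5F37 & 0x3FFFFF;   /* some 22-bit cell address */

    uint32_t row = (cell_addr >> ADDR_PIN_BITS) & PIN_MASK; /* latched on RAS */
    uint32_t col =  cell_addr & PIN_MASK;                   /* latched on CAS */

    printf("row = %u (with RAS), col = %u (with CAS)\n", row, col);
    return 0;
}
```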
Read cycle steps:
1. The row address bits are placed onto the address pins
2. After a period of time the RAS\ signal falls, which activates the sense amps and causes the row address to be latched into the row address buffer
3. The column address bits are set up
4. The column address is latched into the column address buffer when CAS\ falls, at which time the output buffer is also turned on
5. When CAS\ stabilizes, the selected sense amp feeds its data onto the output buffer
/WE = 0 write, /WE = 1 read. If /WE becomes active (LOW) before /CAS (advance write cycle), the data outputs remain in the Hi-Z state. If /WE becomes active (LOW) after /CAS, the cycle is a write-read cycle.
tRC: minimum time from the start of one row access to the start of the next; tRC = 110 ns for a 4-Mbit DRAM with a tRAC of 60 ns
tCAC: minimum time from CAS falling to valid data output; 15 ns for a 4-Mbit DRAM with a tRAC of 60 ns
tPC: minimum time from the start of one column access to the start of the next; 35 ns for a 4-Mbit DRAM with a tRAC of 60 ns
Nibble mode
Four successive column addresses generated internally
Burst mode
Multiple successive column addresses (in the same page) generated internally
Refresh modes:
RAS-only refresh (conventional refresh)
CAS-before-RAS refresh (internal logic for refresh)
Hidden refresh (the refresh cycle is hidden in a read/write access cycle)
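The refresh budget quoted above (all cells refreshed within about 8 ms) translates into a per-row schedule; a back-of-the-envelope sketch for the 2048-row example array, assuming a ~110 ns refresh cycle (the tRC figure quoted earlier). The resulting overhead depends entirely on these assumed numbers:

```c
#include <stdio.h>

int main(void) {
    double t_retention_ms = 8.0;    /* all rows refreshed within 8 ms        */
    int    rows           = 2048;   /* rows of the example 2048 x 2048 array */
    double t_cycle_ns     = 110.0;  /* assumed refresh-cycle length (~tRC)   */

    /* distributed refresh: one row every t_retention / rows */
    double interval_us = t_retention_ms * 1e3 / rows;

    /* fraction of total time spent on refresh cycles */
    double overhead = rows * t_cycle_ns * 1e-6 / t_retention_ms;

    printf("refresh one row every %.2f us; overhead = %.1f%%\n",
           interval_us, overhead * 100.0);
    return 0;
}
```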
CAS-before-RAS refresh
[Figure: CAS-before-RAS refresh timing]
Hidden refresh
[Figure: hidden refresh timing: a refresh cycle appended to a normal read/write access]
SDRAM has a dual-bank internal architecture that improves opportunities for on-chip parallelism (interleaved memory banks)
DRAM vs SDRAM
DRAM:
No clock
RAS controlled by level change
One bank (array) of memory
One transfer for every column address (or CAS pulse)
Read delay (latency) not programmable
SDRAM:
Operates at the clock frequency
RAS controlled on the clock edge
Two interleaved memory banks
Sometimes a static cache at the interface
Programmable burst transfers (1, 2, 4, 8, or 256 transfers for a single provided column address, in the same row)
Programmable read latency
CACHE MEMORY
Cache principles
Cache memory is an intermediate, temporary memory unit positioned between the processor registers and main memory: a large and slow main memory paired with a smaller and faster cache memory. The cache contains a copy of portions of main memory.
When the processor attempts to read a word of memory, a check is made to determine whether the word is in the cache:
If so, the word is delivered to the processor
If not, a block of main memory, consisting of some fixed number of words, is read into the cache and then the word is delivered to the processor
Because of the phenomenon of locality of reference, when a block of data is fetched into the cache to satisfy a single memory reference, it is likely that future references will be to other words in the same block.
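A toy model of this read flow; the sizes, the direct-mapped placement, and word-granularity addressing are all assumptions for illustration:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define MEM_WORDS   64   /* toy main memory size     */
#define BLOCK_WORDS 4    /* words per block          */
#define CACHE_LINES 4    /* lines in the toy cache   */

static uint32_t memory[MEM_WORDS];

static struct { bool valid; uint32_t base; uint32_t data[BLOCK_WORDS]; }
    cache[CACHE_LINES];

uint32_t read_word(uint32_t addr) {
    uint32_t base = addr / BLOCK_WORDS * BLOCK_WORDS;   /* block start  */
    uint32_t line = (base / BLOCK_WORDS) % CACHE_LINES; /* placement    */

    if (cache[line].valid && cache[line].base == base)  /* hit: deliver */
        return cache[line].data[addr % BLOCK_WORDS];

    /* miss: fetch the whole containing block, then deliver the word */
    for (int i = 0; i < BLOCK_WORDS; i++)
        cache[line].data[i] = memory[base + i];
    cache[line].valid = true;
    cache[line].base  = base;
    return cache[line].data[addr % BLOCK_WORDS];
}

int main(void) {
    for (uint32_t i = 0; i < MEM_WORDS; i++) memory[i] = i * 10;
    /* first access misses and fills the block; the second hits in it */
    printf("%u %u\n", read_word(5), read_word(6));
    return 0;
}
```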
HIT RATIO
The performance of cache memory can be characterized by a synthetic parameter called the hit ratio (HR), determined by running benchmarks. HR is the ratio between the total number of hits in the cache and the total number of memory accesses (hits + misses). The value of HR should be greater than 0.9.
From the HR value we can compute the average memory access time. For example, if HR = 0.9, the access time to main memory (on misses) is 100 ns, and the access time to the cache is 10 ns, then the average memory access time is:
t_acm = (9 x 10 + 1 x 100) / 10 = 19 ns
Mapping Function
Because there are fewer cache lines than main memory blocks, the following are needed:
An algorithm for mapping main memory blocks into cache lines
A means for determining which main memory block currently occupies a cache line
The choice of the mapping function dictates how the cache is organized. Three techniques can be used:
Direct mapping Associative mapping Set associative mapping
Direct mapping
Direct mapping is the simplest technique Direct mapping maps each block of main memory into only one possible cache line The mapping is expressed as: i = j modulo m
where i = cache line number; j = main memory block number; m = number of lines in the cache
Direct mapping
Sometimes the word and block fields together are called the index field, because the index is used to address data in the cache. The use of a portion of the address as a line number provides a unique mapping of each block of main memory into the cache. When a block is actually read into its assigned line, it is necessary to tag the data to distinguish it from other blocks that can fit into that line. The effect of this mapping is that main memory blocks j, j + m, j + 2m, ... are all assigned to the same cache line (j modulo m).
For purposes of cache access, each main memory address can be viewed as consisting of three fields:
The least significant w bits identify a unique word or byte within a block of main memory (in most contemporary machines, the address is at the byte level)
The remaining a - w bits specify one of the 2^(a-w) blocks of main memory. The cache logic interprets these a - w bits as a tag of t bits (the most significant portion) plus a cache line field of r bits (a = t + r + w)
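A sketch of this three-field split with assumed widths (a = 24, w = 2, r = 14, hence t = 8; the concrete sizes are illustrative only):

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative field widths: a = t + r + w = 8 + 14 + 2 = 24 */
#define W_BITS 2
#define R_BITS 14

int main(void) {
    uint32_t addr = 0xABCDEF;   /* some 24-bit byte address */

    uint32_t word = addr & ((1u << W_BITS) - 1);             /* word in block */
    uint32_t line = (addr >> W_BITS) & ((1u << R_BITS) - 1); /* cache line    */
    uint32_t tag  = addr >> (W_BITS + R_BITS);               /* remaining tag */

    printf("tag=%u line=%u word=%u\n", tag, line, word);
    return 0;
}
```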
Direct mapping
Advantages for direct mapping:
Simple and cheap
The tag field is short (only those bits have to be stored which are not used to address the cache)
Disadvantage:
There is a fixed cache location for any given block; if a program happens to reference words repeatedly from two different blocks that map into the same line, the blocks will be continually swapped in the cache and the hit ratio will be low.
Associative mapping
Each main memory block can be loaded into any line of the cache. The cache control logic interprets a memory address simply as a tag and a word field. The cache stores the data together with the corresponding address. Associative mapping is implemented with associative memories (content-addressable memories) as cache memories; these are:
more expensive than a random-access memory
each cell must have storage capacity as well as comparison logic circuits for matching its content with an external argument
Advantages:
provides the highest flexibility concerning the line to be replaced when a new block is read into the cache
i = j modulo 1 = 0, where i = cache set number (the whole cache forms a single set) and j = main memory block number: any block can occupy any line
Disadvantages:
complex
the tag field is long
fast access can be achieved only using high-performance associative memories for the cache, which is difficult and expensive
EXAMPLE
Assume that:
Main memory is divided into 32 blocks
The cache has 8 block frames (lines)
The set-associative organization has 4 sets with 2 blocks per set, called two-way set associative
EXAMPLE
Where in the cache can main memory block 12 be placed, for the three categories of cache organization?
Fully associative: block 12 can go into any of the eight block frames of the cache
Direct mapped: block 12 can only be placed into block frame 4 (12 modulo 8)
Two-way set associative: block 12 can be placed anywhere in set 0 (12 modulo 4), i.e., in line 0 or line 1 of that set
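The same placements follow mechanically from the modulo arithmetic; a minimal check:

```c
#include <stdio.h>

int main(void) {
    int block = 12;   /* main memory block number        */
    int lines = 8;    /* cache block frames              */
    int sets  = 4;    /* two-way: 4 sets of 2 lines each */

    printf("direct mapped:      line %d\n", block % lines);  /* 12 mod 8 = 4 */
    printf("two-way set assoc.: set %d (either line)\n", block % sets); /* 0 */
    printf("fully associative:  any of the %d lines\n", lines);
    return 0;
}
```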
Replacement Algorithms
When a new block is brought into the cache, one of the existing blocks must be replaced
For direct mapping, there is only one possible line for any particular block, and no choice is possible
For the associative and set-associative techniques, a replacement algorithm is needed
To achieve high speed, such an algorithm must be implemented in hardware
Replacement Algorithms
LRU (least recently used): Replace that block in the set that has been in the cache longest with no reference to it
For two-way set associative, each line includes a USE bit
When a line is referenced, its USE bit is set to 1 and the USE bit of the other line in that set is set to 0
When a block is to be read into the set, the line whose USE bit is 0 is replaced
Because we are assuming that more recently used memory locations are more likely to be referenced, LRU should give the best hit ratio (see the sketch after this list)
FIFO (first-in-first-out): Replace that block in the set that has been in the cache longest
FIFO is easily implemented as a round-robin or circular buffer technique.
LFU (least frequently used): Replace that block in the set that has experienced the fewest references
LFU could be implemented by associating a counter with each line
Random replacement: is the simplest to implement and results are surprisingly good.
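A minimal sketch of the USE-bit scheme described under LRU above, for one two-way set (the data layout is an assumption for illustration):

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Two-way set: the line with use == true is the most recently used. */
struct line { bool valid; bool use; uint32_t tag; };
struct set  { struct line way[2]; };

/* On a reference to way w: set its USE bit, clear the other one. */
static void touch(struct set *s, int w) {
    s->way[w].use     = true;
    s->way[1 - w].use = false;
}

/* On a fill, the victim is the way whose USE bit is 0. */
static int victim(const struct set *s) {
    return s->way[0].use ? 1 : 0;
}

int main(void) {
    struct set s = {{{true, false, 10}, {true, true, 20}}};
    touch(&s, 0);                             /* reference the line with tag 10 */
    printf("victim way = %d\n", victim(&s));  /* now way 1 (tag 20) is replaced */
    return 0;
}
```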
Write Policy
Write through: all write operations are made to main memory as well as to the cache
Any other processor-cache module can monitor traffic to main memory to maintain consistency within its own cache The main disadvantage of this technique is that it generates substantial memory traffic and may create a bottleneck
Write Policy
Write-through with buffered write
The same as write-through, but instead of slowing the processor down by writing directly to main memory, the write address and data are stored in a high-speed write buffer; the write buffer transfers data to main memory while the processor continues its task
Dirty bits: this status bit indicates whether the block is dirty (modified while in the cache) or clean (not modified)
If it is clean, the block is not written on a miss, since the lower level has identical information to the cache.
With write through, read misses never result in writes to the lower level, and write through is easier to implement than write back
Write through also has the advantage that the next lower level has the most current copy of the data. This is important for I/O and for multiprocessors.
Write Policy
In a bus organization in which more than one device (typically a processor) has a cache and main memory is shared, if the data in one cache is altered:
This invalidates the corresponding word in main memory
This invalidates the corresponding word in other caches (if any other cache happens to have that same word)
Even if a write-through policy is used, the other caches may contain invalid data
Possible approaches to cache coherency include the following:
Bus watching with write through: each cache controller monitors the address lines to detect write operations to memory by other bus masters
If another master writes to a location in shared memory that also resides in the cache memory, the cache controller invalidates that cache entry
Hardware transparency: Additional hardware is used to ensure that all updates to main memory via cache are reflected in all caches
Thus, if one processor modifies a word in its cache, this update is written to main memory. In addition, any matching words in other caches are similarly updated.
Noncachable memory: Only a portion of main memory is shared by more than one processor, and this is designated as noncachable
All accesses to shared memory are cache misses, because the shared memory is never copied into the cache. The noncachable memory can be identified using chip-select logic or high-address bits.
Write-back caches generally use write allocate (hoping that subsequent writes to that block will be captured by the cache)
Write-through caches often use no-write allocate (since subsequent writes to that block will still have to go to memory)
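A schematic contrast of the two policies and the dirty bit, compressed to a single cache line (an illustrative toy, not a full cache model):

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* One-line "cache" over a one-word "memory", just to contrast the policies. */
static uint32_t memory_word;
static struct { bool valid, dirty; uint32_t data; } line;

static void write_through(uint32_t v) {
    line.data = v; line.valid = true;
    memory_word = v;             /* every write also goes to main memory */
}

static void write_back(uint32_t v) {
    if (line.valid && line.dirty)
        memory_word = line.data; /* dirty data is copied out at replacement */
    line.data  = v; line.valid = true;
    line.dirty = true;           /* memory is updated only on replacement */
}

int main(void) {
    write_through(1);
    printf("WT: mem=%u\n", memory_word);                 /* memory already 1 */
    write_back(2);
    printf("WB: mem=%u (stale), dirty=%d\n",
           memory_word, line.dirty);                     /* memory still 1  */
    return 0;
}
```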