Module-3 Memory-PPT Part 1
Part-1
• Cache Memory
• Primary Memory/Main Memory
• Secondary Memory
Primary Memory
• This is the main memory of the computer. The CPU can
directly read from or write to this memory. It is fixed on
the motherboard of the computer.
• Primary memory is further divided into two types:
1. RAM (Random Access Memory)
2. ROM (Read Only Memory)
RAM (Random Access Memory)
[Figure: SRAM chip organization — an address decoder selects one row of D flip-flop storage cells; each cell row is gated by a write-enable signal, and data flows through Data in / Data out lines. Chip control signals: WE (write enable), CS (chip select), OE (output enable). A second figure shows a larger memory built from an array of such chips sharing data and address lines, with the most significant address bits selecting which row of chips is enabled.]
When the data input and output of an SRAM chip are shared or connected
to a bidirectional data bus, the output must be disabled during write
operations.
DRAM and Refresh Cycles
DRAM vs. SRAM Memory Cell Complexity
[Figure: (a) DRAM cell — a single pass transistor and a capacitor connected between the word line and the bit line; (b) typical SRAM cell — a multi-transistor latch powered from Vcc, connected to complementary bit lines.]
The single-transistor DRAM cell is considerably simpler than the SRAM cell,
which leads to dense, high-capacity DRAM memory chips.
DRAM Refresh Cycles and Refresh Rate
[Figure: the voltage stored for a 1 decays toward the threshold voltage within tens of milliseconds, after which it can no longer be distinguished from a 0; hence each cell needs a refresh cycle every few tens of ms.]
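Because every row must be rewritten periodically, refresh steals a small fraction of the memory's time. A rough back-of-the-envelope estimate, using assumed illustrative numbers (refresh interval, row count, and per-row refresh time are not from the slides):

```python
# Rough estimate of DRAM refresh overhead (all numbers are illustrative
# assumptions, not taken from the slides).
refresh_interval_ms = 64.0    # assumed: every row refreshed within 64 ms
num_rows = 8192               # assumed: rows per bank
row_refresh_time_ns = 100.0   # assumed: time to refresh one row

# Total time spent refreshing all rows once, in ms.
total_refresh_time_ms = num_rows * row_refresh_time_ns * 1e-6
# Fraction of each refresh interval consumed by refresh cycles.
overhead = total_refresh_time_ms / refresh_interval_ms
print(f"Refresh overhead: {overhead:.2%}")
```

With these assumed values the overhead is about 1.3%, which is why refresh is tolerable despite being mandatory.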
[Figure: Trends in DRAM main memory, 1980–2010 — typical memory size ranges from 1–64 MB for small PCs and 256 MB–1 GB for large PCs, through 4–16 GB for workstations and 64–256 GB for servers, up to about 1 TB for supercomputers; the number of memory chips per system spans roughly 1 to 100.]
Hitting the Memory Wall
[Figure: Relative performance vs. calendar year, 1980–2010 — processor performance improves by a factor of about 10^6 while memory performance improves by only about 10^3, so the gap keeps widening.]
Memory density and capacity have grown along with the CPU power
and complexity, but memory speed has not kept pace.
Bridging the CPU-Memory Speed Gap
Idea: Retrieve more data from memory with each access
[Figure: (a) buffer and multiplexer at the memory side; (b) buffer and multiplexer at the processor side.]
[Figure: Read-only memory organization — word lines select rows and bit lines read out the fixed contents (e.g. 1001, 0010, 1101) shown on the right.]
Flash Memory
[Figure: Flash memory cell — a control gate stacked over a floating gate, with n+ source and drain regions in a p substrate; cells are organized by word lines, bit lines, and source lines.]
[Chart: flash cost in $/GByte falling over time (axis ticks from $1K down to $0.1). Source: https://www1.hitachigst.com/hdd/technolo/overview/chart03.html]
Cache Memory Organization
Processor speed is improving at a faster rate than memory’s
• Processor-memory speed gap has been widening
• Cache is to main memory as a desk drawer is to a file cabinet
Organization Type
• Independent
• Hierarchical
Cache, Hit/Miss Rate, and Effective Access
Time
Cache is transparent to the user; transfers occur automatically.
[Figure: the CPU and its register file (fast) connect through the cache to main memory (slow); data moves between main memory and cache in lines, and between cache and CPU in words.]
Three-level hierarchical effective access time:
Tavg = h1·t1 + (1 − h1)·h2·(t1 + t2) + (1 − h1)·(1 − h2)·(t1 + t2 + t3) [Hierarchical]
[Figure: (a) Level 2 between level 1 and main memory; (b) Level 2 connected to a "backside" bus. The hierarchical organization (a) is cleaner and easier to analyze.]
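The hierarchical Tavg formula above can be evaluated directly. A small sketch, with illustrative hit rates and access times that are assumptions rather than values from the slides:

```python
# Effective access time for a three-level hierarchical cache:
# Tavg = h1*t1 + (1-h1)*h2*(t1+t2) + (1-h1)*(1-h2)*(t1+t2+t3)
# where h1, h2 are the L1 and L2 hit rates and t1, t2, t3 are the
# access times of L1, L2, and main memory.
def t_avg(h1, h2, t1, t2, t3):
    return (h1 * t1
            + (1 - h1) * h2 * (t1 + t2)
            + (1 - h1) * (1 - h2) * (t1 + t2 + t3))

# Illustrative values (assumed): 95% L1 hit rate, 90% L2 hit rate,
# access times of 1, 10, and 100 cycles.
print(t_avg(0.95, 0.90, 1, 10, 100))  # about 2.0 cycles on average
```

Even with a 100-cycle main memory, high hit rates in the upper levels keep the average access time close to the L1 latency.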
• Cache of 64 KByte
• Cache block of 4 bytes
• i.e. cache is 16K (2^14) lines of 4 bytes
• 16 MBytes main memory
• 24-bit address
• (2^24 = 16M)
Direct Mapping
• Each block of main memory maps to only one cache line
• i.e. if a block is in cache, it must be in one specific place
• Address is in two parts
• Least Significant w bits identify unique word
• Most Significant s bits specify one memory block
• The MSBs are split into a cache line field of r bits and a (most
significant) tag of s − r bits
Mapping function (determines which block is placed in which line):
Line number = (Block number) mod (Number of lines in cache)
Direct Mapping
Address Structure
Tag: 8 bits | Line: 14 bits | Word: 2 bits
• 24 bit address
• 2 bit word identifier (4 byte block)
• 22 bit block identifier
• 8 bit tag (=22-14)
• 14 bit slot or line
• No two blocks in the same line have the same Tag field
• Check contents of cache by finding line and checking Tag
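The 8/14/2 split above is just masking and shifting. A minimal sketch; the sample address is hypothetical, chosen only to show the decomposition:

```python
# Decompose a 24-bit address for the direct-mapped cache example:
# tag = 8 bits, line = 14 bits, word = 2 bits.
WORD_BITS, LINE_BITS = 2, 14

def split_address(addr):
    word = addr & ((1 << WORD_BITS) - 1)              # lowest 2 bits
    line = (addr >> WORD_BITS) & ((1 << LINE_BITS) - 1)  # next 14 bits
    tag = addr >> (WORD_BITS + LINE_BITS)             # top 8 bits
    return tag, line, word

# Hypothetical address 0x16339C -> tag 0x16, line 0xCE7, word 0
print(split_address(0x16339C))
```

On a lookup, the line field indexes the cache directly and the stored tag is compared against the address's tag field to confirm a hit.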
Direct Mapping from Cache to Main Memory
Direct Mapping
Cache Line Table
Cache line | Main memory blocks held
0          | 0, m, 2m, 3m, …, 2^s − m
1          | 1, m+1, 2m+1, …, 2^s − m + 1
…          | …
m−1        | m−1, 2m−1, 3m−1, …, 2^s − 1
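The table can be generated mechanically, since every block b lands in line b mod m. A tiny illustrative sketch (m = 4 lines and 16 blocks are assumed toy sizes, not the 16K-line example):

```python
# Which main-memory blocks map to each cache line under direct mapping
# (line = block mod m). Toy sizes for illustration: m = 4 lines, 16 blocks.
m, num_blocks = 4, 16
table = {line: [b for b in range(num_blocks) if b % m == line]
         for line in range(m)}
for line, blocks in table.items():
    print(line, blocks)
# line 0 holds blocks 0, 4, 8, 12; line 1 holds 1, 5, 9, 13; and so on.
```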
Direct Mapping Cache Organization
Direct Mapping Example
Direct Mapping Summary
• Simple
• Inexpensive
• Fixed location for given block
• If a program accesses 2 blocks that map to the same line repeatedly, cache
misses are very high
Victim Cache
• Lower miss penalty
• Remember what was discarded
• Already fetched
• Use again with little penalty
• Fully associative
• 4 to 16 cache lines
• Between direct mapped L1 cache and next memory level
Associative Mapping