DICD Fall 2024 Lecture 10 Memory
Lecture # 10
Memory
Muhammad Imran
[email protected]
Acknowledgement
Memory – An Introduction
SRAM
DRAM
Flash Memory
Emerging Memory Technologies
In-Memory Computing
Memory – An Introduction
Significance of Memory
Intel Pentium-M (2001) – 2 MB L3 cache (Source: Intel)
Intel 10th Gen “Comet Lake” (2020) – 20 MB L3 cache (Source: Intel)
Memory Hierarchy
Memory Arrays
Size: bits, bytes, words
Timing parameters: read access, write access, cycle time
Function: Read-Only (ROM) – non-volatile; Read-Write (RWM) – volatile; NVRWM – non-volatile read-write
Access pattern: random access, FIFO, LIFO, shift register, CAM
I/O architecture: single-port, multi-port
Application: embedded, external, secondary
Random Access Chip Architecture
[Figure: random-access chip architecture. An A-bit address bus feeds a row decoder that selects one of W = 2^A word lines.]
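As a minimal behavioral sketch (Python, not part of the slides; the function name and signature are mine), the decode step can be modeled as turning an A-bit address into a one-hot selection of W = 2^A word lines:

# A minimal behavioral sketch of the row-decoder idea: an A-bit address
# activates exactly one of W = 2**A word lines.

def row_decode(address: int, a_bits: int) -> list[int]:
    """Return a one-hot list of W = 2**a_bits word-line values."""
    w = 2 ** a_bits
    assert 0 <= address < w, "address must fit in a_bits"
    return [1 if row == address else 0 for row in range(w)]

# Example: a 3-bit address selects one of 8 word lines.
print(row_decode(5, 3))   # [0, 0, 0, 0, 0, 1, 0, 0]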
Writing into a Cross-Coupled Pair
[Figure: writing into a cross-coupled inverter pair – input D, gated by enable En, drives storage nodes Q and QB.]
How should we write a ‘1’?
[Figure: 6T SRAM cell. Cross-coupled inverters (pull-ups M3/M6, pull-downs M1/M4) hold complementary values on Q and QB; access transistors M2 and M5, gated by the word line WL, connect Q to BL and QB to BLB.]
6T SRAM Operation
SRAM Operation – Hold
[Figure: hold – WL = 0, access transistors M2/M5 are off, and the cross-coupled inverters retain Q and QB.]
SRAM Operation – Read
[Figure: read – BL and BLB are precharged to VDD and WL is raised. With Q = VDD and QB = 0 stored, access transistor M5 and pull-down M4 act as an “nMOS inverter”: QB rises to a small ΔV while BLB is discharged slightly, and the cell content does not change (“No change!”) as the bit-line differential develops.]
SRAM Operation – Read
[Figure: read, QB side detail – M5 (access) and M4 (pull-down) form a voltage divider that sets QB = ΔV.]
Cell Ratio: CR = (W4/L4) / (W5/L5)
Cell Ratio – Read Constraint
[Figure: during a read, access transistor M5 injects current into node QB while pull-down M4 sinks it, setting QB = ΔV.]
So we need the pull-down transistor to be much stronger than the access transistor: the cell ratio CR = (W4/L4) / (W5/L5) must be large enough that ΔV stays below the switching threshold VM of the other inverter (a rough estimate is sketched below).
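This back-of-the-envelope estimate assumes a long-channel square-law device model with illustrative VDD, VT and k' values (my own numbers, not from the slides, and no substitute for SPICE); it solves the M5/M4 current balance for the read-disturb voltage ΔV:

# Read constraint sketch: during a read, access transistor M5 (saturation)
# charges node QB while pull-down M4 (linear) fights it; the current balance
# sets the bump dV. Assumes the balance point lies in (0, VDD - VTn).

def read_disturb_dv(cr, vdd=1.2, vtn=0.4, kp=100e-6, w5_over_l5=1.0):
    """Solve 0.5*k5*(VDD-dV-VTn)^2 = k4*((VDD-VTn)*dV - dV^2/2) for dV."""
    k5 = kp * w5_over_l5          # access transistor M5
    k4 = kp * w5_over_l5 * cr     # pull-down M4, CR times stronger
    f = lambda dv: 0.5 * k5 * (vdd - dv - vtn) ** 2 \
                   - k4 * ((vdd - vtn) * dv - 0.5 * dv ** 2)
    lo, hi = 0.0, vdd - vtn       # f(lo) > 0, f(hi) < 0 -> bisection
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Larger cell ratio -> smaller read-disturb voltage on QB.
for cr in (1.0, 1.2, 2.0, 3.0):
    print(f"CR = {cr:3.1f}  ->  dV = {read_disturb_dv(cr)*1000:5.0f} mV")

As expected, increasing the cell ratio shrinks ΔV, which is what keeps the read non-destructive.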
SRAM Operation – Write
[Figure: write – WL = VDD, BL driven to VDD and BLB to 0, with the cell initially storing Q = 0, QB = VDD. On the Q side, access transistor M2 fights pull-down M1, so Q only rises to ΔV; same as during read, the cell is designed so ΔV < VM, so the cell cannot be flipped from this side. On the QB side, access transistor M5 and pull-up M6 form a pseudo-nMOS inverter that pulls QB down to VOLmin.]
SRAM Operation – Write
[Figure: write, QB side detail – access transistor M5 pulls QB toward BLB = 0 while pull-up M6 (still fully on, since Q is low) fights it; QB settles at VOLmin.]
Pull-Up Ratio: PR = (W6/L6) / (W5/L5)
Pull-Up Ratio – Write Constraint
[Figure: pseudo-nMOS fight on node QB between pull-up M6 and access transistor M5, giving QB = VOLmin.]
So we need the access transistor to be much stronger than the pull-up transistor: the pull-up ratio PR = (W6/L6) / (W5/L5) must be small enough that VOLmin drops below the switching threshold VM, so the cell actually flips (a rough estimate is sketched below).
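The matching write-side estimate, under the same assumed square-law model and illustrative parameters as the read sketch above, solves the M5/M6 current balance for VOLmin and shows that a smaller pull-up ratio makes the cell easier to flip:

# Write constraint sketch: while writing, access transistor M5 (linear,
# BLB = 0) must pull QB below the inverter threshold VM against pull-up M6
# (saturation, gate at Q = 0). Illustrative parameters only.

def write_vol(pr, vdd=1.2, vtn=0.4, vtp=0.4, knp=100e-6, kpp=40e-6,
              w5_over_l5=1.0):
    """Solve 0.5*k6*(VDD-|VTp|)^2 = k5*((VDD-VTn)*V - V^2/2) for V = VOL."""
    k5 = knp * w5_over_l5          # access transistor M5 (nMOS)
    k6 = kpp * w5_over_l5 * pr     # pull-up M6 (pMOS), PR relative to M5
    f = lambda v: 0.5 * k6 * (vdd - vtp) ** 2 \
                  - k5 * ((vdd - vtn) * v - 0.5 * v ** 2)
    lo, hi = 0.0, vdd - vtn        # bisection on the monotone balance
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Smaller pull-up ratio -> lower VOL on QB -> easier to flip the cell.
for pr in (0.5, 1.0, 1.5):
    print(f"PR = {pr:3.1f}  ->  VOL = {write_vol(pr)*1000:5.0f} mV")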
Summary – SRAM Sizing Constraints
Read constraint (pull-down vs. access): CR = (W1/L1) / (W2/L2) = (W4/L4) / (W5/L5) = K_PDN / K_access
Write constraint (pull-up vs. access): PR = (W3/L3) / (W2/L2) = (W6/L6) / (W5/L5) = K_PUN / K_access
Multi-Port SRAM – Dual Port SRAM
DRAM
DRAM Invention
Two challenges
Reads are destructive: the capacitor loses its value while the read operation is performed.
The bit-line capacitance is huge compared to the cell's capacitance, because the bit line is attached to many cells, yet the cell must still be able to change the bit-line voltage reliably (a rough charge-sharing estimate is sketched below).
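A quick charge-sharing estimate, with assumed (illustrative) capacitances and a VDD/2 precharge, shows how small the resulting bit-line swing is and why a restore and precharge phase must follow every access:

# Charge-sharing sketch: the stored charge is shared between the cell
# capacitor and the much larger bit-line capacitance, so the sense
# amplifier only sees a small swing, and the cell must be rewritten
# (restored) after every read. Capacitance values are illustrative.

def bitline_swing(v_cell, v_precharge, c_cell, c_bitline):
    """Bit-line voltage change after the access transistor opens."""
    return (v_cell - v_precharge) * c_cell / (c_cell + c_bitline)

c_cell, c_bl = 30e-15, 300e-15          # 30 fF cell vs 300 fF bit line
for v_cell in (1.2, 0.0):               # stored '1' and stored '0'
    dv = bitline_swing(v_cell, v_precharge=0.6, c_cell=c_cell, c_bitline=c_bl)
    print(f"stored {v_cell:.1f} V -> bit-line swing {dv*1000:+.0f} mV")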
Restore and Precharge
From Single to Multi-bit DRAMs
At the outset, DRAMs had a data width of 1 bit:
Din and Dout were separate buses.
A single control signal, nWE, was used to select between read and write.
With wider data widths (multiple banks with multiple row/column organizations):
A single tri-state DQ bus was used, with a separate nOE to enable the output onto the data pins.
With a 16-bit DRAM, nUCAS (upper) and nLCAS (lower) allow addressing the upper and lower byte separately (a behavioral sketch follows below).
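As a purely behavioral illustration (my own sketch of the pin semantics, not a timing-accurate model of any specific part), the active-low controls can be modeled as selecting the direction of the shared DQ bus and the byte lanes:

# Behavioral model of the shared DQ bus: nWE chooses read vs. write, nOE
# gates the output drivers, and on a 16-bit part nUCAS / nLCAS enable the
# upper / lower byte lanes.

def dq_transaction(stored, nWE, nOE, nUCAS, nLCAS, write_data=0):
    """Return (new stored word, value driven on DQ or 'Hi-Z')."""
    upper_en, lower_en = (nUCAS == 0), (nLCAS == 0)     # active-low CAS pins
    mask = (0xFF00 if upper_en else 0) | (0x00FF if lower_en else 0)
    if nWE == 0:                        # write cycle: merge selected bytes;
        stored = (stored & ~mask) | (write_data & mask)
        return stored, "Hi-Z"           # the DRAM does not drive DQ itself
    if nOE == 0 and mask:               # read cycle: drive selected bytes
        return stored, stored & mask
    return stored, "Hi-Z"               # outputs disabled

word = 0x1234
word, dq = dq_transaction(word, nWE=0, nOE=1, nUCAS=1, nLCAS=0,
                          write_data=0x00AB)   # write lower byte only
print(hex(word), dq)                            # 0x12ab Hi-Z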
Flash Memory
Working principle: trapping electrons in the floating gate of a floating-gate MOSFET (the flash cell).
NAND vs NOR Flash
https://www.embedded.com/flash-101-nand-flash-vs-nor-flash/
Resistance-Based Memories
[Figure: resistance vs. time for SLC and MLC cells. The low-resistance crystalline state encodes ‘1’ and the high-resistance amorphous state ‘0’; MLC packs four levels (10, 11, 01, 00 from highest to lowest resistance) into the same resistance range, so drift from point A to point B can cross a level boundary and cause a read error.]
Resistance-Based Memories
[Figure: mapping 2-bit data onto four resistance levels, from Level 1 = 00 (low-resistance crystalline SET state) up to Level 4 = 10 (high-resistance amorphous RESET state), a Gray code in which adjacent levels differ in one bit, together with “invert” and “complement” variants of the same mapping.]
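To make the level-to-data mapping concrete, here is a small illustrative read model (the resistance thresholds are hypothetical; the Gray-coded mapping follows the figure above):

# Reading a 2-bit MLC resistance cell: compare the measured resistance
# against three level boundaries and map the level back to data bits.

# Gray-coded level-to-data mapping from the figure (Level 1 is the
# low-resistance SET/crystalline end, Level 4 the high-resistance
# RESET/amorphous end).
LEVEL_TO_BITS = {1: "00", 2: "01", 3: "11", 4: "10"}

# Hypothetical level boundaries in ohms (real devices are calibrated).
THRESHOLDS = [10e3, 100e3, 1e6]

def read_mlc(resistance_ohm: float) -> str:
    level = 1 + sum(resistance_ohm > t for t in THRESHOLDS)
    return LEVEL_TO_BITS[level]

for r in (3e3, 50e3, 400e3, 5e6):
    print(f"{r:9.0f} ohm -> {read_mlc(r)}")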
Binary: (1111)b = 8 + 4 + 2 + 1 = (15)d = 16 − 1
Program to RESET
Intermediate state
Source: Google
In-Memory Computing - Emerging Memories
A multiply-and-add operation (at 8-bit precision) in a 60×60 array consumes <0.001 pJ per operation (Burr et al., Adv. Phys. X (2017); Xia and Yang, Nature Materials (2019)).
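Conceptually, such an array performs the multiply-and-add in the analog domain: inputs are applied as word-line voltages, weights are stored as cell conductances, and each bit line sums currents (Ohm's law plus Kirchhoff's current law). A highly idealized NumPy sketch, ignoring wire resistance, device non-idealities, and ADC quantization:

# Analog in-memory matrix-vector product in a resistive crossbar:
# I_j = sum_i V_i * G_ij, computed in one step per column.

import numpy as np

rng = np.random.default_rng(0)
rows, cols = 60, 60                       # 60x60 array, as cited above
G = rng.uniform(1e-6, 1e-4, (rows, cols)) # cell conductances (siemens)
V = rng.uniform(0.0, 0.2, rows)           # read voltages on the word lines

bitline_currents = V @ G                  # one summed current per bit line
print(bitline_currents.shape)             # (60,) column currents in amperes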
In-Memory Computing - SRAM
[Figure: in-memory computing in SRAM. The stored operands (SRAM content) are mapped onto a bit-line / capacitor voltage, and a sense amplifier deciphers the result from that voltage along the BL, giving SA Result = 0 or 1 for each operand combination.]
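One widely reported way to realize this (a behavioral sketch of bit-line computing with two word lines activated at once; the slide's exact scheme may differ) is that any accessed cell storing 0 discharges BL and any cell storing 1 discharges BLB, so sensing each bit line against a reference yields AND and NOR of the stored operands:

# Two word lines are raised on a precharged column; the sense amplifiers
# compare BL and BLB against a reference to produce logic results without
# reading the operands out of the array.

def sram_bitline_compute(a: int, b: int):
    """Return (AND, NOR) of two single-bit operands stored in one column."""
    bl_stays_high  = (a == 1) and (b == 1)   # no accessed cell pulls BL low
    blb_stays_high = (a == 0) and (b == 0)   # no accessed cell pulls BLB low
    sa_and = 1 if bl_stays_high else 0       # SA result on BL
    sa_nor = 1 if blb_stays_high else 0      # SA result on BLB
    return sa_and, sa_nor

for a in (0, 1):
    for b in (0, 1):
        sa_and, sa_nor = sram_bitline_compute(a, b)
        print(f"A={a} B={b} -> AND={sa_and} NOR={sa_nor}")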