Lec 9

Dynamic RAM (DRAM) is a high-density, low-cost memory used in microprocessor systems, requiring periodic refreshing to maintain data integrity. It operates with RAS and CAS signals for address latching and has specific timing parameters for read and write operations. Synchronous DRAM (SDRAM) enhances performance by using a clock input and allowing multiple independent accesses, while Rambus technology offers a high-speed bus interface for improved data transfer rates.


Dynamic RAM (DRAM) Interfacing

Dynamic RAM (DRAM) is the highest density, lowest cost memory currently available. For these reasons it is universally used in any microprocessor-based system that requires more than a small amount of writable storage.

• One transistor per cell (drain acts as capacitor)
• Very small charges involved
  • bit lines must be precharged to detect bit values
  • voltage swing on bit lines is small; sense amp required to convert to logic levels
  • reads are destructive; DRAM devices internally write data back on read
  • leakage current can flip 1s to 0s: values must be refreshed (rewritten) every few milliseconds or data will be lost
• To reduce package cost and size, DRAM devices minimize pin count by:
  • using narrow logical configurations (x1, x4)
  • multiplexing the internal row and column addresses on the same pins

Internally, DRAMs are much like other memories, except:

• RAS and CAS strobes latch the row and column halves of the multiplexed address
• CAS may also serve as output enable
• Most x1 devices have separate input and output data pins

[Figure: DRAM internal block diagram — address pins (A0, A1, …) feed row and column latches strobed by RAS and CAS, a decoder drives the memory array, sense amps and a 4:1 mux/demux connect to the DIN/DOUT pins, with WE selecting writes]
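The address multiplexing described above can be sketched as a small helper a memory controller might use. This is an illustrative example, not from the notes; the 11-bit/11-bit split is a hypothetical geometry for a 4M x1 part, and real devices vary.

```python
# Sketch: split a linear DRAM cell address into the row and column halves
# that share the same address pins. Hypothetical 11/11-bit geometry.

def split_address(addr: int, row_bits: int = 11, col_bits: int = 11):
    """Return (row, col) halves; row is strobed with RAS, col with CAS."""
    assert 0 <= addr < (1 << (row_bits + col_bits)), "address out of range"
    row = addr >> col_bits               # high-order bits: latched on RAS fall
    col = addr & ((1 << col_bits) - 1)   # low-order bits: latched on CAS fall
    return row, col
```

For example, `split_address((5 << 11) | 9)` returns `(5, 9)`: the controller would drive 5 on the address pins while lowering RAS, then drive 9 while lowering CAS.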

EECS 373 F99 Notes 9-1 © 1998, 1999 Steven K. Reinhardt


DRAM Timing: Reads

• The falling edges of RAS and CAS strobe the address bits into the row and column latches, respectively. These must be separated by at least tRCD.
• As with other memories, multiple access times are specified, and the time to valid data out will depend on which is the critical path. For DRAMs, there are two access times, tRAC and tCAC, for access time from valid row address and valid column address, respectively.

[Figure: read cycle timing diagram — ADDR (row, then col), RAS, CAS, and DOUT waveforms, annotated with tRC, tRCD, tRAC, tCAC, and tRP]

• Setup and hold times for both row and column addresses exist but are not shown.
• The read cycle time (tRC) is typically much larger than the access time due to the required precharge time tRP (not drawn to scale).

DRAM Timing: Writes

Write timing is similar. If WE is asserted on the falling edge of CAS, data is written from DIN instead of being read to DOUT. Note that this is different from SRAMs, which perform writes at the end of the cycle (rising edge of WE).

[Figure: write cycle timing diagram — ADDR (row, then col), RAS, CAS, WE, and DIN waveforms, annotated with tRC, tRAS, tRCD, tCAS, and tRP]

• Most timing parameters are identical to the read cycle. tRAS and tCAS are minimum pulse widths that also apply to the read cycle but were left out of that diagram for clarity.
• Required setup and hold times on DIN and WE are not shown.
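The two access paths (tRAC from row address, tCAC from column address) can be compared with a quick sketch. The timing values below are hypothetical datasheet numbers in nanoseconds, not taken from the notes.

```python
# Sketch: which access path is critical for read data out.

def data_valid_ns(t_rac: float, t_rcd: float, t_cac: float) -> float:
    """Time from the RAS falling edge until DOUT is valid.

    Data is valid at the later of tRAC (measured from the row address)
    and tRCD + tCAC (CAS falls tRCD after RAS; data follows tCAC later).
    """
    return max(t_rac, t_rcd + t_cac)

# With a hypothetical "-60" part (tRAC=60, tCAC=15) and minimum tRCD=45,
# both paths meet at 60 ns:
print(data_valid_ns(60, 45, 15))   # -> 60
# If the controller is slow strobing CAS (tRCD=50), the CAS path becomes
# the critical path:
print(data_valid_ns(60, 50, 15))   # -> 65
```

This is why datasheets specify a maximum tRCD for which tRAC still governs: strobe CAS any later and the column access becomes the bottleneck.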



Refresh

Each cell must be refreshed every few milliseconds to avoid losing data. Whenever a row is read, the sense amps automatically write back the entire row, so we only need to access every row once during the refresh interval.

• Do one row at a time (not in one big burst) to avoid tying up the DRAM for a long period
• Typically done in hardware using a counter (to track the next row index) and a timer (to initiate each refresh)
• Example: a Hitachi 64 Mbit DRAM requires 8192 refreshes every 64 ms. Access one row every (64 ms / 8192) = 7.8 µs.
• Need not provide a column address: “RAS-only refresh”
• Can also insert a refresh cycle at the end of an unrelated read access; if CAS is not deasserted, the read data remains valid: “hidden refresh”
• Some DRAMs have an internal counter; the system needs only to indicate when to refresh the “next” row by asserting CAS then RAS (the opposite of a regular access): “CAS-before-RAS (CBR) refresh”

Interface Optimizations

Problem: DRAM bandwidth (bits/second) has not kept up with CPU speed increases over the years.

Simple observation: reading out an entire row and throwing away all but one bit is inefficient. Wider chips (x4, x8, x16) help some, but only as a stopgap. A number of DRAM enhancements let systems read out many bits from each row:

nibble mode: pulsing CAS 4 times without deasserting RAS gives 4 adjacent bits (only the first column address is used)

fast page mode (FPM): each pulse of CAS without deasserting RAS takes a new column address (do as often as desired)

static column: like FPM, but no need to deassert CAS (the latch is transparent); just change the address and it flows through

extended data out (EDO): like FPM, but data stays valid after deasserting CAS (limited pipelining; lets you pump in column addresses faster)

Other solutions?
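The distributed-refresh arithmetic above is easy to check in a few lines. The 64 ms / 8192-row figures come from the notes' Hitachi example; the tRC value used for the overhead estimate is a hypothetical, typical number.

```python
# Sketch of the refresh arithmetic: spread 8192 row refreshes
# evenly over the 64 ms refresh interval.

def refresh_period_us(interval_ms: float, rows: int) -> float:
    """Gap between successive single-row refreshes, in microseconds."""
    return interval_ms * 1000.0 / rows

def refresh_overhead(t_rc_ns: float, period_us: float) -> float:
    """Fraction of DRAM time consumed by refresh cycles."""
    return t_rc_ns / (period_us * 1000.0)

print(refresh_period_us(64, 8192))     # -> 7.8125 (the notes' 7.8 us)
print(refresh_overhead(110, 7.8125))   # -> ~0.014, assuming a 110 ns tRC
```

So even though every row must be rewritten, refresh steals only about 1.4% of the device's time under these assumptions, which is why distributed refresh is preferred over one long burst.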



Synchronous DRAM (SDRAM)

As the name implies, a synchronous DRAM has a clock input, and all signals are defined with respect to clock edges.

• After providing row and column addresses, a programmable number of bits can be delivered, one per clock, without any other control inputs
• The next pair of row and column addresses can be provided while the previous access is still completing
• SDRAMs have multiple (e.g., 4) memory arrays (banks) so that independent accesses can be overlapped even further:
  • provide row address for bank 1
  • provide row address for bank 2
  • provide column address for bank 1
  • provide column address for bank 2
  • get data from bank 1
  • get data from bank 2
• Clock speeds of 66, 100, 125 MHz
• The Sega Saturn was one of the earliest widespread users
• Becoming common in PCs now

Rambus

Rambus is a revolutionary new DRAM interface that combines multiple banks per chip with a high-speed bus interface. Two earlier generations (“Rambus” and “Concurrent Rambus”) have been superseded by the latest “Direct Rambus” protocol. Intel is pushing this to be the high-end PC memory technology by 2000.

• 400 MHz clock, data transferred on both edges
• 16-bit bus
• Result: 1.6 Gbytes per second on one “channel”
• High-end servers will have multiple channels
• 16 banks per chip: if enough accesses go to different banks, a single chip can provide the full 1.6 GB/s
• A nice solution to the granularity problem: what is the minimum memory system size/increment given very dense chips?
• SDRAM vendors are using the same two-edge trick in new “double data rate” (DDR) SDRAMs
• Sync-Link DRAM (SLDRAM) uses similar ideas to Rambus (with a few key changes, mostly at the electrical level), but is standards-based instead of proprietary
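The 1.6 GB/s channel figure follows directly from the clock, bus width, and two-edge signaling. A quick sketch of that arithmetic (the 100 MHz / 64-bit comparison point is an illustrative SDRAM configuration, not from the notes):

```python
# Sketch of the Direct Rambus bandwidth arithmetic: a 400 MHz clock with
# data transferred on both edges over a 16-bit bus.

def channel_bandwidth_gb_s(clock_mhz: float, bus_bits: int,
                           edges: int = 2) -> float:
    """Peak bandwidth in Gbytes/s: transfers per second x bytes per transfer."""
    transfers_per_s = clock_mhz * 1e6 * edges
    return transfers_per_s * (bus_bits / 8) / 1e9

print(channel_bandwidth_gb_s(400, 16))            # -> 1.6 GB/s per channel
# For comparison, a hypothetical single-edge 100 MHz, 64-bit SDRAM bus:
print(channel_bandwidth_gb_s(100, 64, edges=1))   # -> 0.8 GB/s
```

Note the trade-off: Rambus reaches twice the bandwidth of the wide SDRAM bus while using a quarter of the data pins, which is what makes a narrow high-speed channel attractive for dense chips.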

