Memories Notes
OVERVIEW
Read only memory devices are a special case of memory where, in normal system operation, the
memory is read but not changed. Read only memories are non-volatile, that is, stored informa-
tion is retained when the power is removed. The main read only memory devices are listed below:
Mask programmable read-only memories (ROMs) are the least expensive type of solid state
memory. They are primarily used for storing video game software and fixed data for electronic
equipment, such as fonts for laser printers, dictionary data in word processors, and sound data in
electronic musical instruments.
ROM programming is performed during IC fabrication. Several process methods can be used to
program a ROM, including programming at the channel-implant level and at the metal contact level.
The choice of these is a trade-off between process complexity, chip size, and manufacturing cycle
time. A ROM programmed at the metal contact level will have the shortest manufacturing cycle
time, as metallization is one of the last process steps. However, the size of the cell will be larger.
Figure 9-2 shows a ROM array programmed by channel implant. The transistor cell will have
either a normal threshold (enhancement-mode device) or a very high threshold (higher than VCC
to assure the transistor will always be off). The cell array architecture is NOR. The different types
of ROM architectures (NOR, NAND, etc.) are detailed in the flash memory section (Section 10) as
they use the same principle.
Figure 9-3 shows an array of storage cells (NAND architecture). This array consists of single
transistors, noted as devices 1 through 8 and 11 through 18, that are programmed with either a
normal threshold (enhancement-mode device) or a negative threshold (depletion-mode device).
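The channel-implant programming just described can be modeled in a few lines (the threshold values, supply voltage, and read polarity below are illustrative assumptions; the text only fixes that a raised-threshold cell never turns on):

```python
# Toy model of reading a NOR-architecture ROM (illustrative values, not from the text).
# Programming fixes each cell's threshold: a normal (enhancement) threshold cell turns
# on when its word line is selected; a raised-threshold cell never does.
VCC = 5.0
VT_NORMAL, VT_RAISED = 1.0, 9.0          # assumed threshold voltages

rom = [[VT_NORMAL, VT_RAISED],           # the implant mask encodes the data
       [VT_RAISED, VT_NORMAL]]

def read(row, col):
    # The selected word line is driven to VCC; the cell conducts only if VCC
    # exceeds its threshold, pulling the precharged bit line low ("0").
    conducts = VCC > rom[row][col]
    return 0 if conducts else 1

assert read(0, 0) == 0 and read(0, 1) == 1
```

The data pattern is therefore frozen into the implant mask itself, which is why a mask ROM cannot be altered after fabrication.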
The cell size for the ROM is potentially the smallest of any type of memory device, as it is a single
transistor. A typical 8Mbit ROM would have a cell size of about 4.5µm2 for a 0.7µm feature size
process, and a chip area of about 76mm2. An announced 64Mbit ROM, manufactured with a
0.6µm feature size, has a 1.23µm2 cell on a 200mm2 die.
The ROM process is the simplest of all memory processes, usually requiring only one layer of
polysilicon and one layer of metal. There are no special film deposition or etch requirements, so
yields are the highest among all the equivalent-density memory chips.
[Figure 9-2 labels: ground diffusion; selective implant to raise VT; ROW 1 and ROW 2 (polysilicon); transistors T1–T4; drain contacts shared by 2 bits; drain diffusion; ground; metal columns (not drawn)]
[Figure 9-3 labels: word lines WORD 1/11 through WORD 8/18 (devices 1–8 and 11–18); control line (devices 9 and 19); select line (devices 10 and 20); bit line]
Multimedia Card
In 1996, Siemens announced the introduction of a new solid-state memory chip technology that
enables the creation of a multimedia card that is sized 37mm x 45mm x 1.4mm, or roughly 40 per-
cent the size of a credit card. It is offered with either 16Mbit or 64Mbit of ROM.
EPROM
EPROM (UV Erasable Programmable Read Only Memory) is a special type of ROM that is pro-
grammed electrically and yet is erasable under UV light.
The EPROM device is programmed by forcing an electrical charge on a small piece of polysilicon
material (called the floating gate) located in the memory cell. When this charge is present on this
gate, the cell is “programmed,” usually a logic “0,” and when this charge is not present, it is a logic
“1.” Figure 9-4 shows the cell used in a typical EPROM. The floating gate is where the electrical
charge is stored.
[Figure 9-4 labels: first-level polysilicon (floating) gate; second-level polysilicon at +VG; gate oxide; field oxide; stored electrons; N+ diffusion; P- substrate]
Prior to being programmed, an EPROM has to be erased. To erase the EPROM, it is exposed to an
ultraviolet light for approximately 20 minutes through a quartz window in its ceramic package.
After erasure, new information can be programmed to the EPROM. After writing the data to the
EPROM, an opaque label has to be placed over the quartz window to prevent accidental erasure.
Programming is accomplished through a phenomenon called hot electron injection. High voltages
are applied to the select gate and drain connections of the cell transistor. The select gate of the
transistor is pulsed “on” causing a large drain current to flow. The large bias voltage on the gate
connection attracts electrons that penetrate the thin gate oxide and are stored on the floating gate.
The following explanation of EPROM floating gate transistor characteristic theory also applies to
EEPROM and flash devices. Figures 9-5 (a) and (b) show the cross section of a conventional MOS
transistor and a floating gate transistor, respectively. The upper gate in Figure 9-5 (b) is the con-
trol gate and the lower gate, completely isolated within the gate oxide, is the floating gate.
Figure 9-5. Cross Section of a Conventional MOS Transistor and a Floating-Gate MOS Transistor
CFG and CFS are the capacitances between the floating gate and the control gate and substrate,
respectively. VG and VF are the voltages of the control gate and the floating gate, respectively.
-QF is the charge in the floating gate. (As electrons have a negative charge, a negative sign was
added). In an equilibrium state, the sum of the charges equals zero.
VF = [CFG/(CFG + CFS)]·VG − QF/(CFG + CFS)
VTC is the threshold voltage of the conventional transistor, and VTCG is the threshold voltage of
the floating gate transistor.
VTCG = [CFG/(CFG + CFS)]·VTC − QF/(CFG + CFS)

VTCG = VTO − QF/CG

where VTO = [CFG/(CFG + CFS)]·VTC and CG = CFG + CFS.
The threshold voltage of the floating gate transistor (VTCG) will be VTO (around 1V) plus a term
depending on the charge trapped in the floating gate. If no electrons are in the floating gate, then
VTCG = VTO (around 1V). If electrons have been trapped in the floating gate, then VTCG = VTO − QF/CG
(around 8V for a 5V part). This voltage is process and design dependent. Figure 9-6
shows the threshold voltage shift of an EPROM cell before and after programming.
ΔVT = −QF/CG
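A quick numeric sketch of these relations (the capacitances, VTC, and the stored charge are illustrative assumptions; QF is treated here as the signed floating-gate charge, negative when electrons are stored, so that −QF/CG is a positive shift):

```python
C_FG = 1.0e-15   # F, floating gate to control gate capacitance (assumed)
C_FS = 0.5e-15   # F, floating gate to substrate capacitance (assumed)
V_TC = 1.5       # V, threshold of the equivalent conventional transistor (assumed)

C_G = C_FG + C_FS
V_TO = V_TC * C_FG / C_G                 # threshold with no stored charge

def v_tcg(q_f):
    """Threshold seen from the control gate for floating-gate charge q_f."""
    return V_TO - q_f / C_G

assert abs(v_tcg(0.0) - V_TO) < 1e-12    # no trapped electrons: VTCG = VTO

q_f = -1.05e-14                          # C, stored electrons (signed, assumed)
assert round(v_tcg(q_f) - V_TO, 2) == 7.0   # -QF/CG raises the threshold toward ~8V
```

With these assumed values, an erased cell sits around 1V and a programmed cell around 8V, matching the orders of magnitude quoted in the text.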
The programming (write cycle) of an EPROM takes several hundred milliseconds. Usually a
byte—eight bits—is addressed with each write cycle. The read time is comparable to that of fast
ROMs and DRAMs (i.e., several tens of nanoseconds). In those applications where programs are
stored in EPROMs, the CPU can run at normal speeds.
Field programmability is the EPROM’s main advantage over the ROM. It allows the user to buy
mass-produced devices and program each device for a specific need. This characteristic also
makes the EPROM ideal for small-volume applications, as the devices are usually programmed in
very small quantities. Also, the systems supplier can program any last minute upgrades to the
program just before shipment. EPROM cells may be configured in the NAND structure shown
previously, or, more commonly, in the NOR configuration shown in Figure 9-7.
[Figure 9-7 labels: BIT 1; BIT 2; select gate; floating gate. Source: ICE, "Memory 1997" 19051]
EPROMs were created in the 1970s and have long been the cornerstone of the non-volatile
memory market. But the development of flash memory devices (see Section 10) will lead to a loss
of EPROM market share. EPROM uses a mature technology and design and is in the declining part
of its lifecycle. For this reason there is not a lot of R&D expenditure made for EPROM devices.
Figure 9-8 shows a cross section of a 1Mbit EPROM cell from two different manufacturers. The
main difference between the processes is the polysilicon gate. One manufacturer uses a polycide
to improve the speed.
The cell size of the EPROM is also relatively small. The EPROM requires one additional polysili-
con layer, and will usually have slightly lower yields due to the requirement for nearly perfect
(and thin) gate oxides.
These factors, plus the fact that an EPROM is encased in a ceramic package with a quartz window,
make the EPROM average selling price three to five times the price of the mask ROM. Figure 9-9
shows the main feature sizes of 1Mbit EPROMs analyzed by ICE's laboratory.
In most applications, EPROMs are programmed one time and will never have to be erased. To
reduce the cost for these applications, EPROMs may be manufactured in opaque plastic packages
since the standard ceramic package of an EPROM is expensive. EPROMs that are programmed one
time for a specific use and cannot be erased are referred to as One Time Programmable (OTP) devices.
EEPROM
EEPROM (Electrically Erasable Programmable ROM) offers users excellent capabilities and
performance. Only one external power supply is required, since the high voltage for
program/erase is internally generated. Write and erase operations are performed on a
byte-by-byte basis.
The EEPROM uses the same principle as the UV-EPROM. Electrons trapped in a floating gate will
modify the characteristics of the cell, and so a logic “0” or a logic “1” will be stored.
The EEPROM is the memory device with the least standardized cell design. The most common
cell is composed of two transistors. The storage transistor has a floating gate (similar to the
EPROM storage transistor) that will trap electrons. In addition, there is an access transistor,
which is required for operations. Figure 9-10 shows the voltages applied on the memory
cell to program/erase a cell. Note that an EPROM cell is erased when electrons are removed from
the floating gate, while the EEPROM cell is erased when electrons are trapped in the floating
gate. To make the products electrically compatible, the logic levels of both types of product give
a "1" for the erased state and a "0" for the programmed state. Figure 9-11 shows the electrical
differences between EPROM and EEPROM cells.
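The practical difference in erase granularity can be sketched with a toy model (the class and method names are hypothetical; the logic convention, erased reads "1" and programmed reads "0", follows the text):

```python
class Eprom:
    """Whole-array UV erase; electrical (hot-electron) programming per bit."""
    def __init__(self, nbits):
        self.bits = [1] * nbits           # erased state reads "1"
    def uv_erase(self):
        self.bits = [1] * len(self.bits)  # only bulk erase is possible
    def program(self, i):
        self.bits[i] = 0                  # programmed state reads "0"

class Eeprom(Eprom):
    """Adds electrical erase on a byte-by-byte basis."""
    def erase_byte(self, byte_i):
        for b in range(8 * byte_i, 8 * byte_i + 8):
            self.bits[b] = 1

mem = Eeprom(16)
mem.program(0); mem.program(9)
mem.erase_byte(1)                         # clears bit 9, leaves bit 0 programmed
assert mem.bits[0] == 0 and mem.bits[9] == 1
```

The point of the sketch is only the granularity: the EPROM must be erased as a whole (under UV), while the EEPROM can electrically rewrite a single byte in place.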
Parallel EEPROM
There are two distinct EEPROM families: serial and parallel access. The serial access represents
90 percent of the overall EEPROM market, and parallel EEPROMs about 10 percent. Parallel
devices are available in higher densities (≥256Kbit), are generally faster, offer high endurance and
reliability, and are found mostly in the military market. They are pin compatible with EPROMs
and flash memory devices. Figure 9-12 shows feature sizes of three 1Mbit parallel EEPROMs from
different manufacturers, analyzed by ICE’s laboratory. Figures 9-13 to 9-15 show photographs
and schematics of the respective cells. It is interesting to see the wide differences in these cells.
Serial EEPROM
Serial EEPROMs are less dense (typically from 256 bit to 256Kbit) and are slower than parallel
devices. They are much cheaper and used in more “commodity” applications.
[Figure 9-10: erase and program bias conditions on the CL, SG, CG, and S lines (SG pulsed to 20V, other lines at 0V or don't-care), with a row for unselected cells. Source: ICE, "Memory 1997" 17554A]
[Cross-section labels: aluminum; cap; barrier; pre-metal glass; poly 2; N+; poly 1]
Serial access EEPROMs feature low pin count; typically they are packaged in an 8-pin package. As
illustrated in Figure 9-16, Xicor's 128Kbit serial EEPROM uses the 8 pins in the following manner:
Serial EEPROMs use data transfer interface protocols for embedded control applications. These
protocols include the Microwire bus, the I2C bus, the XI2C (Extended I2C) bus, and the SPI (Serial
Peripheral Interface) bus interfaces.
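As a sketch of how a serial protocol keeps the pin count low, the classic I2C-style "random read" can be mimicked against a stub device (the device model, addressing width, and byte order are assumptions, not taken from the text): a dummy write phase loads the word address, then a repeated start switches the bus to reading.

```python
class FakeSerialEEPROM:
    """Stand-in for an I2C bus plus EEPROM so the access sequence can be shown."""
    def __init__(self, size_bytes=2048):          # e.g., a 16Kbit part
        self.mem = bytearray(size_bytes)
        self.addr = 0
    def i2c_write(self, payload):                 # dummy write: load word address
        self.addr = int.from_bytes(payload, "big") % len(self.mem)
    def i2c_read(self, n):                        # sequential read from the address
        out = bytes(self.mem[self.addr:self.addr + n])
        self.addr += n
        return out

def random_read(dev, word_addr, n=1):
    dev.i2c_write(word_addr.to_bytes(2, "big"))   # set internal address pointer
    return dev.i2c_read(n)                        # repeated start, then read

dev = FakeSerialEEPROM()
dev.mem[0x0123] = 0xAB
assert random_read(dev, 0x0123) == b"\xab"
```

Because address and data travel serially over the same two wires, the package needs only power, clock, data, and a few select/protect pins.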
There continues to be an ongoing effort to reduce the size of serial EEPROMs. Microchip
Technology, for example, introduced a 128bit serial EEPROM in a five-lead SOT-23 package.
[Figure labels (EEPROM cell photographs and schematics): poly 1 program line; poly 2 word line; bit line; transistors Q1 and Q2; floating gate(s); passivation; metal 2; interlevel glass; program line; select gate; N+ S/D; metal polycide; poly; silicon nitride. Source: ICE, "Memory 1997" 22467]
Figure 9-17 shows feature sizes of three serial EEPROMs from different manufacturers that were
analyzed by ICE's laboratory. Note that larger cell sizes accompany low-density EEPROM
devices. When building an EEPROM chip that contains sense amplifiers, controllers, and other
peripheral circuitry, cell size is not as great a factor at low (1Kbit, 2Kbit) densities. At larger den-
sities, the size of the cell array is more critical. It becomes a larger portion of the chip. Therefore,
greater consideration must be given to the size of the cell.
[Figure 9-16: block diagram of Xicor's 128Kbit serial EEPROM. Labels: pins WP, SCL, S0-S2, VCC, VSS; start cycle, timing and control, H.V. generation; control logic; slave address register and comparator; word address counter; XDEC and YDEC decoders; EEPROM array; 8-bit data register with DOUT and ACK. Source: Xicor/ICE, "Memory 1997" 22599]
This size impact is illustrated in Figure 9-18 using a 1Kbit serial EEPROM example from SGS-
Thomson. The cell array represents only 11 percent of the total surface of the chip.
Figures 9-19 and 9-20 show additional EEPROM cells. As noted, there is no design standard for
this type of cell. In laying out the EEPROM cell, the designer must take into consideration the ele-
ments of size, performance, and process complexity.
[Figure labels (Figures 9-18 to 9-20): cell array; word line WL; bits A and B; tunnel oxide device; passivation; bit line; reflow glass; poly 2 program and word lines; N+ S/D; poly 1 floating gate; pre-metal glass; poly 3 word line; ground/program cathode]
The goal of the multi-level cell (MLC) is to store more than one bit of information in a single cell.
Much work has already been done regarding MLC as applied to flash memory devices. The typical
development for digital flash memories is to store four different levels in the same cell, and
thus divide the number of cells by two (two bits encode four values: 00, 01, 10, and 11).
However, for several years now, Information Storage Devices (ISD), a San Jose-based company,
has proposed multi-level EEPROMs for analog storage. ISD presented a 480Kbit
EEPROM at the 1996 ISSCC conference. The multi-level storage cell is able to store 256 different
levels of charge between 0V and 2V. This means the cell needs to have a 7.5mV resolution. The
256 different levels in one cell correspond to eight bits of information. A comparable digital
implementation requires 3.84Mbit of memory elements to store the same amount of information.
The information stored will not be 100 percent accurate, but it is good enough for audio applications,
which allow some errors.
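Taking the figures above at face value, the level arithmetic can be checked in a few lines (assuming the 480K figure counts cells in decimal kilobits; note the computed step is about 7.8mV, in the same range as the 7.5mV quoted):

```python
import math

levels = 256                       # distinct charge levels per cell (from the text)
bits_per_cell = int(math.log2(levels))
assert bits_per_cell == 8          # 256 levels encode eight bits

step_mv = 2.0 / levels * 1000      # 0V..2V window divided into 256 levels
assert 7.0 < step_mv < 8.0         # ~7.8 mV per level

analog_cells = 480_000             # one analog cell per stored location (assumed)
digital_bits = analog_cells * bits_per_cell
assert digital_bits == 3_840_000   # the 3.84Mbit digital equivalent
```

The factor-of-eight density gain is exactly why analog multi-level storage tolerates some inaccuracy: each extra bit per cell halves the required cell count but also halves the voltage margin per level.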
Invited Paper
The most relevant phenomenon of this past decade in the field of semiconductor memories has been the explosive growth of the Flash memory market, driven by cellular phones and other types of electronic portable equipment (palm top, mobile PC, mp3 audio player, digital camera, and so on). Moreover, in the coming years, portable systems will demand even more nonvolatile memories, either with high density and very high writing throughput for data storage application or with fast random access for code execution in place. The strong consolidated know-how (more than ten years of experience), the flexibility, and the cost make the Flash memory a largely utilized, well-consolidated, and mature technology for most of the nonvolatile memory applications. Today, Flash sales represent a considerable amount of the overall semiconductor market. Although in the past different types of Flash cells and architectures have been proposed, today two of them can be considered as industry standard: the common ground NOR Flash, that due to its versatility is addressing both the code and data storage segments, and the NAND Flash, optimized for the data storage market. This paper will mainly focus on the development of the NOR Flash memory technology, with the aim of describing both the basic functionality of the memory cell used so far and the main cell architecture consolidated today. The NOR cell is basically a floating-gate MOS transistor, programmed by channel hot electron and erased by Fowler–Nordheim tunneling. The main reliability issues, such as charge retention and endurance, will be discussed, together with the understanding of the basic physical mechanisms responsible. Most of these considerations are also valid for the NAND cell, since it is based on the same concept of floating-gate MOS transistor. Furthermore, an insight into the multilevel approach, where two bits are stored in the same cell, will be presented. In fact, the exploitation of the multilevel approach at each technology node allows the increase of the memory efficiency, almost doubling the density at the same chip size, enlarging the application range, and reducing the cost per bit. Finally, the NOR Flash cell scaling issues will be covered, pointing out the main challenges. The Flash cell scaling has been demonstrated to be really possible and to be able to follow the Moore's law down to the 130-nm technology generations. The technology development and the consolidated know-how is expected to sustain the scaling trend down to the 90- and 65-nm technology nodes as forecasted by the International Technology Roadmap of Semiconductors. One of the crucial issues to be solved to allow cell scaling below the 65-nm node is the tunnel oxide thickness reduction, as tunnel thinning is limited by intrinsic and extrinsic mechanisms.

Keywords—Flash evolution, Flash memory, Flash technology, floating-gate MOSFET, multilevel, nonvolatile memory, NOR cell, scaling.

Manuscript received July 1, 2002; revised January 5, 2003. The authors are with the Central Research and Development Department, Non-Volatile Memory Process Development, STMicroelectronics, 20041 Agrate Brianza, Italy (e-mail: [email protected]). Digital Object Identifier 10.1109/JPROC.2003.811702

I. INTRODUCTION

The semiconductor market, for the long term, has been continuously increasing, even if with some valleys and peaks, and this growing trend is expected to continue in the coming years (see Fig. 1). A large amount of this market, about 20%, is given by the semiconductor memories, which are divided into the following two branches, both based on the complementary metal–oxide–semiconductor (CMOS) technology (see Fig. 2).
– The volatile memories, like SRAM or DRAM, that although very fast in writing and reading (SRAM) or very dense (DRAM), lose the data contents when the power supply is turned off.
– The nonvolatile memories, like EPROM, EEPROM, or Flash, that are able to balance the less-aggressive (with respect to SRAM and DRAM) programming and reading performances with nonvolatility, i.e., with the capability to keep the data content even without power supply.
Thanks to this characteristic, the nonvolatile memories offer the system a different opportunity and cover a wide range of applications, from consumer and automotive to computer and communication (see Fig. 3).
The different nonvolatile memory families can be qualitatively compared in terms of flexibility and cost (see Fig. 4). Flexibility means the possibility to be programmed and erased many times on the system with minimum granularity (whole chip, page, byte, bit); cost means process complexity and in particular silicon occupancy, i.e., density or, in simpler words, cell size. Considering the flexibility-cost plane, it turns out that Flash offers the best compromise between these two parameters, since they have the smallest cell size
But the Flash market did not take off until this technology was proven to be reliable and manufacturable. In the late 1990s, the Flash technology exploded as the right nonvolatile memory for code and data storage, mainly for mobile applications. Starting from 2000, the Flash memory can be considered a really mature technology: more than 800 million units of 16-Mb equivalent NOR Flash devices were sold in that year.
In Fig. 6, the Flash market is reported and compared with the DRAM and SRAM one [10]. It can be seen that the Flash market became and has stayed bigger than the SRAM one since 1999. Moreover, the Flash market is forecasted to be above $20 billion in three or four years from now, reaching the DRAM market amount, and only smoothly following the DRAM oscillating trend, driven by the personal computer market. In fact, portable systems for communications and consumer markets, which are the drivers of the Flash market, are forecasted to continuously grow in the coming years.
In the following, we briefly describe the basics of the Flash cell functionality.

A. Basic Concept

A Flash cell is basically a floating-gate MOS transistor (see Fig. 7), i.e., a transistor with a gate completely surrounded by dielectrics, the floating gate (FG), and electrically governed by a capacitively coupled control gate (CG). Being electrically isolated, the FG acts as the storing electrode for the cell device; charge injected in the FG is maintained there, allowing modulation of the "apparent" threshold voltage (i.e., seen from the CG) of the cell transistor.
Obviously the quality of the dielectrics guarantees the nonvolatility, while the thickness allows the possibility to program or erase the cell by electrical pulses. Usually the gate dielectric, i.e., the one between the transistor channel and the FG, is an oxide in the range of 9–10 nm and is called "tunnel oxide" since FN electron tunneling occurs through it. The dielectric that separates the FG from the CG is formed by a triple layer of oxide–nitride–oxide (ONO). The ONO thickness is in the range of 15–20 nm of equivalent oxide thickness. The ONO layer as interpoly dielectric has been introduced in order to improve the tunnel oxide quality. In fact, the
Fig. 9. (a) NOR Flash array equivalent circuit. (b) Flash memory cell cross section.
use of thermal oxide over polysilicon implies growth temperature higher than 1100 °C, impacting the underneath tunnel oxide. High-temperature postannealing is known to damage the thin oxide quality.
If the tunnel oxide and the ONO behave as ideal dielectrics, then it is possible to schematically represent the energy band diagram of the FG MOS transistor as reported in Fig. 8. It can be seen that the FG acts as a potential well for the charge. Once the charge is in the FG, the tunnel and ONO dielectrics form potential barriers.
The neutral (or positively charged) state is associated with the logical state "1" and the negatively charged state, corresponding to electrons stored in the FG, is associated with the logical "0."
The "NOR" Flash name is related to the way the cells are arranged in an array, through rows and columns in a NOR-like structure. Flash cells sharing the same gate constitute the so-called wordline (WL), while those sharing the same drain electrode (one contact common to two cells) constitute the bitline (BL). In this array organization, the source electrode is common to all of the cells [Fig. 9(a)].
A scanning electron microscope (SEM) cross section along a bitline of a Flash array is reported in Fig. 9(b), where three cells can be observed, sharing two by two the drain contact and the sourceline. This picture can be better understood considering the layout of a cell (see Fig. 10) and the two schematic cross sections, along the y direction (bitline) and the x direction (wordline). The cell area is given by the x pitch times the y pitch. The x pitch is given by the active area width and space, considering also that the FG must overlap the field oxide. The y pitch is constituted by the cell gate length, the contact-to-gate distance, half contact, and half sourceline. It is evident, as reported in Fig. 9(b), that both contact and sourceline are shared between two adjacent cells.

B. Reading Operation

The data stored in a Flash cell can be determined measuring the threshold voltage of the FG MOS transistor. The best and fastest way to do that is by reading the current driven by the cell at a fixed gate bias. In fact, as schematically reported in Fig. 11, in the current–voltage plane two cells, respectively logic "1" and "0," exhibit the same transconductance curve but are shifted by a quantity, the threshold voltage shift (ΔVT), that is proportional to the stored electron charge.
Hence, once a proper charge amount and a corresponding ΔVT is defined, it is possible to fix a reading voltage in such
Fig. 10. The NOR Flash cell. (a) Basic layout. (b) Updated Flash
product (64-Mb, 1.8-V Dual bank). (c) and (d) are, respectively,
the schematic cross section along bitline (y pitch) and wordline
(x pitch).
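The fixed-gate-bias read described in the Reading Operation discussion can be sketched as follows (the bias, reference current, and linear transconductance model are illustrative assumptions, not values from the text):

```python
V_READ = 4.5      # V, fixed read gate bias (assumed)
I_REF = 20e-6     # A, reference-cell current (assumed)
K = 40e-6         # A/V, crude linear transconductance above threshold (assumed)

def cell_current(vt):
    return max(0.0, K * (V_READ - vt))

def read_bit(vt):
    # Erased cell (low VT, "1") drives more current than the reference;
    # programmed cell (VT shifted up by the stored electrons, "0") does not.
    return 1 if cell_current(vt) > I_REF else 0

assert read_bit(2.0) == 1    # erased
assert read_bit(6.5) == 0    # programmed
```

Comparing currents rather than measuring the threshold directly is what makes the read fast: a single sense-amplifier comparison per bit decides the logic state.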
C. Data Retention
As in any nonvolatile memory technology, Flash memories
are specified to retain data for over ten years. This means the
loss of charge stored in the FG must be as minimal as pos-
sible. In updated Flash technology, due to the small cell size,
the capacitance is very small and at an operative programmed
threshold shift—about 2 V—corresponds a number of elec-
trons in the order of 10 to 10 . A loss of 20% in this number
(around 2–20 electrons lost per month) can lead to a wrong Fig. 16. Threshold voltage window closure as a function of
read of the cell and then to a data loss. program/erase cycles on a single cell.
Possible causes of charge loss are: 1) defects in the tunnel
oxide; 2) defects in the interpoly dielectric; 3) mobile ion
contamination; and 4) detrapping of charge from insulating
layers surrounding the FG.
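The electron-count argument above can be sanity-checked numerically (the floating-gate capacitance here is an assumed order of magnitude, not a value from the text):

```python
Q_E = 1.602e-19          # C, elementary charge
c_fg = 1.0e-15           # F, assumed floating-gate capacitance (order of magnitude)
delta_vt = 2.0           # V, operative programmed threshold shift (from the text)

electrons = c_fg * delta_vt / Q_E       # thousands to tens of thousands of electrons
assert 1e3 <= electrons <= 1e5

lost = 0.20 * electrons                 # a 20% loss risks a wrong read
per_month = lost / (10 * 12)            # spread over the ten-year retention spec
assert 2 <= per_month <= 25             # consistent with "2-20 electrons per month"
```

With so few electrons representing a bit, even single-electron-per-week leakage paths matter, which is why tunnel oxide defectivity dominates the retention discussion.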
The generation of defects in the tunnel oxide can be di-
vided into an extrinsic and an intrinsic one. The former is
due to defects in the device structure; the latter to the physical
mechanisms that are used to program and erase the cell. The
tunnel oxidation technology as well as the Flash cell architec-
ture is a key factor for mastering a reliable Flash technology.
The best interpoly dielectric considering both intrinsic properties and process integration issues
has been demonstrated to be a triple layer composed of ONO. For several generations, all Flash
technologies have used ONO as their interpoly dielectric.

Fig. 17. Program and erase time as a function of the cycles number.
The problem of mobile ion contamination has been al-
ready solved on the EPROM technology, taking particular
care with the process control, but in particular using high
phosphorus content in intermediate dielectric as a gettering
element [17], [18]. The process control and the interme-
diate dielectric technology have also been implemented in
the Flash process, obtaining the same good results.
Electrons can be trapped in the insulating layers sur-
rounding the floating gate during wafer processing, as a
result of the so-called plasma damage, or even during the UV
exposure normally used to bring the cell in a well-defined
state at the end of the process. The electrons can subse-
quently detrap with time, especially at high temperature.
The charge variation results in a variation of the floating gate potential and thus in a cell VT decrease, even if no leakage has actually occurred. This apparent charge loss disappears if the process ends with a thermal treatment able to remove the trapped charge.
The retention capability of Flash memories has to be checked by using accelerated tests that usually adopt screening electric fields and hostile environments at high temperature.

Fig. 18. Anomalous SILC modeling. The leakage is caused by a cluster of positive charge generated in the oxide during erase (left-hand side). The multitrap assisted tunneling is used to model SILC: trap parameters are energy and position.

D. Programming/Erasing Endurance

Flash products are specified for 10^5 erase/program cycles. Cycling is known to cause a fairly uniform wear-out of the cell performance, mainly due to tunnel oxide degradation, which eventually limits the endurance characteristics [19]. A typical result of an endurance test on a single cell is shown in Fig. 16. As the experiment was performed applying constant pulses, the variations of program and erase threshold voltage levels are described as "program/erase threshold voltage window closure" and give a measure of the tunnel oxide aging. In real Flash devices, where intelligent algorithms are used to prevent window closing, this effect corresponds to an increase of the program and erase times (see Fig. 17). In particular, the reduction of the programmed threshold with cycling is due to trap generation in the oxide and to interface state generation at the drain side of the channel, which are mechanisms specific to hot-electron degradation.
B. Reading Operation
In order to have a fast reading operation in the NOR cell, a
parallel sensing approach can be used [29]. The cell current,
obtained in reading conditions, is simultaneously compared
with three currents provided by suitable reference cells (see
Fig. 22). The comparison results are then converted to a bi-
nary code, whose content can be 11, 01, 10, or 00, due to the
Fig. 22. Parallel multilevel sensing architecture. multilevel nature. In Fig. 23, we report the threshold voltage
= =
MSB most significant bit; LSB less significant bit. distribution of a 2-b/cell memory. The 11, 10, and 01 cell dis-
tribution will give rise to a different current distribution, mea-
sured at fixed , while the 00 cell distribution does not
butions can be obtained by combining a program-and-verify
drain current as well as the programmed level of a standard
technique with a staircase ramp (see Fig. 21). In fact,
1-b/cell device. High read data rate, via page or burst mode,
this method should theoretically lead to a distribution
is normally supported by large internal read parallelism.
width for any state not larger than . Indeed, neglecting
A parallel sensing approach does not seem transferable
any error due to sense amplifier inaccuracy or voltage fluc-
to 3- or 4-b/cell generations because of the exponential in-
tuations, the last programming pulse applied to a cell will
crease, 2 1, in comparators number, respectively 7 or 15
cause its threshold voltage to be shifted above the program
per cell, that means exponential increase in sensing area and
verify decision level by an amount at most as large as .
current consumption. At this moment, a serial sensing ap-
It follows that by decreasing , it is possible to in-
proach, e.g., dichotomic, or a mixed serial-parallel is consid-
crease the programming accuracy. Obviously, this is paid in
ered the more suitable approach. Serial sensing is also useful
terms of a larger number of programming pulses, together with verify phases, and, therefore, of a longer programming time. Hence, the best accuracy/time tradeoff must be chosen for each case considering the application specification.

However, high programming throughput, equal to that of 1-b/cell devices, is normally achieved via a large internal program parallelism, which is possible because cells need a low programming current in ML staircase programming. To do that, ML devices operate with a program write buffer, whose typical length is 32–64 bytes, i.e., 128–256 cell data length. Also, the evolution to 3–4 b/cell will not have an impact on programming throughput. In fact, program pulses and verify phases increase proportionally with the number of bits per cell, thus keeping the effective byte programming time roughly constant.

Despite a not-negligible programming current, another advantage of using CHE programming for multilevel devices is that it avoids the appearance of erratic bits, which can instead be a potential failure mode affecting FN programming. In fact, erratic bit behavior was observed in the FN erase of standard NOR memories [27] but, by its nature, it should be present in every tunneling process [28].

…for a 2-b/cell device when high-speed random access is not necessary, e.g., in Flash Card applications.

C. Data Retention

One of the main concerns about multilevel is the reduced margin against charge loss, compared with the 1-b/cell approach. We can basically divide the problem of data retention into two different issues.

The first is related to the extrinsic charge loss, i.e., to a single bit that can randomly behave differently from the average and that usually forms a tail in a standard distribution. It is well known that extrinsic charge loss strongly depends on the tunnel oxide retention electric field, and that this issue can become more critical if an enhanced cell threshold range has to be used to allocate the 2^b levels [30]. This problem is usually solved with the introduction of an error correction code (ECC), whose correction power must be chosen as a function of the technology and of the specification required of the memory products.

The second is related to the intrinsic charge loss, i.e., to the behavior of the Gaussian part of a cell distribution.
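The accuracy/time tradeoff of the staircase program-and-verify scheme discussed above (tighter verify windows require smaller steps, hence more pulses) can be sketched numerically. The voltages, step sizes, and the roughly one-to-one Vt shift per step are illustrative assumptions, not device values.

```python
# Hypothetical sketch of multilevel program-and-verify: staircase gate
# pulses raise the cell threshold voltage (Vt) until it enters the
# verify window for the target level. All numbers are illustrative.

def program_cell(vt_start, target_low, target_high, step=0.2, gain=1.0):
    """Apply staircase pulses until Vt lands in [target_low, target_high].

    Returns (final_vt, pulse_count). 'gain' models the roughly
    one-to-one Vt shift per gate-voltage step in staircase programming.
    """
    vt = vt_start
    pulses = 0
    while vt < target_low:          # verify phase after each pulse
        vt += step * gain           # each pulse shifts Vt by ~one step
        pulses += 1
    assert vt <= target_high, "overshoot: step too coarse for this window"
    return vt, pulses

# A tighter verify window (more levels per cell) forces a smaller step,
# hence more pulses and a longer programming time:
_, pulses_coarse = program_cell(1.0, 3.0, 3.5, step=0.25)
_, pulses_fine = program_cell(1.0, 3.0, 3.1, step=0.05)
```

This is why a wide internal write buffer is used: the per-cell pulse count grows with level count, and only parallelism keeps the effective byte programming time constant.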
Fig. 27. Triple well structure cross section: schematic (left side) and SEM (right side).
beyond 64 Mb will be realized, entering the Flash in the gigabit era. The sectorization is becoming more complex, and dual or multiple bank devices have already been presented. In these devices, different groups of sectors (banks) can be managed differently: at the same time, one sector belonging to a bank can be read while another one, inside a different bank, can be programmed or erased. Also, following the general trend of reducing the power supply, the device supply is scaling to 1.8 V (with the consequent difficulty of internally generating high voltages starting from this low supply voltage value) and will go down to 1.2 V. Another issue, becoming more and more important, is the high data throughput, in particular considering the density increase. Burst mode is often used in order to speed up the read operation and quickly download the software content, reaching up to 50 MB/s.

The introduction of the different generations, as well as the reduction of the cell size, has been made possible by the development of Flash technology and process, and of cell architecture.

Concerning the process architecture, all the main technology steps that have allowed the evolution of CMOS technology have also been used for Flash. In Fig. 26, the different cell cross sections as a function of the technology node are reported. For every generation, the main innovative steps introduced are pointed out. It turns out that the evolution of the different generations has been sustained by an increased process complexity, from the one-gate-oxide, one-metal process with standard local oxidation of silicon isolation at the 0.8-µm technology node, to the two gate oxides, three metals, and shallow trench isolation of the 0.13-µm node. In between is the introduction of the tungsten plug, of self-aligned silicided junctions and gates, and the wide use of chemical mechanical polishing steps. But one of the most crucial technologies for Flash evolution was the high-energy implantation development, which has allowed the introduction of the triple well architecture (see Fig. 27). With this process module, further development of the single-voltage products has been possible, allowing easy management of the negative voltage required to erase the cell and, furthermore, the possibility to completely change the erasing scheme of the cell.

In fact, as reported in Fig. 28, the cell programming and erasing applied voltages have been changed as a function of the different generations, always staying inside CHE programming and FN erasing. The first generation of cells
Fig. 29. NOR cell scaling. The basic layout has remained unchanged through the different generations.
Fig. 30. NOR Flash cell scaling trends for cell area (right y axis) and cell aspect ratio (left y axis). Both values are normalized to the 130-nm technology node.
was erased by applying the high voltage to the source junction and then extracting electrons from the FG–source overlap region (source erase scheme). This approach was too expensive in terms of parasitic current, as the working conditions were very close to the junction breakdown. Moving to the second generation with the single-voltage devices, the voltage drop between the source and the FG was divided, applying a negative voltage to the control gate and lowering the source bias to the external supply voltage (negative gate source erase scheme).

Finally, with the exploitation of the triple well also for the array, the erasing potential is now divided between the negative CG and the positive bulk (the isolated p-well) of the array, moving the tunneling region from the source to the whole cell channel (channel erase scheme). In this way, electrons are extracted from the FG all along the channel without any further parasitic current contribution from the source junction, consequently reducing the erase current by about three orders of magnitude; the latter is a clear benefit for battery saving in portable low-voltage applications.

The NOR Flash cell is forecast to scale further following the International Technology Roadmap for Semiconductors (ITRS) [32]. The introduction of the 130-nm technology node occurred in 2002–2003 with a cell size of 0.16 µm² [33], following the 10F² golden rule for cell area scaling, where F is the feature size of the technology node. The representation of the memory cell size in terms of a number of F² is a usual way to compare different technologies with the same metric; for example, the DRAM cell size is today quoted to stay in the range of 6–8 F².

The next technology step for the NOR Flash will be the 90-nm technology node in 2004–2005. The cell size is expected to stay in the range of 10–12 F², translating to a cell area of 0.1–0.08 µm². As reported again in Fig. 29, the cell's basic layout and structure have remained unchanged through the different generations. The area scales through the scaling of both the x and y pitch. Basically, this must be done by contemporarily reducing the active device dimensions, effective length (L_eff) and width (W_eff), and the passive elements, such as contact dimension, contact-to-gate distance, and so on.

For future technology nodes, i.e., the 65 nm in 2007 and the 45 nm in 2010, as forecast by the ITRS, the Flash cell reduction will face challenging issues. In fact, while the passive elements will follow the standard CMOS evolution, benefiting from all the technology steps and process modules proposed for CMOS logic (like advanced lithography for contact size, and copper for metallization at very tight pitch), the active elements will be limited in their scaling. In particular, the effective channel length will be limited by the possibility to further scale the active dielectrics, i.e., the tunnel oxide and the interpoly ONO. As already presented in Section III, tunnel oxide thickness scaling is limited by intrinsic issues related to Flash cell reliability, in particular charge retention, especially after many writing cycles. Although direct tunneling, which prevents the ten-year retention time, occurs at 6–7 nm, SILC considerations push the tunnel thickness limit to no less than 8–9 nm.
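The F²-based sizing quoted above can be checked with a few lines of arithmetic; the helper below is illustrative only.

```python
# Numeric check of the ~10F^2 cell-area rule (F = technology node /
# feature size). Values are approximate, for illustration only.

def cell_area_um2(feature_nm, f2_factor=10.0):
    """Cell area in µm² given feature size in nm and an area factor in F²."""
    f_um = feature_nm / 1000.0
    return f2_factor * f_um * f_um

area_130 = cell_area_um2(130)            # ~0.169 µm², close to the cited 0.16 µm²
area_90_low = cell_area_um2(90, 10.0)    # ~0.081 µm²
area_90_high = cell_area_um2(90, 12.0)   # ~0.097 µm², inside the cited 0.1–0.08 µm² range
```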
OVERVIEW
An SRAM (Static Random Access Memory) is designed to fill two needs: to provide a direct
interface with the CPU at speeds not attainable by DRAMs and to replace DRAMs in systems
that require very low power consumption. In the first role, the SRAM serves as cache memory,
interfacing between DRAMs and the CPU. Figure 8-1 shows a typical PC microprocessor
memory configuration.
The second driving force for SRAM technology is low power applications. In this case, SRAMs
are used in most portable equipment because the DRAM refresh current is several orders of mag-
nitude more than the low-power SRAM standby current. For low-power SRAMs, access time is
comparable to a standard DRAM. Figure 8-2 shows a partial list of Hitachi’s SRAM products and
gives an overview of some of the applications where these SRAMs are found.
The SRAM cell consists of a bi-stable flip-flop connected to the internal circuitry by two access
transistors (Figure 8-3). When the cell is not addressed, the two access transistors are turned off and the data is held in a stable state, latched within the flip-flop.
[Figure 8-2: Hitachi asynchronous and synchronous SRAM products plotted by access time (2ns to 20ns) against device density (64Kbit to 4Mbit), spanning PC cache and non-PC cache applications. Source: Hitachi/ICE, "Memory 1997"]
[Figure 8-3: SRAM cell — a bi-stable flip-flop connected through two access transistors to the word line and to the bit-line pair B and B, which feed the sense amplifier. Source: ICE, "Memory 1997"]
The flip-flop needs the power supply to keep the information. The data in an SRAM cell is volatile
(i.e., the data is lost when the power is removed). However, the data does not “leak away” like in
a DRAM, so the SRAM does not require a refresh cycle.
Read/Write
Figure 8-4 shows the read/write operations of an SRAM. To select a cell, the two access transistors must be "on" so the elementary cell (the flip-flop) can be connected to the internal SRAM circuitry. The two access transistors of a cell are connected to the word line (also called the row or X address). The selected row is set to VCC. The two flip-flop sides are thus connected to a pair of bit lines, B and its complement B. The bit lines are also called columns or Y addresses.
[Figure 8-4: SRAM read/write path — the bit-line pair feeds a sense amplifier (voltage comparator) producing D Out, and write circuitry drives the bit lines from D In.]
During a read operation these two bit lines are connected to the sense amplifier that recognizes if
a logic data “1” or “0” is stored in the selected elementary cell. This sense amplifier then transfers
the logic state to the output buffer which is connected to the output pad. There are as many sense
amplifiers as there are output pads.
During a write operation, data comes from the input pad. It then moves to the write circuitry.
Since the write circuitry drivers are stronger than the cell flip-flop transistors, the data will be
forced onto the cell.
When the read/write operation is completed, the word line (row) is set to 0V, and the cell (flip-flop) either keeps its original data (read cycle) or stores the new data that was loaded during the write cycle.
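The read and write cycles just described (word-line select, sense, write-driver override) can be sketched behaviorally. The model below is purely illustrative: it captures the sequencing, not the electrical behavior.

```python
# Behavioral sketch of the SRAM read/write sequence: the flip-flop holds
# a bit; raising the word line connects it to the bit-line pair (B and
# its complement); the write drivers overpower the latch. Illustrative only.

class SramCell:
    def __init__(self, bit=0):
        self.bit = bit            # state latched in the flip-flop
        self.word_line = False    # access transistors off by default

    def select(self):             # word line driven to VCC
        self.word_line = True

    def deselect(self):           # word line back to 0V; data is retained
        self.word_line = False

    def read(self):
        """Sense amplifier resolves the bit-line pair (B, B-complement)."""
        assert self.word_line, "cell must be selected before a read"
        b, b_bar = self.bit, 1 - self.bit
        return 1 if b > b_bar else 0

    def write(self, value):
        """Write circuitry forces the bit lines, overpowering the latch."""
        assert self.word_line, "cell must be selected before a write"
        self.bit = value

cell = SramCell()
cell.select(); cell.write(1); cell.deselect()         # write cycle
cell.select(); stored = cell.read(); cell.deselect()  # read cycle
```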
Data Retention
To work properly and to ensure that the data in the elementary cell will not be altered, the SRAM must be supplied with a VCC (power supply) that does not fluctuate beyond plus or minus five or ten percent of its nominal value.
If the elementary cell is not disturbed, a lower voltage (2 volts) is acceptable to ensure that the cell
will correctly keep the data. In that case, the SRAM is set to a retention mode where the power
supply is lowered, and the part is no longer accessible. Figure 8-5 shows an example of how the
VCC power supply must be lowered to ensure good data retention.
[Figure 8-5: data retention mode timing — VCC is lowered from 3.0V to VDR ≥ 2V for the retention period and later restored to 3.0V (parameters tCDR and tR), with the CE waveform shown.]
MEMORY CELL
Different types of SRAM cells are distinguished by the type of load used in the elementary inverters of the flip-flop cell. There are currently three types of SRAM memory cells:
• The 4T cell (four NMOS transistors plus two poly load resistors)
• The 6T cell (six transistors—four NMOS transistors plus two PMOS transistors)
• The TFT cell (four NMOS transistors plus two loads called TFTs)
The most common SRAM cell consists of four NMOS transistors plus two poly-load resistors
(Figure 8-6). This design is called the 4T cell SRAM. Two NMOS transistors are pass-transistors.
These transistors have their gates tied to the word line and connect the cell to the columns. The
two other NMOS transistors are the pull-downs of the flip-flop inverters. The load of each inverter is a very high-value polysilicon resistor.
This design is the most popular because of its size compared to a 6T cell. The cell needs room only
for the four NMOS transistors. The poly loads are stacked above these transistors. Although the
4T SRAM cell may be smaller than the 6T cell, it is still about four times as large as a DRAM cell of a comparable generation.
[Figure 8-6: 4T cell schematic — word line W, bit lines B and B running to the sense amps, and poly load resistors to +V. Source: ICE, "Memory 1997"]
The challenge of the 4T cell is making the load resistance high enough (in the range of giga-ohms) to minimize the standby current. However, the resistance must not be so high that correct functionality is compromised.
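Back-of-the-envelope arithmetic shows why the load must reach the giga-ohm range; the supply voltage and resistances below are illustrative, not taken from any datasheet.

```python
# Rough arithmetic behind the giga-ohm load requirement: in a 4T cell
# one load resistor always conducts, so total standby current scales
# with the cell count. Voltage and resistances are illustrative.

def standby_current_ma(cells, vcc=3.0, load_ohms=5e9):
    """Total standby current (mA) with one conducting load per cell."""
    return cells * (vcc / load_ohms) * 1e3

i_1mbit = standby_current_ma(2**20)                      # ~0.63 mA for 1 Mbit
i_1mbit_weak = standby_current_ma(2**20, load_ohms=5e6)  # ~630 mA: unusable
```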
Despite its size advantage, the 4T cell has several limitations: each cell has current flowing through one load resistor (i.e., the SRAM has a high standby current), the cell is sensitive to noise and soft errors because the resistance is so high, and the cell is not as fast as the 6T cell.
A different cell design that eliminates the above limitations is the use of a CMOS flip-flop. In this
case, the load is replaced by a PMOS transistor. This SRAM cell is composed of six transistors, one
NMOS transistor and one PMOS transistor for each inverter, plus two NMOS transistors connected to the row line. This configuration is called a 6T cell. Figure 8-7 shows this structure. This cell offers better electrical performance (speed, noise immunity, standby current) than a 4T structure. The main disadvantage of this cell is its large size.
Until recently, the 6T cell architecture was reserved for niche markets such as military or space that
needed high immunity components. However, with commercial applications needing faster
SRAMs, the 6T cell may be implemented into more widespread applications in the future.
Much process development has been done to reduce the size of the 6T cell. At the 1997 ISSCC con-
ference, all papers presented on fast SRAMs described the 6T cell architecture (Figure 8-8).
[Figure 8-7: 6T cell schematic — bit lines B and B running to the sense amps, with PMOS loads to +V.]
Manufacturers have tried to reduce the current flowing in the resistor load of a 4T cell. As a result, designers developed a structure that changes, during operation, the electrical characteristics of the resistor load by controlling the channel of a transistor.
This resistor is configured as a PMOS transistor and is called a thin film transistor (TFT). It is
formed by depositing several layers of polysilicon above the silicon surface. The source/chan-
nel/drain is formed in the polysilicon load. The gate of this TFT is polysilicon and is tied to the
gate of the opposite inverter as in the 6T cell architecture. The oxide between this control gate and
the TFT polysilicon channel must be thin enough to ensure the effectiveness of the transistor.
The performance of the TFT PMOS transistor is not as good as a standard PMOS silicon transis-
tor used in a 6T cell. It should be more realistically compared to the linear polysilicon resistor
characteristics.
Figure 8-9 shows the TFT characteristics. In actual use, the effective resistance ranges from about 1 x 10^13Ω to 5 x 10^9Ω. Figure 8-10 shows the TFT cell schematic.
[Figure 8-9: TFT drain current (Id) versus gate voltage (Vg) at Vd = –4V, with Id spanning roughly –10^-12A to –10^-6A; Tox = 25nm, Tpoly = 38nm, L/W = 1.6/0.6µm. Source: Hitachi/ICE, "Memory 1997"]
[Figure 8-10: TFT cell schematic — word line, bit lines BL and BL, and poly-Si PMOS loads.]
Figure 8-11 displays a cross-sectional drawing of the TFT cell. TFT technology requires the depo-
sition of two more films and at least three more photolithography steps.
[Figure 8-11: TFT cell cross section — 1st poly-Si forms the gate electrodes of the bulk driver and access transistors over N+ diffusions with TiSi2; 2nd poly-Si forms the GND line; 3rd poly-Si the TFT gate electrode; 4th poly-Si the TFT channel; internal connections are made through a 2nd direct contact and a W-plug, over isolation regions.]
Development of TFT technology continues. At the 1996 IEDM conference, two papers were presented on the subject. There are not as many TFT SRAMs as might be expected, due to a more complex technology compared to the 4T cell technology and, perhaps, due to poor TFT electrical characteristics compared to a bulk PMOS transistor.
Figure 8-12 shows characteristics of SRAM parts analyzed in ICE’s laboratory in 1996 and 1997.
The majority of the listed suppliers use the conventional 4T cell architecture. Only two chips were
made with a TFT cell architecture, and the only 6T cell architecture SRAM analyzed was the
Pentium Pro L2 Cache SRAM from Intel.
As indicated by the date codes and technologies of the parts, this study is a snapshot of the current state of the art. ICE expects to see more 6T cell architectures in the future.
Figure 8-13 shows the trends of SRAM cell size. Like most other memory products, there is
a tradeoff between the performance of the cell and its process complexity. Most manufactur-
ers believe that the manufacturing process for the TFT-cell SRAM is too difficult, regardless
of its performance advantages.
Figures 8-14 and 8-15 show size and layout comparisons of a 4T cell and a 6T cell using the same technol-
ogy generation (0.3µm process). These two parts were analyzed by ICE’s laboratory in 1996.
One of the major process improvements in the development of SRAM technology is the so-called self-aligned contact (SAC). This process eliminates the spacing between the metal contacts and the poly gates, as illustrated in Figure 8-16.
[Figure 8-13: SRAM cell size trends — cell size (1µm² to 1,000µm²) versus technology generation (1 micron down to 0.25 micron), with the 6T cell curve indicated. Source: ICE, "Memory 1997"]
CONFIGURATION
As shown in Figure 8-17, SRAMs can be classified in four main categories. The segments are asyn-
chronous SRAMs, synchronous SRAMs, special SRAMs, and non-volatile SRAMs. These are
highlighted below.
Asynchronous SRAMs
Figure 8-18 shows a typical functional block diagram and a typical pin configuration of an asyn-
chronous SRAM. The memory is managed by three control signals. One signal is the chip select
(CS) or chip enable (CE) that selects or de-selects the chip. When the chip is de-selected, the part
is in stand-by mode (minimum current consumption) and the outputs are in a high impedance
state. Another signal is the output enable (OE), which controls the outputs (valid data or high impedance). The third is the write enable (WE), which selects read or write cycles.
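The three control signals can be summarized as a small truth table. The sketch below follows the common active-low convention; it is illustrative and not taken from any specific datasheet.

```python
# Truth-table sketch of the three asynchronous SRAM control signals
# (chip enable, output enable, write enable), all active-low here.

def sram_mode(ce_n, oe_n, we_n):
    """Return the operating mode for active-low CE/OE/WE inputs."""
    if ce_n:                      # chip de-selected
        return "standby"          # minimum current, outputs high-Z
    if not we_n:
        return "write"            # WE low selects a write cycle
    if not oe_n:
        return "read"             # outputs drive valid data
    return "outputs high-Z"       # selected, but outputs disabled

assert sram_mode(1, 0, 1) == "standby"   # CE high overrides the others
```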
Synchronous SRAMs
As computer system clocks increased, the demand for very fast SRAMs necessitated variations on
the standard asynchronous fast SRAM. The result was the synchronous SRAM (SSRAM).
[Figures 8-14 and 8-15: cell layout comparison — a 6T cell (NMOS transistors 1N–4N with PMOS loads 5P and 6P, BIT/BIT and WORD lines, VCC and GND routing, roughly 5.2µm x 6.35µm) and a 4T cell (transistors 1–4 with poly load resistors R1 and R2, roughly 2.5µm x 4.5µm); sidewall spacer, polycide, and N+/P+ regions are labeled.]
Synchronous SRAMs have their read or write cycles synchronized with the microprocessor clock
and therefore can be used in very high-speed applications. An important application for syn-
chronous SRAMs is cache SRAM used in Pentium- or PowerPC-based PCs and workstations.
Figure 8-19 shows the trends of PC cache SRAM.
Figure 8-20 shows a typical SSRAM block diagram as well as a typical pin configuration. SSRAMs typically have a 32-bit output configuration, while standard SRAMs typically have an 8-bit output configuration. The RAM array, which forms the heart of an asynchronous SRAM, is also found in the SSRAM. Since operations take place on the rising edge of the clock signal, it is unnecessary to hold the address and write data state throughout the entire cycle.
[Figure 8-16: standard process versus SAC process — with the self-aligned contact, the metal contacts land against the gate sidewalls instead of being spaced away from the gate over the active area.]
Burst Mode
The SSRAM can be addressed in burst mode for faster speed. In burst mode, the address for the first data word is placed on the address bus. The three following data words are addressed by an internal counter. Data is available at the microprocessor clock rate. Figure 8-21 shows SSRAM timing. Interleaved burst configurations may be used in Pentium applications, or linear burst configurations for PowerPC applications.
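The two burst orders just mentioned can be generated with simple address arithmetic. This is a sketch of the widely used convention (linear: increment modulo 4 within the aligned block; interleaved: XOR the low address bits with the count), not any one device's documented behavior.

```python
# Sketch of 4-word burst address sequences: linear (PowerPC-style)
# wraps within the aligned block; interleaved (Pentium-style) XORs
# the starting offset with the beat count.

def linear_burst(start):
    """Linear burst: wrap within the aligned 4-word block."""
    base = start & ~0x3
    return [base + ((start + i) & 0x3) for i in range(4)]

def interleaved_burst(start):
    """Interleaved burst: XOR the low two address bits with the count."""
    base = start & ~0x3
    return [base + ((start & 0x3) ^ i) for i in range(4)]

assert linear_burst(1) == [1, 2, 3, 0]
assert interleaved_burst(1) == [1, 0, 3, 2]
```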
[Figure 8-18: asynchronous SRAM functional block diagram (input buffer, row decoder, 512 x 512 array, sense amps, I/O0–I/O8) and 32-pin configuration (addresses A0–A16, chip selects CS1 and CS2, WE, OE, VDD).]
Flow-Through SRAM
Flow-through operation is accomplished by gating the output registers with the output clock. This
dual clock operation provides control of the data out window.
[Figure 8-19: PC cache SRAM trends — standard SRAM versus synchronous burst SRAM for 16-, 32-, and 64-bit CPUs, with and without cache.]
[Figure 8-20: SSRAM block diagram and 100-pin configuration — address registers and a binary burst counter loaded via /ADSP or /ADSC, burst advance /ADV, byte write enables /BW1–/BW4, /BWE, /GW, chip enables /CE1, CE2, /CE3, clock CLK, output enable /OE, linear burst order /LBO, addresses A0–A14, and data DQ0–DQ35.]
[Figure 8-21: synchronous-mode and burst-mode timing — address and output relative to the clock.]
Pipelined SRAMs
Pipelined SRAMs (sometimes called register to register mode SRAMs) add a register between the
memory array and the output. Pipelined SRAMs are less expensive than standard SRAMs for
equivalent electrical performance. The pipelined design does not require the aggressive manu-
facturing process of a standard SRAM, which contributes to its better overall yield. Figure 8-22
shows the architecture differences between a flow-through and a pipelined SRAM.
Figure 8-23 shows burst timing for both pipelined and standard SRAMs. With the pipelined
SRAM, a four-word burst read takes five clock cycles. With a standard synchronous SRAM, the
same four-word burst read takes four clock cycles.
Figure 8-24 shows the SRAM performance comparison of these same products. Above 66MHz,
pipelined SRAMs have an advantage by allowing single-cycle access for burst cycles after the first
read. However, pipelined SRAMs require a one-cycle delay when switching from reads to writes
in order to prevent bus contention.
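The cycle counts above can be modeled with a toy calculator; the one-cycle output-register latency and the one-cycle read-to-write turnaround penalty are the only effects included, and the numbers are illustrative.

```python
# Toy cycle counter for the comparison above: a pipelined SSRAM adds one
# cycle of output-register latency per burst and one dead cycle on a
# read-to-write bus turnaround.

def burst_cycles(words, pipelined):
    """Clock cycles for one burst of the given length."""
    return words + (1 if pipelined else 0)

def total_cycles(ops, pipelined):
    """ops: list of ('read' | 'write', words); adds turnaround penalties."""
    cycles, prev = 0, None
    for kind, words in ops:
        if pipelined and prev == "read" and kind == "write":
            cycles += 1               # dead cycle to avoid bus contention
        cycles += burst_cycles(words, pipelined)
        prev = kind
    return cycles

assert burst_cycles(4, pipelined=True) == 5    # matches the text above
assert burst_cycles(4, pipelined=False) == 4
```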
Late-Write SRAM
Late-write SRAM requires the input data only at the end of the cycle.
[Figure 8-22: architecture comparison — flow-through (control logic drives Dout directly) versus pipelined (a clocked register sits between the control logic and Dout).]
The ZBT (zero bus turnaround) SRAM is designed to eliminate dead cycles when turning the bus around between reads and writes. Figure 8-25 shows a bandwidth comparison between the PBSRAM (pipelined burst SRAM), late-write SRAM, and ZBT SRAM architectures.
DDR SRAMs boost the performance of the device by transferring data on both edges of the clock.
The implementation of cache memory requires the use of special circuits that keep track of which
data is in both the SRAM cache memory and the main memory (DRAM). This function acts like a
directory that tells the CPU what is or is not in cache. The directory function can be designed with
standard logic components plus small (and very fast) SRAM chips for the data storage. An alter-
native is the use of special memory chips called cache tag RAMs, which perform the entire func-
tion. Figure 8-26 shows both the cache tag RAM and the cache buffer RAM along with the main
memory and the CPU (processor). As processor speeds increase, the demands on cache tag and
buffer chips increase as well. Figure 8-27 shows the internal block diagram of a cache-tag SRAM.
[Figure 8-26: cache system — processor, cache tag RAM, and cache buffer RAM on the address and data buses alongside main memory.]
FIFO SRAMs
A FIFO (first in, first out) memory is a specialized memory used for temporary storage, which aids
in the timing of non-synchronized events. A good example of this is the interface between a com-
puter system and a Local Area Network (LAN). Figure 8-28 shows the interface between a com-
puter system and a LAN using a FIFO memory to buffer the data.
Synchronous and asynchronous FIFOs are available. Figures 8-29 and 8-30 show the block dia-
grams of these two configurations. Asynchronous FIFOs encounter some problems when used in
high-speed systems. One problem is that the read and write clock signals must often be specially
shaped to achieve high performance. Another problem is the asynchronous nature of the flags. A
synchronous FIFO is made by combining an asynchronous FIFO with registers. For an equivalent
level of technology, synchronous FIFOs will be faster.
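A minimal software model of the synchronous FIFO just described (dual-port storage with full and empty flags) might look like the following; the depth and behavior are illustrative, not a model of any particular part.

```python
# Minimal synchronous-FIFO sketch: a storage array standing in for the
# dual-port RAM, with full/empty flags and first-in, first-out order.

from collections import deque

class SyncFifo:
    def __init__(self, depth=4096):
        self.depth = depth
        self.storage = deque()        # stands in for the dual-port RAM

    @property
    def full(self):
        return len(self.storage) == self.depth

    @property
    def empty(self):
        return len(self.storage) == 0

    def write(self, word):
        if self.full:
            return False              # inhibit the write when full
        self.storage.append(word)
        return True

    def read(self):
        if self.empty:
            return None               # nothing to read
        return self.storage.popleft() # first in, first out

fifo = SyncFifo(depth=2)
fifo.write(1); fifo.write(2)
assert fifo.full and fifo.write(3) is False
assert fifo.read() == 1               # FIFO order preserved
```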
[Figure 8-27: cache-tag SRAM block diagram — address inputs A0–A12, address decoder, 65,536-bit memory array, comparator, and control logic (WE, OE, CS, RESET), with I/O0–7 control.]
[Figure 8-28: a FIFO memory buffering data between a microprocessor system and a LAN.]
[Figures 8-29 and 8-30: synchronous and asynchronous FIFO block diagrams — a 4096-word x 18-bit dual-port RAM array with write pulse generation or a write clock counter, a read address counter and read data register, and full/empty flag logic. Source: Paradigm/ICE, "Memory 1997"]
Multiport SRAMs
Multiport fast SRAMs (usually two port, but sometimes four port) are specially designed chips
using fast SRAM memory cells, but with special on-chip circuitry that allows multiple ports
(paths) to access the same data at the same time.
Figure 8-31 shows such an application with four CPUs sharing a single memory. Each cell in the memory uses an additional six transistors to allow the four CPUs to access the data (i.e., a 10T cell in place of a 4T cell). Figure 8-32 shows the block diagram of a 4-port SRAM.
[Figure 8-31: four CPUs (#1–#4) sharing a single 4-port SRAM.]
Shadow RAMs
Shadow RAMs, also called NOVROMs, NVRAMs, or NVSRAMs, integrate SRAM and EEPROM
technologies on the same chip. In normal operation, the CPU will read and write data to the
SRAM. This will take place at normal memory speeds. However, if the shadow RAM detects that
a power failure is beginning, the special circuits on the chip will quickly (in a few milliseconds)
copy the data from the SRAM section to the EEPROM section of the chip, thus preserving the data.
When power is restored, the data is copied from the EEPROM back to the SRAM, and operations
can continue as if there was no interruption. Figure 8-33 shows the schematic of one of these
devices. Shadow RAMs have low densities, since SRAM and EEPROM are on the same chip.
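The store/recall behavior described above can be sketched in a few lines. The class and method names below are hypothetical, purely to illustrate the SRAM-to-EEPROM copy on power fail and the reverse copy on power-up.

```python
# Behavioral sketch of a shadow RAM: fast SRAM for normal operation,
# with data copied to an EEPROM shadow array on power fail (STORE)
# and restored on power-up (RECALL). Names are hypothetical.

class ShadowRam:
    def __init__(self, size):
        self.sram = [0] * size        # working (volatile) array
        self.eeprom = [0] * size      # nonvolatile shadow array

    def write(self, addr, value):     # normal operation hits the SRAM
        self.sram[addr] = value

    def read(self, addr):
        return self.sram[addr]

    def power_fail(self):             # STORE: copy SRAM -> EEPROM
        self.eeprom = list(self.sram)
        self.sram = [0] * len(self.sram)   # volatile contents are lost

    def power_restore(self):          # RECALL: copy EEPROM -> SRAM
        self.sram = list(self.eeprom)

ram = ShadowRam(8)
ram.write(3, 42)
ram.power_fail()
ram.power_restore()
assert ram.read(3) == 42              # data survived the outage
```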
Battery-Backed SRAMs
SRAMs can be designed to have a sleep mode where the data is retained while the power con-
sumption is very low. One such device is the battery-backed SRAM, which features a small bat-
tery in the SRAM package. Battery-backed SRAMs (BRAMs), also called zero-power SRAMs,
combine an SRAM and a small lithium battery. BRAMs can be very cost effective, with retention
times greater than five years. Notebook and laptop computers have this “sleep” feature, but uti-
lize the regular system battery for SRAM backup.
[Figure 8-32: 4-port SRAM block diagram — four ports, each with its own address inputs (A0–A11 per port), decode logic, column I/O (I/O0–I/O7 per port), and R/W, CE, OE controls, all sharing one memory array.]
Figure 8-34 shows a typical BRAM block diagram. A control circuit monitors the single 5V power supply. When VCC is out of tolerance, the circuit write-protects the SRAM. When VCC falls below approximately 3V, the control circuit connects the battery, which maintains data and clock operation until valid power returns.
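The control circuit's behavior reduces to a small state function. The thresholds below mirror the 5V ±10% tolerance and the roughly 3V battery switchover described above, but the values are illustrative.

```python
# Sketch of battery-backed SRAM supervision: write-protect when VCC
# drifts out of tolerance, switch to the lithium cell below ~3V.
# Thresholds are illustrative, not from a datasheet.

def bram_state(vcc, nominal=5.0, tolerance=0.10, battery_switch=3.0):
    """Return the control circuit's state for a given supply voltage."""
    if vcc < battery_switch:
        return "battery-backed"       # lithium cell maintains the data
    if vcc < nominal * (1 - tolerance):
        return "write-protected"      # VCC out of tolerance
    return "normal"

assert bram_state(5.0) == "normal"
assert bram_state(4.0) == "write-protected"
assert bram_state(2.5) == "battery-backed"
```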
RELIABILITY CONCERNS
For power consumption purposes, designers have reduced the load currents in the 4T cell struc-
tures by raising the value of the load resistance. As a result, the energy required to switch the cell
to the opposite state is decreased. This, in turn, has made the devices more sensitive to alpha par-
ticle radiation (soft error). The TFT cell reduces this susceptibility, as the active load has a low
resistance when the TFT is “on,” and a much higher resistance when the TFT is “off.” Due to
process complexity, the TFT design is not widely used today.
[Figure 8-33: shadow RAM schematic — an SRAM memory array with row select and column select, column I/O circuits, and a nonvolatile EEPROM shadow array, with store/recall control logic driven by CS and WE. Source: Xicor/ICE, "Memory 1997"]
[Figure 8-34: battery-backed SRAM block diagram — a lithium cell, power voltage sense and switching circuitry (VPFD), and a 2K x 8 SRAM array with E and W controls, addresses A0–A10, and data DQ0–DQ7. Source: SGS-Thomson/ICE, "Memory 1997"]