The document discusses the challenges and trends in testing and reliability for advanced semiconductor geometries, highlighting the increasing complexity and costs associated with integrated circuit design. It emphasizes the impact of new technologies like FinFETs on performance and power consumption, as well as the need for improved testing methodologies to address emerging defects and variations. The document also outlines the importance of embedded memory and efficient repair solutions to enhance yield and reduce costs in semiconductor manufacturing.


Test & Reliability Challenges in

Advanced Semiconductor Geometries


Rancho Bernardo Inn

Yervant Zorian
Fellow & Chief Architect
6/9/13
Contents

• Industry Transformational Trends


• More than Moore Challenges
• Impact on Test, Yield and Reliability
• Multi-Die stacks
• Innovation – Forward Challenges
• Conclusions
Industry Transformational Trends

Cloud computing – Software applications – User experience – Shortened time-to-volume – Pressure on R&D productivity
Consumers Driving “Smart” Electronics

• Device convergence: digital TV, smart phone, automotive infotainment, multi-functional picture printer, gaming, multi-media PC
• Multi-media access – anytime, anywhere
• Wireless connectivity
• High-definition imaging anywhere
• Convergence products: SMART everything

[Chart: product complexity / capabilities rising from 1980 to 2020 toward "SMART" products]
Data Center / Cloud Computing Trends

Source: Cisco VNI, 2011-2016


Data Traffic Forecasts
• By 2016, total data traffic will be 4x 2011
• By 2016, global IP traffic will reach 1.3 zettabytes annually (110 exabytes per month)
• By 2016, there will be nearly 19 billion global network connections (fixed and mobile); the
equivalent of two and a half connections for every person on earth.
• By 2016, there will be about 3.4 billion Internet users, which is more than 45% of the world’s
projected population.



[Stack diagram]
• System: applications, application framework
• Software infrastructure: libraries, O/S, runtimes, drivers, configuration, test programs
• Bare-metal software: scheduler, power management, …
• Silicon: mixed signal, low power, advanced node, giga-gates/GHz, memories, 3D-IC
Integration
• Before 32nm, a new process was introduced every other year
 Since then, a new process has been introduced every year

[Chart: process node introductions from 180nm (~2000) through 130nm, 90nm, 65nm, 45nm, 32nm, 28nm, 20nm and 14nm (~2014), showing the move from poly-SiON to high-k/metal-gate (HK/MG)]


* Source : ITRS, Samsung Electronics Co.
Silicon Complexity
[Chart: design challenges accumulating from 250nm to 14nm. Each node adds a new concern on top of the previous ones – timing closure (250nm), signal integrity (180nm), power and verification (130nm/90nm), test & yield (65nm), clocks (45/40nm), variability (32/28nm), software and 3D integration (22/20nm, 14nm) – with power, verification and test & yield pressure intensifying at every step.]
IC Design Expensive and Difficult

Cost item            32/28nm node     22/10nm node
Fab costs            $3B              $4B – 7B
Process R&D costs    $1.2B            $2.1B – 3B
Design costs         $50M – 90M       $120M – 500M
Mask costs           $2M – 3M         $5M – 8M
EDA costs            $400M – 500M     $1.2B – 1.5B

Source: IBS, May 2011

• Intensive customer/partner collaborative developments


Top Semiconductor IP Vendors

Rank  Company                    2010 ($M)  2011 ($M)  Growth    2011 Share
1     ARM Holdings               575.8      732.5      27.2%     38.3%
2     Synopsys                   191.8      236.2      23.2%     12.4%
3     Imagination Technologies   91.5       126.4      38.1%     6.6%
4     MIPS Technologies          85.3       72.1       -15.5%    3.8%
5     CEVA                       44.9       60.2       34.1%     3.2%
6     Silicon Image              38.5       42.8       11.2%     2.2%
7     Rambus                     41.4       38.9       -6.0%     2.0%
8     Tensilica                  31.5       36.3       15.2%     1.9%
9     Mentor Graphics            27.3       23.6       -13.8%    1.2%
10    AuthenTec                  19.6       22.8       16.3%     1.2%

Source: Gartner, March 2012
Test & Yield

• Increasing design complexity
 – Many AMS and interface IP cores
 – Many existing third-party IPs
 – Large number of memory instances
• Exploding digital logic size
 – Greater than 100M-gate designs
 – Global design teams
• Increasing test & yield impact
 – Quality – DPPM
 – Total memory bit count
 – Yield optimization
 – Designer productivity
 – Time-to-market and time-to-volume

[Diagram: SoC floorplan with user-defined logic (UDL) and scan, hard IP behind BIST + mux I/O wrappers, and memories behind memory BIST and scan wrappers]
SoC Test Challenges

Higher test costs
• Trend: large designs & increased on-chip memory
• Need: less test data, faster execution on the tester

Lower productivity & efficiency
• Trend: complex designs and implementation flows
• Need: hierarchical SoC implementation and validation solution for memories

Functional performance degradation
• Trend: high-performance cores
• Need: test with minimal impact on functional performance, area-efficient solution

Slow ramp-up, higher DPPM
• Trend: small geometries, subtle & hard-to-detect defects
• Need: advanced fault models, efficient volume diagnostics & yield analyses
New Defects

• Double patterning: variation from overlay shift
• Voltage scaling: local variation with voltage
• Random dopant fluctuation: global and local Vth variation

Process variation at 20nm is significant, causing bit failures


Sample SOC: DesignWare® IP based

[Block diagram: RISC/DSP host CPUs; DDR, PCIe, HDMI, USB, Ethernet, SATA, SD/MMC and MIPI (DigRF, CSI, DSI) controllers with their PHYs (XAUI, SATA, MIPI D-PHY/M-PHY); ADCs/DACs, audio codec, video front end, signal processing, ARC audio and video processors; I2C, GPIO, UART; embedded memories (SRAM, ROM, NVM); datapath and logic libraries – spanning digital IP, physical IP and infrastructure IP]
SOC Test Solution
Accelerate Higher Quality, Lower Cost Test

Comprehensive solution for SoC and core-based designs:

• Logic test: pin-limited compression, advanced fault models, power-aware test
• Memory test & repair: easy integration and verification of self-test IP, high defect coverage, verification IP for integration test
• BIST of high-speed I/O IP: high-speed SERDES interfaces (PCI Express®, USB 3.0, etc.)
• Yield analysis: physical-aware diagnostics, fast identification of systematic yield loss mechanisms

Hierarchical Design & Verification
Embedded Memory Is Growing
Key Driver Of Design Success
[Charts: (1) the number of CPU/DSP/controller cores per SoC is growing relative to all other IP blocks combined (Source: Semico, October 2011); (2) ARM processor I/D cache size (Kb) grows from ARM2/ARM3 through ARM7 and ARM9 to Cortex (Source: Wikipedia); (3) memory dominates chip area, with % area memory growing at the expense of reused and new logic from 2005 to 2014 (Source: Semico, June 2010)]

Number of processors is growing · Cache size per processor is growing · SoC memory is growing · Memory is dominating chip area

Need an efficient solution to test, repair and diagnose thousands of on-chip memories
Process Miniaturization Challenges

• Higher susceptibility to new and speed-related fault types
 Requires expanded test to detect new fault types
 Requires high speed test capability
• Higher level of miniaturization
 Needs fault classification and localization
 Needs on-the-fly monitoring and analysis of volume diagnosis data
 Requires better support for yield learning and production ramp up
Cost of Unit Out
[Chart: capital cost ($M) rising and normalized cost per unit out falling by orders of magnitude from 1970 to 2020. Source: IC Knowledge, 2005 & IBS, 2008]


Dramatic Rise in Systematic Yield Issues
[Chart: yield fallout vs. technology node, split into random defect-based, litho-based and systematic design-based yield issues, with the systematic share growing fastest]

• Cheating with physics induces more process variability at each nanometer node
• Some layout features react strongly to variability causing systematic yield issues
• Each successive nanometer node faces more systematic yield loss
Chart data source: IBS
Embedded Test & Diagnosis
High Manufacturing Test Quality

• Out-of-box enhanced test algorithms, with parallel / serial architecture
• Fully characterized for each advanced node
• Provides 100% fault coverage
 – New fault types appear at advanced nodes: static/dynamic, write mask, weak cells, address decoder, bit-line leakage, intra- & inter-port coupling, delay, data setup/hold
 – Resistive faults
 – Performance faults
 – Bridging faults
 – Parametric variation
 – Generic algorithms are not as granular, resulting in test escapes
• Fault-injection-based analysis
Process Variation- Read Failures in SRAM
Read failures should be tested at the (VDD_max, T_max) corner
– More than 22% variation of Vth leads to a failure

[SRAM cell diagram: local Vth shifts (Vth up / Vth down) on the cell transistors]

T \ V     VDD_min (0.765V)   VDD_typ (0.85V)   VDD_max (0.935V)
-40°C     –                  –                 –
25°C      –                  –                 X
125°C     X                  X                 X

X = read failure, for 30% L and W variation


Repair Solution Impacts Memory Yields

[Chart: memory yield (%) vs. amount of memory on the die (roughly 3 to 86 Mb, built from 1 Mb to 32 Mb instances). With embedded test & repair, yield stays close to 100%; without repair, memory yield drops steeply as the amount of on-die memory grows.]
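The shape of these curves can be reproduced qualitatively with a simple Poisson defect model. The sketch below is purely illustrative: the per-bit failure probability and the fraction of repairable defects are assumed numbers, not data behind the chart.

```python
import math

def memory_yield(total_mbits, fail_prob_per_bit=2e-8, repairable_fraction=0.0):
    """Illustrative Poisson yield model for embedded memory.

    total_mbits         : total memory on the die, in Mb
    fail_prob_per_bit   : assumed probability that any given bit is defective
    repairable_fraction : assumed fraction of defects that redundancy can repair
    """
    bits = total_mbits * 1024 * 1024
    expected_defects = bits * fail_prob_per_bit
    unrepairable = expected_defects * (1.0 - repairable_fraction)  # only these kill the die
    return math.exp(-unrepairable)          # Poisson probability of zero fatal defects

for mb in (3, 5, 11, 22, 43, 65, 86):
    y_raw    = memory_yield(mb)                              # no repair
    y_repair = memory_yield(mb, repairable_fraction=0.98)    # assume 98% of defects repairable
    print(f"{mb:3d} Mb  without repair: {y_raw:5.1%}   with repair: {y_repair:5.1%}")
```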


Repair Efficiency

• Redundancy allocation algorithm maximizes the use of available repair resources (a simplified allocation sketch follows after this list)
 – Numerous types and amounts of redundancy

• Repair methodologies to maximize repair


– Multi-corner cumulative repair
– Multi-zone fuse containers
– In-system periodic repair capability
– Fastest system recovery with multi-power island chips
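A minimal sketch of what a redundancy allocation step can look like, assuming a single spare-row/spare-column scheme and a greedy "cover the most failures first" policy; the actual allocation algorithm referenced above handles many more redundancy types, multi-corner cumulative repair and fuse containers.

```python
from collections import Counter

def allocate_redundancy(fail_bitmap, spare_rows, spare_cols):
    """Greedy spare-row/column allocation sketch.

    fail_bitmap : iterable of (row, col) failing-bit coordinates from MBIST
    Returns (repairable, row_repairs, col_repairs).
    """
    fails = set(fail_bitmap)
    row_repairs, col_repairs = [], []

    while fails:
        row_counts = Counter(r for r, _ in fails)
        col_counts = Counter(c for _, c in fails)
        best_row, row_hits = row_counts.most_common(1)[0]
        best_col, col_hits = col_counts.most_common(1)[0]

        # Spend the spare that covers the most remaining failures first.
        if row_hits >= col_hits and spare_rows > len(row_repairs):
            row_repairs.append(best_row)
            fails = {f for f in fails if f[0] != best_row}
        elif spare_cols > len(col_repairs):
            col_repairs.append(best_col)
            fails = {f for f in fails if f[1] != best_col}
        else:
            return False, row_repairs, col_repairs   # out of spares: unrepairable

    return True, row_repairs, col_repairs

# Example: three failures sharing a row are fixed with a single spare row.
print(allocate_redundancy([(5, 1), (5, 7), (5, 9), (2, 3)], spare_rows=1, spare_cols=1))
```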
Why FinFETs
As predicted for many years, but often postponed, the device level of the chip is finally changing.

• “Conventional” planar transistors are reaching the limits of scaling


and have become “leaky”: They use too much power

• FinFETs enable products with higher performance and lower power


consumption

• There are alternatives, but FinFETs promise better continuation of


Moore’s Law
How FinFETs Work
Field effect transistors: The field from the gate controls the channel

• Planar FET: single-gate channel control is limited at 20nm and below
• FinFET: a "multiple" gate surrounds a thin channel and can "fully deplete" it of carriers, resulting in much better electrical characteristics
FinFET Advantage: Intel’s Perspective
Reduced leakage current · Lower operating voltage · Faster device

Source: Mark Bohr, Intel Developer Forum 2011

• Benefit of Intel’s FinFET with respect to Intel’s 32nm planar technology


– Tri-Gate transistors provide an unprecedented 37% delay improvement at low
voltage.
– Tri-Gate transistors can operate at lower voltage, providing ~50% active
power reduction
FinFET Impact on Design for Test

• SoC designers: libraries and tools will minimize the impact on digital design
• IP designers: standard cells, memory compilers and custom design are impacted
• Foundries: significantly impacted

The impact of FinFETs is largest below Metal 1. Double patterning and restricted design rules, while often associated with FinFETs, are not unique to them and are also necessary for planar technologies.
FinFETs in SRAMs

• Special focus on low-voltage operation
 – Read assist and write assist circuitry to improve robustness
 – Compile-time options to maximize
• Large SRAM macros provide alternatives to embedded DRAMs
• Enhancing Memory Test & Repair to handle FinFET-related failures
 – Fault models for planar FETs need to be extended to cover FinFETs
 – Further enhancements in compression of test and repair algorithms

Source: IBM Research, 2010 Symposium on VLSI Technology
Realistic Faults in FinFET SRAMs

• Traditional faults
− stuck-at fault, stuck-open fault, transition fault,
address decoder fault, coupling fault, etc.
• Process variation faults
− Transistor threshold voltage is affected by
gate length (L) and fin thickness (Tfin).
• FinFET specific faults
− Opens in FinFET transistor back gate cause
delay and leakage faults (transistor threshold
voltage is affected by back gate voltage)
Programmable Detection

[Diagram: JTAG TAP interface (tck, tms, tdi, tdo, trstn) and JPC driving multiple STAR processors, each with a TBOX and an IEEE 1500 (P1500) wrapper interface]

• A new March-based test pattern algorithm, e.g. W(P) R(P) W(~P) R(~P) …, is generated and translated to WGL for the tester (a behavioural sketch of such a March element follows below)
• TBOX_SEL: select the test algorithm register for serial access
• Load the alternative, generated test algorithm
• Continue with the test flow
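The March element shown above, W(P) R(P) W(~P) R(~P), can be illustrated behaviourally against a plain software memory model; this is only a sketch of what the algorithm does per address, not the STAR processor implementation.

```python
def march_wp_rp_wnp_rnp(memory, pattern=0xA5, word_mask=0xFF):
    """Behavioural sketch of a March element: W(P) R(P) W(~P) R(~P) per address.

    memory : a list acting as the word-addressable memory under test
    Returns a list of (address, expected, observed) mismatches.
    """
    inverse = pattern ^ word_mask
    failures = []

    for addr in range(len(memory)):            # ascending address order (sketch choice)
        memory[addr] = pattern                 # W(P)
        if memory[addr] != pattern:            # R(P)
            failures.append((addr, pattern, memory[addr]))
        memory[addr] = inverse                 # W(~P)
        if memory[addr] != inverse:            # R(~P)
            failures.append((addr, inverse, memory[addr]))

    return failures

# A fault-free memory model reports no failures.
print(march_wp_rp_wnp_rnp([0] * 16))           # -> []
```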
Automated Rapid Fault Isolation and
Identification

[Flow diagram: the vector generator produces WGL/STIL patterns for the tester; the tester log from the SoC with embedded test & repair (ET&R) is processed by the silicon debugger, together with the identity folder, into a bit map and physical coordinate identification]
Multi-level Precision Diagnostics

Level 1: memory instance failure
Level 2: logical address of failure
Level 3: physical address of failure (row, col)
Level 4: physical X,Y coordinates of failing bit
Level 5: defect classification (single bit, paired bit, column, row, etc.)
Level 6: fault classification (stuck-at, transition, coupling, etc.)
Level 7: fault localization (aggressor/victim cell coordinates, etc.)

(A simple record layout covering these levels is sketched below.)
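One way to picture a record carrying all seven levels is the structure below; the field names and example values are illustrative assumptions, not the tool's actual output format.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MemoryDiagRecord:
    """Illustrative record for one failing bit, one field group per diagnostic level."""
    instance: str                                  # Level 1: memory instance that failed
    logical_address: int                           # Level 2: logical address of the failure
    physical_row: int                              # Level 3: physical row
    physical_col: int                              # Level 3: physical column
    xy_um: Tuple[float, float]                     # Level 4: X,Y coordinates of the failing bit
    defect_class: str                              # Level 5: single bit / paired bit / column / row ...
    fault_class: str                               # Level 6: stuck-at / transition / coupling ...
    aggressor_xy: Optional[Tuple[int, int]] = None # Level 7: aggressor cell, if a victim/aggressor pair

rec = MemoryDiagRecord("u_cpu_l2_bank0", 0x1F3C, 125, 60, (812.4, 403.1),
                       "single bit", "coupling", aggressor_xy=(125, 61))
print(rec.fault_class, rec.aggressor_xy)
```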
Low Cost Failure Diagnostics Solution

[Diagram: the silicon browser exchanges data interactively between the SoC design with ET&R, the chip with STAR Memory System, and the identity folder]

• Visualization of test results
• Memory dump
• Diagnostics
• Fault localization
• Memory characterization
• Yield optimization
Yield Optimization

Silicon Debugger (tester side) – Embedded Test & Repair:
• Test/repair IP insertion
• Vector generation
• Localization & signatures
• Outputs: fail data, identity folder, bitmaps, coordinates

Yield Optimization:
• Failure visualizations
• Cross-domain correlations
• Dominant failure modes
• Statistical prioritization of failure modes
Identifying Dominant Failure Mechanism
How to get the largest yield improvement

• Rise in systematic defects
 – Very few dirt particles or fall-on defects
• 100s of failed dies in first silicon
• 10-50 fault candidates per failed and diagnosed die
• 15-20 metal segments & vias per candidate net
• 100 x 10 x 10 = 10k sites for physical failure analysis (FA), for each silicon lot during ramp
• FA cycle time per site: 4-8 hours – only <10 sites can be managed
(The workload arithmetic is sketched below.)
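Working the slide's own numbers through (rounded as on the slide) shows why only a handful of FA sites are manageable per lot:

```python
failed_dies        = 100   # "100s of failed dies in first silicon" (order of magnitude)
candidates_per_die = 10    # 10-50 fault candidates per diagnosed die (low end)
segments_per_net   = 10    # 15-20 metal segments & vias per candidate net (rounded down)

fa_sites = failed_dies * candidates_per_die * segments_per_net
print(fa_sites)                      # 10,000 potential FA sites per lot

hours_per_site = 6                   # FA cycle time per site: 4-8 hours
years_serial = fa_sites * hours_per_site / 24 / 365
print(f"{years_serial:.1f} years of serial FA time")   # why only <10 sites are feasible
```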
How Does Volume Diagnostics Help?

• Volume Diagnostics
 – Statistical analysis of diagnostics results from multiple failing chips
 – Identifies systematic, yield-limiting issues by using design data
 – Produces outputs for Physical Failure Analysis (PFA)

[Pareto chart: count of diagnosed defects per defect type across categories, exposing systematic yield problems]
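The statistical step can be sketched as a simple Pareto of diagnosis callouts across failing dies; the defect categories and counts below are invented purely for illustration.

```python
from collections import Counter

# Hypothetical per-die diagnosis callouts (defect category per fault candidate).
diagnosed_dies = [
    ["via_open_M3", "bridge_M2"],
    ["via_open_M3"],
    ["via_open_M3", "cell_weak_write"],
    ["bridge_M2", "via_open_M3"],
    ["random_particle"],
]

pareto = Counter(cat for die in diagnosed_dies for cat in die)
for category, count in pareto.most_common():
    print(f"{category:18s} {count}")   # via_open_M3 dominates -> candidate systematic issue for PFA
```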


Design-Centric Volume Diagnostics
Multi-tool manual flow (2-3 weeks): low-yield lot → cell fail by test → failing cell map → spatial trends → failing cells and nets → failing nets on layout → PFA sites (low probability)

Yield Optimization automated flow (2-3 days): low-yield lot → PFA sites (high probability)

• An order of magnitude faster systematic failure localization
• Prioritization of failure types based on yield impact
• Success in capturing dominant systematic failure mechanisms

*PFA – Physical Failure Analysis
Shortest Time to Volume
From tape-out through first silicon bring-up to production:

Phase                                              Typical    Using ET&R
Pattern generation (tester)                        Weeks      Days
Pattern optimization & silicon debug               Months     Days
Defect analysis for yield optimization             Months     Weeks
(FAB / IP / design)


Trends for 3D Stacking

All these technologies will co-exist!

[Chart: 3D stacking technology forecast. Source: Yole Développement, 2008]


3D Packaging in cell phones
• 3D packaging used in cell phones for several years
 – Stacked dies with wire bond
 – Package on Package (PoP)

[Photo: 2003 STM "world record" stack – memories (DRAM, Flash) and digital baseband processor, stud bump flip chip. Source: Prismark]
Beyond SoC: SiP Alternatives

• SoC: System-on-Chip. Integrate combinations of logic,


processor, SRAM, DSP, A/RF, DRAM, NVM
• SiP: 3D Stacked Dies
1. Non-TSV
– bare die stacking: wirebond, flipchip, embedded die substrate
– package stacking: PoP, PiP
2. TSV
– via first, via middle, via last
Evolution in 3D Technologies
non-TSV limitations:
 – Peripheral bonds only
 – Long wire bonds (high inductance, high crosstalk, low-speed interconnect)
 – Limited to low-density interconnects and specific I/O pad routing

TSV benefits:
 – Area placement
 – Excellent electrical characteristics
 – High densities
 – Orders of magnitude higher interconnect densities between dies
Through Silicon Via Pros and Cons

• Pros
 – Allow even smaller package outline
 – No pad extension needed
 – Lower sensitivity to foreign material at assembly
 – Wire-bonding-compatible layout
 – Reflow-process compatible
 – Better interconnect routing capability
• Cons
 – More complex technology (glass, silicon, back-end processes)
 – Cost

[Figure: camera module with through-via contacts from top to bottom]
3D Stacking is Not New…But TSVs Are!
• Multi-Chip Packaging (chips side by side on a board)
 – Dense integration
 – Heterogeneous technologies
• Vertical stacking (System-in-Package, SiP)
 – Denser integration
 – Smaller footprint
• Through-Silicon Vias (TSVs) – TSV-based 3D-SIC
 – Even denser integration
 – Increased bandwidth
 – Increased performance
 – Lower power dissipation
 – Lower manufacturing cost

[Figure: progression from Printed Circuit Board (PCB) to Multi-Chip Package (MCP) to System-in-Package (SiP) to TSV-based 3D-SIC]
Yield Implication Due to 3D Levels
Tests for 3D Induced Effects

• Test coverage for TSV interconnect
• Defect coverage for 3D-induced effects due to
 – Thinning process
 – Thermal dissipation

Known Good Die (KGD) challenge:
 Conventional burn-in is challenged by
  Full-speed test and burn-in prior to packaging
  Higher pin count with finer pitch
  Increased functionality and frequency
 KGD requires extra stress during probe, on carriers, or wafer-level burn-in (WLBI)
 KGD is necessary for SiP production

Wafer-level burn-in and test greatly simplifies the backend IC fabrication line
Conventional 2D Test Flow
Conventional 2D flow: wafer fab → wafer test → assembly & packaging → final test

• Main role of Final Test (FT): guarantee outgoing product quality
• Main role of Wafer Test (WT): prevent unnecessary package cost
• WT is executed only if its benefits exceed its costs: (1 - y) · d · p > t
 with
 y: fabrication yield
 d: fraction of faulty products that the WT can detect ('test quality')
 p: preventable product cost
 t: test execution cost
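The break-even condition (1 - y) · d · p > t can be evaluated directly; the numbers below are arbitrary illustrations of the trade-off, not figures from the presentation.

```python
def wafer_test_pays_off(y, d, p, t):
    """Return True if wafer test saves more packaging cost than it costs to run.

    y : fabrication yield (fraction of good dies)
    d : fraction of faulty products the wafer test detects ('test quality')
    p : preventable product cost per faulty die (e.g. package + assembly cost)
    t : wafer-test execution cost per die
    """
    return (1.0 - y) * d * p > t

# Illustrative numbers: 80% yield, 95% test quality, $2.00 package cost, $0.10 test cost.
print(wafer_test_pays_off(y=0.80, d=0.95, p=2.00, t=0.10))   # True: WT is worthwhile
print(wafer_test_pays_off(y=0.99, d=0.95, p=2.00, t=0.10))   # False at very high yield
```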
2D Test Flow vs. 3D Test Flow
Conventional 2D: wafer fab → wafer test → assembly & packaging → final test

3D-SIC: wafer fabs 1, 2, … n → KGD tests 1, 2, … n → stacking steps (1+2), (1+2)+…, (1+2+…)+n, each followed by a KGS test → assembly & packaging → final test

• Terminology
 – KGD: Known-Good Die test
 – KGS: Known-Good Stack test
 – A better name would have been "Known-Bad Die/Stack" test
• Test access is distinctly different; test contents might be different
Required Infrastructure

• Language for test description transfer


– Core Test Language, CTL (IEEE Std. 1450.6) [Kapur – 2002]
• On-chip Design-for-Test for electrical test access
– Test wrappers
• Around cores: IEEE Std. 1500 [Da Silva et al. – 2006]
• Around dies: to be developed NEW!

• Around full-stack product: IEEE Std. 1149.1 [Parker – 2003]


– Test Access Mechanisms
• Intra-die: test bus, TestRail [Marinissen et al. – ITC’98]
• Inter-die: TestElevator NEW!

• EDA support for automated ‘test expansion’


from module-level test into chip-level test
All That Is Known – And Some More…

• All manufacturing defects that can occur in conventional 2D chips can also occur in 3D-SICs
• Hence, we need to apply all known test methods
– Logic: stuck-at, transition, delay, VLV, …
– Memory: array, decoder, control, data-lines, …
– Analog: INL, THD, …

• In addition:
1. Tests for new intra-die defects
2. TSV interconnect tests
Advanced TSV-Interconnect Test
• Advanced fault models for TSV interconnects
– Delay faults

• Testing of infrastructure TSV interconnect


– Power/ground TSV interconnects
– Clock TSV interconnects

• TSV interconnect Redundancy & Repair


– Crank up bonding yield
– Evaluate benefit/cost trade-offs
Wrapper Style: P1838
IEEE Std. 1149.1 ('JTAG')
• Interface: single-bit for data and control (TDI-TDO)
• Wrapper cells with double FFs – no ripple-through during shift
• Control via TAP Controller: fixed-protocol finite state machine

IEEE Std. 1500
• Interface: mandatory single-bit (WSI-WSO); optional n-bit (WPI-WPO)
• Scalable wrapper cells – single-FF cell most common
• Control via flexible instruction shift register

[Schematic: 1149.1-style double-FF wrapper cell vs. 1500-style single-FF wrapper cell (PI/PO, SI/SO, clock)]
The Role of Advanced DfT Techniques
• RPCT – Reduced Pad-Count Testing
 Reduce the width of the scan-test interface
 – Useful to limit additional probe pads for KGD testing
 – Same test data volume: smaller interface → longer test length
• TDC – Test Data Compression
 Reduce off-chip test data volume by on-chip (de-)compression
 – Definitely applicable to 3D-SIC 'super chips'
 – Great combination with RPCT
• BIST – Built-In Self-Test
 On-chip stimulus generation and response evaluation
 – Reduces off-chip test data volume to (virtually) zero
 – Narrow TAMs / TestElevators
 – Protection of proprietary test contents – execute and trust
 – Especially attractive for memory dies – MBIST
(A first-order sketch of how these techniques trade off against test time follows below.)
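A rough sketch of how interface width and compression interact, under the usual first-order assumption that scan test time scales with off-chip data volume divided by (interface width × shift frequency); all numbers are illustrative.

```python
def scan_test_time_s(test_data_bits, scan_in_pins, shift_mhz, compression_ratio=1.0):
    """First-order scan test time estimate.

    Halving the pins (RPCT) doubles the test length for the same data volume;
    on-chip decompression (TDC) divides the off-chip data volume instead.
    """
    off_chip_bits = test_data_bits / compression_ratio
    return off_chip_bits / (scan_in_pins * shift_mhz * 1e6)

base = scan_test_time_s(5e9, scan_in_pins=8, shift_mhz=50)
rpct = scan_test_time_s(5e9, scan_in_pins=2, shift_mhz=50)                         # fewer probe pads
both = scan_test_time_s(5e9, scan_in_pins=2, shift_mhz=50, compression_ratio=50)   # RPCT + TDC
print(f"{base:.2f}s  {rpct:.2f}s  {both:.3f}s")
```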
3D Test Resource Partitioning
• 3D-SICs offer new opportunities to system architects
• 3D-SICs offer new opportunities to DfT architects
 – Which DfT resource to put in which die?
 → Test Resource Partitioning

• Example: DRAM-on-Logic
1. MBIST in DRAM Die
– “3D-Prepared” DRAM
– Proprietary memory content does not need to be released

2. MBIST in Logic Die


– Drop-in MBIST module provided by DRAM vendor
– MBIST implemented in logic process technology
– Communication over TSV-based interconnects
Reliability Faults

• Intermittent Faults:
– unstable hardware activated by environmental changes
(lower voltage, temperature)
– often become permanent faults
– identifying requires characterization
– process variation – main cause of IF
• Transient Faults:
– occur because of temporary environmental conditions
– neutrons and α-particles
– power supply and interconnect noise
– electromagnetic interference
– electrostatic discharge
Reliability Faults (cont)

• Infant Mortality
– rate worsens due to transistor scaling effects and new process technologies and materials
• Aging-Induced Hard Failures
– performance degradation over time (shown by burn-in)
– degradation varies chip-to-chip and core-to-core
• Soft Errors
– Random logic is still at risk
– RAMs show a decreasing SEU rate per bit
• Low Vmin increases bit failures in memories
• Transient errors such as timing faults and crosstalk are major signal integrity problems
Field Reliability Challenge
[Chart: projected soft error rate (SER) for technology generations from 1997 through 2012. The error bars account for the range of supply voltage; the SER increases exponentially at 2.1-2.2 decades/volt, e.g. 200X in 2005. From AMD, Intel, Compaq, 1999]


MCU Growth Over Technology Nodes

[Chart: average single-bit upsets (SBU) and multi-cell upsets (MCU) per technology node, both growing exponentially as nodes shrink from 400nm toward the latest geometries, with MCUs growing faster. Source: iRoC]
SER Growth at SOC Level

[Chart: memory SER and logic SER (sequential + combinational) at the SoC level, both growing exponentially as technology nodes shrink. Source: iRoC]
Robustness IP for ECC

[Diagram: ECC datapath – on write, an ECC generator produces code bits that are stored alongside the data in the memory IP; on read, a syndrome generator and error logic drive a correction block on the data bus and an error indication output]

• Standard ECC architecture provides single-bit repair (correction)
• RAM multi-bit upset probability depends on cell-to-cell distance
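The single-bit correction described above is typically built on a Hamming-style code; the sketch below shows syndrome generation and correction for one data word using a classic Hamming SEC construction, purely to illustrate the principle (not the actual Robustness IP).

```python
def hamming_encode(data_bits):
    """Encode a list of data bits (LSB first) into a Hamming SEC codeword.

    Parity bits occupy power-of-two positions (1, 2, 4, ...); each parity bit
    makes the XOR over all codeword positions sharing that index bit even.
    """
    r = 0
    while (1 << r) < len(data_bits) + r + 1:
        r += 1
    code = [0] * (len(data_bits) + r)

    bits = iter(data_bits)
    for pos in range(1, len(code) + 1):           # place data in non-power-of-two positions
        if pos & (pos - 1):
            code[pos - 1] = next(bits)

    for p in range(r):                            # compute each parity bit
        mask = 1 << p
        code[mask - 1] = 0
        for pos in range(1, len(code) + 1):
            if pos & mask and pos != mask:
                code[mask - 1] ^= code[pos - 1]
    return code


def hamming_decode(code):
    """Correct a single flipped bit; return (data_bits, flipped_position or 0)."""
    code = list(code)
    syndrome = 0
    for pos in range(1, len(code) + 1):           # syndrome = XOR of positions of set bits
        if code[pos - 1]:
            syndrome ^= pos
    if syndrome:                                  # non-zero syndrome points at the bad bit
        code[syndrome - 1] ^= 1
    data = [code[pos - 1] for pos in range(1, len(code) + 1) if pos & (pos - 1)]
    return data, syndrome


word = [1, 0, 1, 1, 0, 0, 1, 0]                   # one 8-bit data word, LSB first
stored = hamming_encode(word)
stored[5] ^= 1                                    # inject a single-bit upset (position 6)
recovered, bad_pos = hamming_decode(stored)
print(recovered == word, "corrected position:", bad_pos)   # True corrected position: 6
```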
Thank You
