2024 JMP Discovery Summit - Advanced Decision SHL Rev3


2024 JMP Discovery Summit

Advanced Decisions Using Predictive Modeling in the Semiconductor Industry

NXP Semiconductors Inc.


Su-Heng Lin
August 2024

1
Semiconductor Wafer Fab Process Flow

[Process flow diagram] A loop of repeated process modules (Oxidation, Photolithography, Etching, Metal Deposition, Dielectric Deposition, Ion Implantation) spanning 300+ process steps and ~9 weeks of cycle time, with 25 silicon wafers in a wafer-lot.

• Wafer Acceptance Test (Class Probe): measured at the individual device level
• Unit Probe Test: measured at the circuit level for every die
2
Case Study – BIN#6 Issue at Unit Probe for Part-A

All dice are tested and categorized into either a pass bin or one of various failing bins.

3
Decisions To Make

There were 150 Part-A wafers from 6 lots at EOL Class Probe. We needed to determine how many wafers were at high risk of Unit Probe BIN#6 scrap.

"I need a predictive model to help with the risk assessment!"

Determine whether to restart new material from the beginning of the line or to escalate the priority of material in line accordingly, to minimize line-down risk for customers.

Identify the key Class Probe parameters responsible for the BIN#6 fallout, and recommend an inline process tweak for a better process window.

4
The 1st Step of Building a Predictive Model: Define Responses and Predictors
• Predictors: wafer medians of Class Probe parameters; 173 continuous factors in total
− There are auto-correlations among the 173 factors
• Responses: BIN#6 fallout at the Unit Probe test
− 2096 rows of data in total
− Continuous: normalized BIN#6 fallout on each wafer
− Categorical: each wafer classified as good or bad based on the 90th percentile of the normalized BIN#6 distribution (0.187); see the sketch below

Categorical response distribution: Bad = 209 wafers, Good = 1887 wafers.
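For illustration only (not part of the original deck), a minimal Python/pandas sketch of deriving the two responses, using synthetic numbers and a hypothetical column name norm_bin6 in place of the real wafer data:

```python
# Minimal sketch: deriving the continuous and categorical responses described above.
# Column names and values are hypothetical stand-ins, not the actual NXP data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "wafer_id": np.arange(2096),
    "norm_bin6": rng.gamma(shape=1.5, scale=0.05, size=2096),  # stand-in for normalized BIN#6 fallout
})

# Continuous response: the normalized BIN#6 fallout itself.
y_cont = df["norm_bin6"]

# Categorical response: wafers above the 90th percentile of the distribution are "Bad".
cutoff = df["norm_bin6"].quantile(0.90)   # in the talk, this 90th-percentile cutoff was 0.187
df["bin6_class"] = np.where(df["norm_bin6"] > cutoff, "Bad", "Good")
print(df["bin6_class"].value_counts())    # roughly 90% Good, 10% Bad
```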

5
Flow of Data Mining and Predictive Modeling

[Flow diagram] Data Extraction & Cleanup → Define Response & Predictors → Model Screening and Comparison of 4 Modeling Types → Construct Predictive Model → Identify key parameters from the 173.

• Responses: Unit Probe Good vs Bad (categorical) and Unit Probe normalized BIN#6 (continuous)
• Predictors: 173 continuous Class Probe parametrics
• The 4 modeling types: Classification Decision Tree, Classification Bootstrap Forest, Stepwise Linear Regression, Regression Bootstrap Forest

6
Training vs. Validation Dataset
• Current Class Probe and Unit Probe data for completed lots will be randomly
assigned to a training dataset (~75%) and to a validation dataset (~25%) for
both categorical and continuous response analysis.

Analyze → Predictive Modeling → Make Validation Column

Including the 150 wafers at Class Probe without Unit Probe yield:


• Training dataset (70.0%, 1572 wafers): used to build the prediction models
• Validation dataset (23.4%, 524 wafers): used to validate the prediction models
• Test dataset (6.7%): the 150 wafers with unknown Unit Probe outcomes, used to test the selected prediction models (see the sketch below)
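A rough Python/scikit-learn analogue of this split (not JMP's Make Validation Column), with synthetic stand-in data sized to match the counts above:

```python
# Sketch: ~75/25 training/validation split of the labeled wafers, with the 150
# unlabeled wafers held out as the test set. All predictor values are synthetic.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_labeled, n_unlabeled, n_pred = 2096, 150, 173   # counts from the slides
X_labeled = pd.DataFrame(rng.normal(size=(n_labeled, n_pred)))
y_labeled = pd.Series(rng.gamma(1.5, 0.05, size=n_labeled), name="norm_bin6")
X_test = pd.DataFrame(rng.normal(size=(n_unlabeled, n_pred)))  # 150 wafers with no Unit Probe outcome yet

X_train, X_val, y_train, y_val = train_test_split(
    X_labeled, y_labeled, test_size=0.25, random_state=42
)
print(len(X_train), len(X_val), len(X_test))  # 1572 / 524 / 150
```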
7
JMP Demo
Classification - Decision Trees
Analyze → Predictive Modeling → Partition
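As a loose stand-in for the JMP Partition demo, a short scikit-learn sketch of a classification decision tree on synthetic data; the predictors, response, and settings are illustrative only, not the presenter's setup:

```python
# Sketch: classification decision tree for Good vs Bad wafers on synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(2096, 173))   # 173 synthetic "Class Probe" predictors
y = np.where(X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2096) > 1.8, "Bad", "Good")

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=42)
tree = DecisionTreeClassifier(max_depth=5, min_samples_leaf=20, random_state=0)
tree.fit(X_tr, y_tr)
print("validation accuracy:", tree.score(X_va, y_va))
```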

8
JMP Demo
Classification - Bootstrap Forest
Analyze → Predictive Modeling → Bootstrap Forest
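Similarly, scikit-learn's RandomForestClassifier (a bagged-tree ensemble) can stand in for JMP's Bootstrap Forest classification in a sketch; the data and hyperparameters below are illustrative only:

```python
# Sketch: bagged-tree ensemble classifier as an analogue of Bootstrap Forest classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(2096, 173))
y = np.where(X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2096) > 1.8, "Bad", "Good")

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=42)
forest = RandomForestClassifier(n_estimators=100, min_samples_leaf=10, random_state=0)
forest.fit(X_tr, y_tr)
print("validation accuracy:", forest.score(X_va, y_va))

# Variable importances indicate which of the 173 predictors drive the splits.
top = np.argsort(forest.feature_importances_)[::-1][:5]
print("top predictors by importance:", top)
```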

9
JMP Demo
Regression – Stepwise Fit Linear Regression
Analyze → Fit Model
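A loose analogue of the stepwise fit: forward feature selection wrapped around ordinary least squares via scikit-learn's SequentialFeatureSelector. This is not JMP's stepwise algorithm (which uses p-value/AICc entry and removal rules); it only mimics the idea of selecting a subset of the 173 predictors, on synthetic data:

```python
# Sketch: forward selection + linear regression as a rough stand-in for stepwise fit.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(2096, 173))
y = 0.3 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=0.1, size=2096)  # synthetic continuous response

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=42)
selector = SequentialFeatureSelector(LinearRegression(), n_features_to_select=10, direction="forward")
selector.fit(X_tr, y_tr)

mask = selector.get_support()                       # boolean mask of the selected predictors
model = LinearRegression().fit(X_tr[:, mask], y_tr)
print("validation R^2:", model.score(X_va[:, mask], y_va))
```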

10
JMP Demo
Regression – Bootstrap Forest
Analyze → Predictive Modeling → Bootstrap Forest
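And for the continuous response, RandomForestRegressor can serve as a sketch-level stand-in for JMP's Bootstrap Forest regression; again the data are synthetic:

```python
# Sketch: bagged-tree ensemble regressor for the normalized BIN#6 response.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(2096, 173))
y = 0.3 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=0.1, size=2096)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=42)
reg = RandomForestRegressor(n_estimators=100, min_samples_leaf=10, random_state=0)
reg.fit(X_tr, y_tr)
print("validation R^2:", reg.score(X_va, y_va))
```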

11
JMP Demo
R-square Comparison of the 4 Predictive Models

• Classification model: the Bootstrap Forest model outperforms the Decision Tree model and will be used for Good vs Bad BIN#6 classification of the 150 wafers at EOL Class Probe.
− It predicted a total of 47 wafers to be BAD (higher than 0.187 normalized BIN#6 fallout).
• Regression model: the Bootstrap Forest model outperforms the Stepwise Linear Regression model and will be used for the normalized BIN#6 fallout prediction.
− Based on the estimated values, NONE of the 47 wafers would result in scrap.
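The select-and-apply step can be sketched as follows: fit both a classifier and a regressor on the labeled wafers, then score the 150 Class Probe wafers that have no Unit Probe result yet. Everything here is synthetic except the 0.187 cutoff quoted in the slides:

```python
# Sketch: apply the chosen classification and regression models to new wafers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.normal(size=(2096, 173))
y_cont = np.clip(0.05 + 0.08 * X[:, 0] + rng.normal(scale=0.05, size=2096), 0, None)
y_cat = np.where(y_cont > 0.187, "Bad", "Good")       # 0.187 cutoff from the slides
X_new = rng.normal(size=(150, 173))                   # the 150 wafers awaiting Unit Probe

X_tr, X_va, yc_tr, yc_va, yr_tr, yr_va = train_test_split(
    X, y_cat, y_cont, test_size=0.25, random_state=42
)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, yc_tr)
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, yr_tr)

n_bad = int(np.sum(clf.predict(X_new) == "Bad"))      # classification: how many wafers look Bad?
est_fallout = reg.predict(X_new)                      # regression: how much fallout is expected?
print(n_bad, "wafers classified Bad; max estimated fallout:", est_fallout.max().round(3))
```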

12
Decision Making

The estimated BIN#6 loss on these 150 wafers was not high enough to warrant starting new Part-A material for backup.
• Therefore, the team notified Planning to prioritize the Part-A materials in line to meet the die quantity for on-time customer delivery.

Multiple predictive models identified two Class Probe parameters as responsible for the elevated BIN#6 fallout.
• High RsP+: after discussion with Subject Matter Experts, the team will evaluate the process window to lower the resistance.
• Low ToxP: this parameter is hard to move because it comes from a batch process. Monitor the inline SPC of Oxide Growth Thickness to ensure process stability.

13
Test Dataset for Model Accuracy Follow-up
• When all 6 lots completed Unit Probe, the measured normalized BIN#6 fallout averaged 0.162, with 2 lots higher than 0.187 (the at-risk criterion).

• The R-square between the predicted and measured normalized BIN#6 was ~71%.

• The main deviation came from Lot-C, which was used for a (non-standard) process change evaluation. Excluding Lot-C, the R-square improved to 80%.
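This follow-up check can be sketched as an R-square between predicted and measured normalized BIN#6, recomputed after excluding the non-standard lot; the lot labels and values below are made up:

```python
# Sketch: accuracy follow-up, with and without the non-standard lot.
import numpy as np
import pandas as pd
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
lots = np.repeat(list("ABCDEF"), 25)                       # 6 lots x 25 wafers
measured = rng.gamma(1.5, 0.08, size=150)
predicted = measured + rng.normal(scale=0.03, size=150)
predicted[lots == "C"] += 0.15                             # Lot-C ran a non-standard process change

df = pd.DataFrame({"lot": lots, "measured": measured, "predicted": predicted})
print("R^2, all lots:       ", round(r2_score(df.measured, df.predicted), 2))
sub = df[df.lot != "C"]
print("R^2, excluding Lot-C:", round(r2_score(sub.measured, sub.predicted), 2))
```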
14
Summary and Conclusions
• We were looking for predictive models that help make good decisions quickly.

• We needed a model that handles autocorrelation without requiring much data manipulation or running into computer memory constraints.

Model             | Classification | Regression | Small # of Predictors | Large # of Predictors | Auto-Correlated Predictors | Part-A Bin#6 Case Study
Tree Partition    | Yes            |            | Ok                    | Excellent             | Yes                        | Yes (but didn't use)
Bootstrap Forest  | Yes            | Yes        | Ok                    | Excellent             | Yes                        | Yes
Linear Regression | No             | Yes        | Good                  | Good                  |                            | Yes (but didn't use)

15
Acknowledgement

• The author would like to thank Akira Abe (NXP 6-Sigma Master Black Belt), Mehul Shroff
(NXP 6-Sigma Black Belt) and Eric Sallquist (Device Staff Manager) for their guidance
when writing this presentation for external audiences.

• The author would like to thank Douglas Montgomery (Regents’ Professor of Industrial
Engineering and Statistics, Arizona State University) and Cheryl L. Jennings (Associate
Teaching Professor | Industrial Engineering and Engineering Management Programs) for their
technical feedback.

16
17
