Chapter 8 Group 4 Q
EXAMPLE:
The data in Figure 8.9 are surface-roughness measurements. They have a skewed distribution, suggesting non-normality. With USL = 32 microinches, x̄ = 10.44, and s = 3.053, we obtain Ĉpu = (USL − x̄)/(3s) = 2.35, and Table 8.2 would suggest that the fallout is less than one part per billion. Because the data are non-normal, these figures are likely inaccurate.
• To handle this, the data are transformed into a new metric in which their distribution appears approximately normal. In this example, a reciprocal transformation was used.
• Figure 8.10 presents a histogram of the reciprocal values x* = 1/x, with x̄* = 0.1025 and s* = 0.0244. In the transformed metric the upper specification limit becomes a lower limit, LSL* = 1/32 = 0.03125, so Ĉpl = (x̄* − LSL*)/(3s*) = 0.97, which implies that about 1,350 ppm are outside of specifications, a much more realistic figure than the first one.
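As a minimal sketch, the two capability estimates above can be reproduced from the summary statistics quoted in the example (only x̄, s, x̄*, s*, and USL are used; the raw observations are not reproduced here):

```python
# Capability estimates from the summary statistics quoted in the example.
USL = 32.0                    # upper specification limit (microinches)
xbar, s = 10.44, 3.053        # mean and standard deviation of the raw (skewed) data

Cpu_raw = (USL - xbar) / (3 * s)            # naive one-sided PCR on the raw data
print(f"Cpu (raw data)         = {Cpu_raw:.2f}")    # about 2.35

# Reciprocal transformation x* = 1/x: the USL of 32 becomes LSL* = 1/32.
xbar_t, s_t = 0.1025, 0.0244
LSL_t = 1.0 / USL
Cpl_t = (xbar_t - LSL_t) / (3 * s_t)        # one-sided PCR in the transformed metric
print(f"Cpl (transformed data) = {Cpl_t:.2f}")      # about 0.97
```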
So Luceño (1996) introduced the index Cpc to deal with non-normal data:

Cpc = (USL − LSL) / (6 √(π/2) E|X − T|),  with T = ½(USL + LSL)  (a small estimation sketch follows this list).

• When the distribution is normal and centered at T, E|X − T| = σ √(2/π), so the denominator 6 √(π/2) E|X − T| ≈ 7.52 E|X − T| reduces to the usual 6σ.
• Aiming for broader applicability, attempts have also been made to adjust PCRs for the Pearson and Johnson families of distributions, covering both normal and non-normal cases.
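A brief sketch of how Ĉpc could be estimated from a sample; the data and specification limits below are placeholders chosen only to illustrate the calculation:

```python
import numpy as np

def cpc_hat(x, LSL, USL):
    """Estimate Luceno's Cpc = (USL - LSL) / (6 * sqrt(pi/2) * E|X - T|), T = (USL + LSL)/2."""
    T = 0.5 * (USL + LSL)
    mad_T = np.mean(np.abs(x - T))           # sample estimate of E|X - T|
    return (USL - LSL) / (6.0 * np.sqrt(np.pi / 2.0) * mad_T)

# Illustrative use with placeholder data and limits.
rng = np.random.default_rng(1)
x = rng.gamma(shape=4.0, scale=2.0, size=200)   # a skewed (non-normal) sample
print(f"Cpc estimate: {cpc_hat(x, LSL=2.0, USL=20.0):.2f}")
```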
• Another approach is to extend the definition of the standard capability indices to non-normal processes by using appropriate quantiles of the process. An example is

Cp(q) = (USL − LSL) / (x_0.99865 − x_0.00135),

where xα is the α-quantile of the process distribution, so the denominator spans the middle 99.73% of the process just as 6σ does for a normal distribution.
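A short sketch of the quantile-based index, computed from empirical quantiles of a sample (the data and limits below are again placeholders):

```python
import numpy as np

def cp_quantile(x, LSL, USL):
    """Quantile-based index Cp(q) = (USL - LSL) / (x_0.99865 - x_0.00135)."""
    lo, hi = np.quantile(x, [0.00135, 0.99865])
    return (USL - LSL) / (hi - lo)

# Illustrative use with placeholder (skewed) data and limits.
rng = np.random.default_rng(2)
x = rng.lognormal(mean=2.0, sigma=0.3, size=5000)
print(f"Cp(q) estimate: {cp_quantile(x, LSL=3.0, USL=16.0):.2f}")
```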
Cpk was developed to deal with a process whose mean μ is not centered between the specification limits. However, Cpk alone is still an inadequate measure of process centering.
Estimation of Cpm: the point estimate is Ĉpm = (USL − LSL) / (6 √(s² + (x̄ − T)²)), where T is the target value.
Point estimate of Cp: Ĉp = (USL − LSL) / (6s).
Confidence intervals for these ratios are based on the chi-squared, Z, and t distributions.
The confidence interval uses s rather than R̄/d₂ to estimate σ. This further emphasizes that the process must be in statistical control for PCRs to have any real meaning. If the process is not in control, s and R̄/d₂ could be very different, leading to very different values of the PCR.
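As a sketch, a 100(1 − α)% confidence interval for Cp can be computed from Ĉp and the chi-squared distribution; the sample values in the call below are placeholders used only to show the mechanics:

```python
from scipy.stats import chi2

def cp_confidence_interval(cp_hat, n, alpha=0.05):
    """Two-sided 100(1 - alpha)% CI for Cp, based on (n - 1)s^2/sigma^2 ~ chi-square(n - 1)."""
    lower = cp_hat * (chi2.ppf(alpha / 2, n - 1) / (n - 1)) ** 0.5
    upper = cp_hat * (chi2.ppf(1 - alpha / 2, n - 1) / (n - 1)) ** 0.5
    return lower, upper

# Placeholder example: Cp_hat = 1.5 estimated from n = 50 observations.
lo, hi = cp_confidence_interval(cp_hat=1.5, n=50, alpha=0.05)
print(f"95% CI for Cp: [{lo:.2f}, {hi:.2f}]")
```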
TEST HYPOTHESIS
A practice that is becoming increasingly common in industry is to require that a supplier demonstrate that the process capability ratio Cp meets or exceeds some particular target value Cp0. Following Kane (1986), this can be formulated as the hypothesis test
H0: Cp = Cp0 (the process is not capable)
H1: Cp > Cp0 (the process is capable),
where H0 is rejected (the process is declared capable) if Ĉp exceeds a critical value C.
EXAMPLE
A customer has told his supplier that, in order to qualify for business with his company, the supplier must demonstrate that his process capability exceeds Cp = 1.33. Thus, the supplier is interested in establishing a procedure to test the hypotheses
H0: Cp = 1.33 versus H1: Cp > 1.33,
with risks α = β = 1 − 0.9 = 0.1 (Kane, 1986).
Using the sample-size and critical-value tables given by Kane (1986) with α = β = 0.10, the required sample size is n = 70.
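As a sketch of how such a test could be designed numerically (not the exact table-based procedure in Kane, 1986), the sample size and critical value can be found from the chi-squared distribution of (n − 1)s²/σ². The value Cp(high) = 1.66 below is an assumed illustrative value, not given in the notes:

```python
from scipy.stats import chi2

def design_cp_test(cp_low, cp_high, alpha, beta):
    """Smallest n and critical value C such that
       P(declare capable | Cp = cp_low) <= alpha and
       P(declare not capable | Cp = cp_high) <= beta,
       using the fact that (n - 1)s^2/sigma^2 ~ chi-square(n - 1)."""
    for n in range(5, 500):
        df = n - 1
        c_alpha = cp_low * (df / chi2.ppf(alpha, df)) ** 0.5       # smallest C holding alpha
        c_beta = cp_high * (df / chi2.ppf(1 - beta, df)) ** 0.5    # largest C holding beta
        if c_alpha <= c_beta:
            return n, c_alpha
    return None

# cp_high = 1.66 is an assumed illustrative value (not given in the notes);
# the search lands close to the tabled value n = 70.
n, C = design_cp_test(cp_low=1.33, cp_high=1.66, alpha=0.10, beta=0.10)
print(f"n = {n}, declare the process capable if Cp_hat >= {C:.2f}")
```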
There are 7 quality management tools (7 QC Tools).
When capability is assessed from attribute (count) data, the nonconformities counted can be, for example:
1. An error on an invoice
2. An incorrect or incomplete shipment
3. An incorrect or incomplete customer order
4. A call that is not satisfactorily completed
A measurement systems capability study has three main objectives:
1. Determine how much of the total observed variability is due to the gauge or instrument.
2. Isolate the components of variability in the measurement system.
3. Assess whether the instrument or gauge is capable (that is, whether it is suitable for the intended application).
Two conditions of measurement are of particular interest:
1. Repeatability: do we get the same observed value if we measure the same unit several times under identical conditions?
2. Reproducibility: how much difference in observed values do we experience when units are measured under different conditions, such as different operators, time periods, and so forth?
To introduce the basic ideas of measurement systems analysis (MSA), consider a simple but reasonable model for measurement system capability studies:
y = x + ε,
where y is the total observed measurement, x is the true value of the measured characteristic, and ε is the measurement error. With x and ε independent, the variances add: σ²_Total = σ²_P + σ²_Gauge.
The P/T ratio is often used to assess the measurement capability of a system or device in relation to the product specifications (tolerance):
P/T = k σ̂_Gauge / (USL − LSL).
The popular choices for the constant k are k = 5.15 (a 95% tolerance interval that contains at least 99% of a normal population) and k = 6 (the usual natural tolerance limits of a normal population).
Values of the estimated P/T ratio of 0.1 or less are often taken to imply adequate gauge capability.
Signal-to-noise ratio
The SNR is typically used to evaluate the accuracy of a test or measurement. In general it is the ratio of the power of a signal (meaningful input) to the power of background noise (meaningless or unwanted input); a high SNR indicates that the signal is significantly greater than the noise, which helps increase the accuracy of the test or measurement. For gauge capability studies it is defined as
SNR = √(2 ρ_P / (1 − ρ_P)),  where ρ_P = σ²_P / σ²_Total,
and it can be interpreted as the number of distinct levels of the product that the measurement system can reliably distinguish.
=> A value of 5 or greater is recommended, and a value of less than 2 indicates inadequate gauge capability.
Discrimination ratio
The DR is typically used to evaluate the discriminatory power of a measurement system:
DR = (1 + ρ_P) / (1 − ρ_P).
A high DR indicates that the measurement system can effectively distinguish between different parts.
=> For a gauge to be capable, the DR must exceed 4.
Example 8.7 (Cont)
The part used in Example 8.7 has USL = 60 and LSL = 5. Taking k = 6, the P/T ratio is
P/T = 6 σ̂_Gauge / (USL − LSL) = 6 σ̂_Gauge / 55.
=> The estimated P/T ratio is below 0.1, so the gauge capability is judged adequate by this criterion.
Example 8.7 (Cont)
To compute the SNR, a few steps are needed.
Step 1: estimate the total variability σ̂²_Total, and from it the proportion ρ̂_P = σ̂²_P / σ̂²_Total due to the parts; the SNR then follows from SNR = √(2 ρ̂_P / (1 − ρ̂_P)).
The gauge in Example 8.7 would not meet the suggested requirement of an SNR of at least 5.
Example 8.7 (Cont)
Other measures of gauge capability have also been proposed, such as the ratio of measurement system variability to total variability:
ρ̂_M = σ̂²_Gauge / σ̂²_Total.
=> The variance of the measuring instrument contributes about 7.86% of the total observed variance of the measurements.
Example 8.7 (Cont)
=> Because the DR exceeds 4, the gauge in Example 8.7 is judged capable by this criterion.
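A minimal Python sketch that combines the three criteria. Only the 7.86% variance share, the specification limits USL = 60 and LSL = 5, and k = 6 are taken from the notes; the gauge variance used for the P/T calculation is an assumed placeholder:

```python
import math

USL, LSL, k = 60.0, 5.0, 6.0    # specification limits and k from the notes
rho_M = 0.0786                  # gauge share of total variance (about 7.86%, from the notes)
rho_P = 1.0 - rho_M             # share of total variance due to the parts

sigma2_gauge = 0.79             # placeholder gauge variance, assumed for illustration only

pt_ratio = k * math.sqrt(sigma2_gauge) / (USL - LSL)   # P/T = k*sigma_Gauge/(USL - LSL)
snr = math.sqrt(2.0 * rho_P / (1.0 - rho_P))           # SNR = sqrt(2*rho_P/(1 - rho_P))
dr = (1.0 + rho_P) / (1.0 - rho_P)                     # DR = (1 + rho_P)/(1 - rho_P)

print(f"P/T = {pt_ratio:.3f}  (<= 0.1 suggests adequate capability)")
print(f"SNR = {snr:.2f}       (>= 5 recommended)")
print(f"DR  = {dr:.1f}        (> 4 required for a capable gauge)")
```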
Finally, measurement systems capability studies investigate two components of measurement error, called the repeatability and the reproducibility of the gauge.
Repeatability reflects the basic inherent precision of the gauge itself.
Reproducibility is the variability due to different operators using the gauge.
Factorial experiment
The example of a gauge R&R study is taken from the paper by Houf and Berman (1988). The measurements are described by the two-factor random-effects model
y_ijk = μ + P_i + O_j + (PO)_ij + ε_ijk,
where the model parameters P_i, O_j, (PO)_ij, and ε_ijk are all independent random variables that represent the effects of parts, operators, the interaction or joint effects of parts and operators, and random error. These random variables are assumed to be normally distributed with mean 0 and variances V(P_i) = σ²_P, V(O_j) = σ²_O, V[(PO)_ij] = σ²_PO, and V(ε_ijk) = σ², so the variance of any observation is σ²_y = σ²_P + σ²_O + σ²_PO + σ².
The variance components can be estimated with the balanced ANOVA routine in Minitab.
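As an alternative sketch of the same calculation, the method-of-moments (ANOVA) estimators can be coded directly from the ANOVA-table mean squares; the mean squares, specification limits, and design sizes below are placeholders, not the Houf and Berman values:

```python
def gauge_rr_variance_components(ms_p, ms_o, ms_po, ms_e, p, o, n):
    """ANOVA (method-of-moments) estimates for the two-factor random-effects model
       y_ijk = mu + P_i + O_j + (PO)_ij + e_ijk with p parts, o operators, n replicates.
       Negative estimates are clipped to zero (see the note on this drawback below)."""
    sigma2_e = ms_e                                   # repeatability
    sigma2_po = max((ms_po - ms_e) / n, 0.0)          # part x operator interaction
    sigma2_o = max((ms_o - ms_po) / (p * n), 0.0)     # operator
    sigma2_p = max((ms_p - ms_po) / (o * n), 0.0)     # part
    reproducibility = sigma2_o + sigma2_po
    gauge = sigma2_e + reproducibility
    return {"part": sigma2_p, "repeatability": sigma2_e,
            "reproducibility": reproducibility, "gauge": gauge}

# Placeholder mean squares and design sizes, purely for illustration.
vc = gauge_rr_variance_components(ms_p=437.3, ms_o=19.6, ms_po=2.7, ms_e=0.5,
                                  p=10, o=3, n=2)
LSL, USL, k = 40.0, 100.0, 6.0                        # placeholder spec limits
pt = k * vc["gauge"] ** 0.5 / (USL - LSL)
print(vc)
print(f"P/T = {pt:.3f}")
```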
A drawback of the ANOVA method is that negative variance component estimates can occur.
To handle this, we can assume that a negative estimate implies the corresponding variance component is zero, set it to zero, and leave the other nonnegative estimates unchanged.
For example, if σ̂²_PO is negative, it will usually be because the interaction source of variability is nonsignificant. We should take this as evidence that σ²_PO really is zero, that there is no interaction effect, and fit the reduced model
y_ijk = μ + P_i + O_j + ε_ijk.
Typically we think of σ² as the repeatability variance component, and of the gauge reproducibility as the sum of the operator and the part × operator variance components:
σ̂²_Repeatability = σ̂²,  σ̂²_Reproducibility = σ̂²_O + σ̂²_PO,  σ̂²_Gauge = σ̂²_Repeatability + σ̂²_Reproducibility.
=> This gauge would not be considered capable because the estimate of the P/T ratio exceeds 0.1.
• The gauge R&R study and the ANOVA procedure in the previous sections resulted only in point estimates of the model variance components (including σ²_Gauge, σ²_Repeatability, and σ²_Reproducibility).
(*): Burdick, R. K., Borror, C. M., & Montgomery, D. C. (2005). Design and analysis of gauge R&R studies: making decisions with confidence intervals in random and mixed ANOVA models.
Society for Industrial and Applied Mathematics.
According to Burdick et al., 2005 (*):
• The method of moments: estimating the variance components and constructing confidence intervals for the gauge R&R parameters.
• The likelihood-based methods (likelihood ratio method, profile likelihood method, modified large-sample method): constructing confidence intervals for the gauge R&R parameters.
• The bootstrap method: constructing confidence intervals for the gauge R&R parameters and the misclassification rates.
• Instead, to determine whether the capability of a measurement system is adequate, we need to specify how well that system discriminates between good and bad parts.
y: measured value of a randomly selected part.
Let (1) be the event that the part conforms to specifications and (2) the event that it is passed by the measurement system. The producer's risk is the probability of the outcome {(1) true and (2) false}: a conforming part is rejected. The customer's risk is the probability of {(1) false and (2) true}: a nonconforming part is passed.
In practice, we usually cannot determine the true values of μ, σ, or the producer's-risk and customer's-risk probabilities. It would be very helpful to provide confidence intervals for these parameters in the calculation.
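A small Monte Carlo sketch of how these two misclassification probabilities could be approximated under the model y = x + ε; every numeric value below is a hypothetical placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)
LSL, USL = 5.0, 60.0                             # specification limits (placeholders)
mu, sigma_part, sigma_gauge = 32.0, 12.0, 2.0    # placeholder process and gauge parameters

x = rng.normal(mu, sigma_part, size=1_000_000)       # true values of the parts
y = x + rng.normal(0.0, sigma_gauge, size=x.size)    # measured values, y = x + eps

conforming = (x >= LSL) & (x <= USL)
passed = (y >= LSL) & (y <= USL)

producers_risk = np.mean(~passed[conforming])   # conforming part rejected by the gauge
customers_risk = np.mean(passed[~conforming])   # nonconforming part passed by the gauge
print(f"Producer's risk ~ {producers_risk:.4f}")
print(f"Customer's risk ~ {customers_risk:.4f}")
```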
For example, a bank uses manual underwriting to analyze mortgage loan applications. The attribute gauge capability analysis in this situation determines the proportion of time that the underwriter agrees with him/herself and the proportion of time that the underwriter agrees with the correct classification.
Result: while there is considerable subjectivity in interpreting the results of attribute gauge capability studies, there is not great agreement in this study.
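A tiny sketch of how those two agreement proportions could be computed; the classifications below are made-up placeholder data, not the bank's study:

```python
import numpy as np

# Repeated classifications of the same 10 applications (1 = approve, 0 = decline),
# plus the "correct" classification; all values are made up for illustration.
first_pass  = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
second_pass = np.array([1, 0, 1, 0, 0, 1, 1, 1, 1, 0])
correct     = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])

self_agreement = np.mean(first_pass == second_pass)                              # agrees with self
correct_agreement = np.mean((first_pass == correct) & (second_pass == correct))  # agrees with truth
print(f"Agrees with self:                       {self_agreement:.0%}")
print(f"Agrees with the correct classification: {correct_agreement:.0%}")
```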
• From Ohm's law, we know that the voltage is V = IR.
• Expanding V in a first-order Taylor series about (μ_I, μ_R) and neglecting the terms of higher order, the mean and variance of the voltage are approximately
μ_V ≈ μ_I μ_R  and  σ²_V ≈ μ_R² σ²_I + μ_I² σ²_R.
• The probability that the voltage will fall within the design specifications is then approximately
P(LSL ≤ V ≤ USL) ≈ Φ((USL − μ_V)/σ_V) − Φ((LSL − μ_V)/σ_V).
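A short sketch of this calculation; the means, standard deviations, and voltage specifications below are hypothetical placeholders chosen only to illustrate the mechanics:

```python
from math import sqrt
from scipy.stats import norm

# First-order (delta-method) approximation for V = I*R; all numbers are hypothetical.
mu_I, sigma_I = 25.0, 1.0      # current: mean and standard deviation (A)
mu_R, sigma_R = 4.0, 0.1       # resistance: mean and standard deviation (ohm)
LSL, USL = 90.0, 110.0         # voltage specifications (V)

mu_V = mu_I * mu_R                                           # approximate mean of V
sigma_V = sqrt(mu_R**2 * sigma_I**2 + mu_I**2 * sigma_R**2)  # approximate std. dev. of V

p_in_spec = norm.cdf((USL - mu_V) / sigma_V) - norm.cdf((LSL - mu_V) / sigma_V)
print(f"mu_V ~ {mu_V:.1f}, sigma_V ~ {sigma_V:.2f}, P(in spec) ~ {p_in_spec:.4f}")
```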
Two procedures for estimating natural tolerance limits:
• One for those situations in which the normality assumption is reasonable.
• A nonparametric approach useful in cases where the normality assumption is inappropriate.
Unless the product specifications exactly coincide with or exceed the natural tolerance
limits of the process (PCR ≥ 1), an extremely high percentage of the production will be
outside specifications, resulting in a high loss or rework rate.
In the ideal situation where the quality characteristic is normally distributed with known mean μ and known variance σ², the natural tolerance limits containing 100(1 − α)% of the distribution are simply
μ ± z_{α/2} σ.   (1)
When μ and σ are unknown, they are estimated by x̄ and s, and the tolerance limits take the form x̄ ± Ks, where K is chosen so that the interval contains at least 100(1 − α)% of the distribution with confidence 100γ%. Since α = 0.05, γ = 0.99, and n = 25, we find K = 2.972 from Appendix Table VII. Therefore, the required tolerance limits are x̄ ± 2.972s = 40.75 ± (2.972)(1.37) = 40.75 ± 4.07 = [36.68, 44.82].
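A quick check of this arithmetic in Python, using only the values quoted above:

```python
# Normal-theory tolerance limits x_bar +/- K*s with the values quoted above
# (K = 2.972 for alpha = 0.05, gamma = 0.99, n = 25).
x_bar, s, K = 40.75, 1.37, 2.972

half_width = K * s
print(f"Tolerance limits: [{x_bar - half_width:.2f}, {x_bar + half_width:.2f}]")  # about [36.68, 44.82]
```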
Nonparametric tolerance limits: intervals based on the distribution of the extreme values in a sample from an arbitrary continuous distribution.
• For two-sided tolerance limits (the smallest and largest observations), the sample size required so that the interval contains at least 100(1 − α)% of the distribution with probability γ is approximately
n ≃ 1/2 + ((2 − α)/(4α)) χ²_{1−γ, 4},
where χ²_{1−γ, 4} is the upper (1 − γ) percentage point of the chi-square distribution with 4 degrees of freedom.
• For one-sided tolerance limits we must take a sample of
n = ln(1 − γ) / ln(1 − α), rounded up.
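A small sketch of these sample-size formulas; the α and γ values in the calls are illustrative choices, not taken from the notes:

```python
from math import ceil, log
from scipy.stats import chi2

def n_two_sided(alpha, gamma):
    """Approximate n so that (x_min, x_max) covers at least 100(1 - alpha)% with probability gamma."""
    return 0.5 + (2 - alpha) / (4 * alpha) * chi2.ppf(gamma, 4)

def n_one_sided(alpha, gamma):
    """Smallest n so that the extreme observation bounds at least 100(1 - alpha)% with probability gamma."""
    return ceil(log(1 - gamma) / log(1 - alpha))

# Illustrative choices of alpha and gamma (not values taken from the notes).
print(f"two-sided: n ~ {n_two_sided(alpha=0.05, gamma=0.95):.0f}")   # about 93
print(f"one-sided: n = {n_one_sided(alpha=0.05, gamma=0.95)}")       # 59
```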
A. False defective
B. Producer’s risk
C. Passed defective
D. Both A and B
Question 2: Measurements are not only expressed numerically but
can also be described using attributes.
A. True
B. False
A. Precision-to-total ratio
B. Size-to-noise ratio
C. Discrimination ratio