Chapter 8 - Group 4


Quality Management and Control

CHAPTER 8: PROCESS AND MEASUREMENT SYSTEM CAPABILITY ANALYSIS

CC01 - GROUP 4 INSTRUCTOR: ASSOC. PROF. DO NGOC HIEN


GROUP MEMBER
Lê Hoàng Phúc An - 2152367 Đào Huỳnh Gia Huy - 2153373

Nguyễn Quốc Anh - 2153167 Nguyễn Bình Khang - 2152636

Võ Hoàng Anh Duy - 2052919 Trần Khải Minh - 2153583

Hồ Tấn Đạt - 2153278 Phạm Thị Thu Thảo - 2153809


OVERVIEW
8.1. INTRODUCTION
8.2. PROCESS CAPABILITY ANALYSIS USING A HISTOGRAM OR A PROBABILITY PLOT
8.3. PROCESS CAPABILITY RATIO
8.4. PROCESS CAPABILITY ANALYSIS USING A CONTROL CHART
8.5. PROCESS CAPABILITY ANALYSIS USING DESIGNED EXPERIMENTS
8.6. PROCESS CAPABILITY ANALYSIS WITH ATTRIBUTE DATA
8.7. GAUGE AND MEASUREMENT SYSTEM CAPABILITY STUDIES
8.8. SETTING SPECIFICATION LIMITS ON DISCRETE COMPONENTS
8.9. ESTIMATING THE NATURAL TOLERANCE LIMITS OF A PROCESS
• Process capability refers to the uniformity of the process.
• Process capability analysis helps quantify and assess the variability of
critical-to-quality characteristics in a process.
• Process capability analysis evaluates both instantaneous variability (at a
specific point in time) and variability over time.
• Process capability plays an important role in the DMAIC process.
The Six Sigma spread is used to measure capability. The upper and lower
natural tolerance limits of the process fall at:
UNTL = μ + 3σ
LNTL = μ - 3σ
For a normal distribution, the natural tolerance limits include 99.73% of the
process output, and 0.27% of the process output falls outside them.
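As a minimal numeric sketch of these limits (the mu and sigma values below are illustrative placeholders, not from the text):

```python
# Sketch: natural tolerance limits of a normal process.
from scipy.stats import norm

mu, sigma = 100.0, 2.0            # illustrative process mean and std dev
UNTL = mu + 3 * sigma
LNTL = mu - 3 * sigma

inside = norm.cdf(3) - norm.cdf(-3)      # fraction within mu +/- 3 sigma
print(f"LNTL = {LNTL}, UNTL = {UNTL}")
print(f"inside = {inside:.4%}, outside = {1 - inside:.4%}")  # 99.73% / 0.27%
```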
Formal studies estimate capability through probability distributions or
percentages within specifications.
True process capability studies directly observe and control the process,
while product characterization studies analyze existing product samples.
Capability analysis data help predict future performance, aid product
development and monitoring, support supplier management, and drive process improvement.
Three primary techniques are used in process capability analysis:
• Histograms and probability plots
• Control charts
• Designed experiments
The histogram can be helpful in estimating process capability. Alternatively, a
stem-and-leaf plot may be substituted for the histogram.
If the quality engineer has access to the process and can control the data-
collection effort, the following steps should be followed prior to data collection:
• Choose machines: Select representative machines or isolate variability within
them (e.g., head-to-head).
• Define conditions: Fix processing conditions like speed, feed rates, temperature
(study their impact later).
• Select operators: Choose randomly if operator variability matters.
• Monitor and record: Track data collection order for each unit produced.
Example 8.1. Estimating Process Capability with a Histogram
Figure 8.2 presents a histogram of the bursting strength of 100 glass containers.
The data are shown in Table 8.1. What is the capability of the process?
The shape of the histogram implies that the distribution of bursting strength
is approximately normal.
Analysis of the 100 observations gives x̄ = 264.06 and s = 32.02.
Consequently, the process capability would be estimated as x̄ ± 3s,
or 264.06 ± 3(32.02), i.e., approximately 168 to 360 psi.
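A quick check of this arithmetic, using the summary statistics from the example:

```python
# Example 8.1: capability estimated as xbar +/- 3s.
xbar, s = 264.06, 32.02
print(f"{xbar - 3 * s:.2f} to {xbar + 3 * s:.2f} psi")  # about 168 to 360 psi
```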
Advantages of using histograms: they give a quick assessment of process
performance from the shape and spread of the distribution, and they may
reveal potential causes of poor performance, such as misalignment or
excessive variation.
Limitations of histograms: reliable conclusions require a stable, in-control
process, and a histogram does not directly show whether the process is in
statistical control.
• Probability plotting is an alternative to the histogram that can be used
to determine the shape, center, and spread of the distribution. It has
the advantage that it is unnecessary to divide the range of the variable
into class intervals. A probability plot is a graph of the ranked data
versus the sample cumulative frequency on special paper with a
vertical scale chosen so that the cumulative distribution of the
assumed type is a straight line.
• The normal probability plot can also be used to estimate process
yields and fallouts.
• For example, consider the following 20
observations on glass container
bursting strength: 197, 200, 215, 221,
231, 242, 245, 258, 265, 265, 271, 275,
277, 278, 280, 283, 290, 301, 318, and
346.
• From the straight line fitted to the plot, the mean of the normal
distribution is estimated as μ̂ = 264 psi and the standard deviation as
σ̂ = 33 psi.
• Note that these are not far from the sample average x̄ = 264.06 and
standard deviation s = 32.02.
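A sketch of this estimation with scipy, using the 20 observations listed above; for a normal probability plot, the fitted line's intercept approximates the mean and its slope approximates the standard deviation:

```python
# Normal probability plot of the 20 bursting-strength observations.
import numpy as np
from scipy.stats import probplot

strength = np.array([197, 200, 215, 221, 231, 242, 245, 258, 265, 265,
                     271, 275, 277, 278, 280, 283, 290, 301, 318, 346])

(osm, osr), (slope, intercept, r) = probplot(strength, dist="norm")
print(f"mu ≈ {intercept:.1f} psi, sigma ≈ {slope:.1f} psi, r = {r:.3f}")
# Close to the text's graphical estimates mu = 264 psi and sigma = 33 psi.
```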
• Figure 8.5 presents a normal
probability plot of times to failure
(in hours) of a valve in a chemical
plant. From examining this plot, we
can see that the distribution of
failure time is not normal.
• The display in Fig. 8.6 may be useful in selecting a distribution that
describes the data.
• This figure shows the regions in the (β₁, β₂) plane for several standard
probability distributions, where β₁ and β₂ are measures of skewness and
kurtosis, respectively. To use Fig. 8.6, calculate estimates of skewness and
kurtosis from the sample, say

β̂₁ = M₃² / M₂³ and β̂₂ = M₄ / M₂²

where Mⱼ = Σᵢ (xᵢ − x̄)ʲ / n is the j-th sample central moment.
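A minimal sketch of these moment estimates (the helper name beta1_beta2 is ours, not the text's):

```python
# Moment estimates of beta1 (squared skewness) and beta2 (kurtosis),
# used to locate a sample in the (beta1, beta2) plane of Fig. 8.6.
import numpy as np

def beta1_beta2(x):
    x = np.asarray(x, dtype=float)
    m = x.mean()
    M2 = ((x - m) ** 2).mean()   # second central moment
    M3 = ((x - m) ** 3).mean()   # third central moment
    M4 = ((x - m) ** 4).mean()   # fourth central moment
    return M3**2 / M2**3, M4 / M2**2

b1, b2 = beta1_beta2([197, 200, 215, 221, 231, 242, 245, 258, 265, 265,
                      271, 275, 277, 278, 280, 283, 290, 301, 318, 346])
print(b1, b2)   # a normal distribution has beta1 = 0 and beta2 = 3
```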
A simple, quantitative way to express process capability is through the
process capability ratio (PCR) Cp, which was introduced in Chapter 6:

Cp = (USL − LSL) / (6σ)

where USL and LSL are the upper and lower specification limits, respectively.
Cp is widely used in industry, but it is also widely misused.
In a practical application, the process standard deviation σ is almost always
unknown and must be replaced by an estimate σ̂. To estimate σ we typically use
either the sample standard deviation s or R̄/d₂ (when variables control charts
are used in the capability study). This results in an estimate of Cp, say

Ĉp = (USL − LSL) / (6σ̂)

EXP: As an example of the calculation of Cp, consider Example 6.1, which used
x̄ and R charts. The specifications on flow width are USL = 2.00 microns and
LSL = 1.00 microns, and σ̂ = R̄/d₂ = 0.1398. Thus, our estimate of the PCR is
Ĉp = (2.00 − 1.00)/(6 × 0.1398) = 1.192.

Based on the histogram below, it is assumed that flow width is approximately
normally distributed. Using the cumulative normal distribution table in the
Appendix, we can estimate that the process produces approximately 350 ppm
(parts per million) defective.
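A small check of this fallout figure, assuming a centered normal process as stated:

```python
# With a centered normal process, the nearest spec is 3*Cp sigmas from the mean.
from scipy.stats import norm

USL, LSL, sigma_hat = 2.00, 1.00, 0.1398
Cp = (USL - LSL) / (6 * sigma_hat)                # ≈ 1.192
ppm = 2 * norm.cdf(-3 * Cp) * 1e6                 # two-sided fallout
print(f"Cp = {Cp:.3f}, fallout ≈ {ppm:.0f} ppm")  # ≈ 350 ppm
```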
Equations 1 and 2 assume that the process has both upper and lower
specification limits. For one-sided specifications, one-sided process
capability ratios are used. They are defined as

Cpu = (USL − μ) / (3σ)   (upper specification only)
Cpl = (μ − LSL) / (3σ)   (lower specification only)
The PCR Cp also has a useful practical interpretation:

P̂ = (1/Ĉp) × 100%

is the percentage of the specification band used up by the process.
EXP: Continuing the previous example, the hard-bake process uses
P̂ = (1/1.192) × 100% = 83.89% of the specification band.
The process capability ratio is a measure of the ability of the process to
manufacture product that meets the specifications. This table (Table 8.2)
presents several values of the PCR Cp along with the associated values of
process fallout, expressed in defective or nonconforming parts per million
(ppm).
The ppm quantities in the table were calculated using the following important
assumptions:
1. The quality characteristic has a normal distribution.
2. The process is in statistical control.
3. In the case of two-sided specifications, the process mean is centered between
the lower and upper specification limits.
These assumptions are absolutely critical to the accuracy and validity of the
reported numbers, and if they are not valid, then the reported quantities may be
seriously in error.
This table presents some
recommended guidelines for
minimum values of the PCR.
The process capability ratio Cp does not take into account where the process
mean is located relative to the specifications; Cp simply measures the spread
of the specifications relative to the Six Sigma spread of the process. A
process capability ratio that does take process centering into account is
Cpk, defined as

Cpk = min(Cpu, Cpl) = min[(USL − μ)/(3σ), (μ − LSL)/(3σ)]

When Cpk = Cp, the process is centered at the midpoint of the specifications;
when Cpk < Cp, the process is off center.
The magnitude of Cpk relative to Cp is a direct measure of how off center the
process is operating. We usually say that Cp measures the potential
capability of the process, whereas Cpk measures its actual capability.
EXP: For the process in Figure 8.8(b), Ĉp = 2.0 but Ĉpk = min(Ĉpu, Ĉpl) = 1.5,
because the mean is off center.
Panels (a) and (b) in Figure 8.8 show identical values of Cp (2.0), but the
process in (b) has lower capability because it is not operating at the
midpoint of the interval between the specifications.
Panel (d) of Figure 8.8 illustrates the case in
which the process mean is exactly equal to one
of the specification limits, leading to Cpk = 0.
As panel (e) illustrates, when Cpk < 0 the
implication is that the process mean lies
outside the specifications.
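A sketch tying the panels together; the spec limits 38 and 62 and sigma = 2 are assumed here for illustration:

```python
# Cp and Cpk across the situations of Figure 8.8 (assumed LSL=38, USL=62, sigma=2).
def cp_cpk(mu, sigma, lsl, usl):
    cp = (usl - lsl) / (6 * sigma)
    cpu = (usl - mu) / (3 * sigma)
    cpl = (mu - lsl) / (3 * sigma)
    return cp, min(cpu, cpl)

for mu in (50, 53, 62, 65):   # centered, off center, at the USL, beyond the USL
    cp, cpk = cp_cpk(mu, 2, 38, 62)
    print(f"mu = {mu}: Cp = {cp:.2f}, Cpk = {cpk:.2f}")
# Cpk = Cp only when centered; Cpk = 0 at a spec limit; Cpk < 0 outside it.
```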
The usual interpretation of Cp and Cpk relies on a normal distribution. If
the underlying distribution is non-normal, statements about the expected
process fallout attributed to a particular value of Cp or Cpk may be in
error.

EXAMPLE:
The data in Figure 8.9 are surface-roughness measurements. The distribution
is skewed, suggesting non-normality. With USL = 32 microinches, x̄ = 10.44,
and s = 3.053, we get Ĉpu = (32 − 10.44)/(3 × 3.053) = 2.35, and Table 8.2
would suggest a fallout of less than one part per billion. Such statements
are likely inaccurate because the data are non-normal.
• One way to handle this is to transform the data so that in the new,
transformed metric the data have a normal-distribution appearance. In this
example, a reciprocal transformation was used.
• Figure 8.10 presents a histogram of the reciprocal values x* = 1/x. With
x̄* = 0.1025 and s* = 0.0244, the USL of 32 becomes a lower specification
limit in the transformed metric: LSL* = 1/32 = 0.03125. This gives
Ĉpl = (x̄* − LSL*)/(3s*) = 0.97, which implies that about 1,350 ppm are
outside of specifications, a much more realistic estimate than the first one.
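A check of the transformed-metric arithmetic, using the values from the example:

```python
# Capability in the transformed (reciprocal) metric, x* = 1/x.
from scipy.stats import norm

xbar_star, s_star = 0.1025, 0.0244
LSL_star = 1 / 32        # the USL of 32 becomes a lower spec limit: 0.03125

Cpl = (xbar_star - LSL_star) / (3 * s_star)
print(f"Cpl = {Cpl:.2f}")                # ≈ 0.97
# Direct normal calculation of the one-sided fallout gives ~1.8e3 ppm;
# the ≈1,350 ppm figure quoted above comes from the Table 8.2 lookup.
ppm = norm.cdf(-3 * Cpl) * 1e6
print(f"fallout ≈ {ppm:.0f} ppm")
```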
Luceño (1996) introduced the index Cpc to deal with non-normal data:

Cpc = (USL − LSL) / (6√(π/2) · E|X − T|), with T = ½(USL + LSL)

• Aiming for broader applicability, attempts were also made to adjust PCRs
for the Pearson and Johnson families of distributions, covering both normal
and non-normal cases.
When the distribution is normal and centered at T, E|X − T| = σ√(2/π), so the
denominator 6√(π/2)·E|X − T| (note 6√(π/2) ≈ 7.52) reduces to the usual 6σ.
• Another approach is to extend the definition of the standard capability
indices to non-normal processes. An example of this approach is the
quantile-based ratio

Cp(q) = (USL − LSL) / (x₀.₉₉₈₆₅ − x₀.₀₀₁₃₅)

• where x_α is the α-th quantile of the process distribution. For a normal
process this reduces to the usual Cp, since x₀.₉₉₈₆₅ − x₀.₀₀₁₃₅ = 6σ.
Cpk was developed to deal with a process whose mean μ is not centered between
the specification limits. However, Cpk alone is still an inadequate measure
of process centering:

• To characterize process centering satisfactorily, Cpk must be compared to Cp.
• Cpk depends inversely on σ and becomes large as σ approaches zero.
• A large value of Cpk by itself provides no information about the location
of the mean within the interval from LSL to USL.
Process capability ratio Cpm:

Cpm = (USL − LSL) / (6τ), where τ² = E[(x − T)²] = σ² + (μ − T)²

Estimation of Cpm:

Ĉpm = Ĉp / √(1 + V²), where V = (x̄ − T)/s

Both Cpk and Cpm coincide with Cp when μ = T and decrease as μ moves away
from T. Thus, a given value of Cpm places a constraint on the difference
between μ and the target value T.
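A minimal sketch of the Ĉpm estimate with made-up numbers:

```python
# Cpm-hat = Cp-hat / sqrt(1 + V^2), where V = (xbar - T)/s.
import math

def cpm_hat(xbar, s, lsl, usl, T):
    cp_hat = (usl - lsl) / (6 * s)
    V = (xbar - T) / s
    return cp_hat / math.sqrt(1 + V**2)

# Illustrative (assumed) values: on target vs. one sigma off target.
print(cpm_hat(50, 2, 38, 62, T=50))   # equals Cp-hat = 2.0 when xbar = T
print(cpm_hat(52, 2, 38, 62, T=50))   # shrinks as the mean drifts from T
```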
Process capability ratio Cpkm (Pearn et al., 1992):

Cpkm = Cpk / √(1 + (μ − T)²/σ²)

The motivation for this new ratio is increased sensitivity to departures of
the process mean μ from the desired target T. It is often referred to as a
"third generation" process capability ratio.
Confidence interval for Cp:

Industrial use of process capability ratios focuses on computing and
interpreting the point estimate of the desired quantity. Process capability
ratios are simply point estimates and, as such, are subject to statistical
fluctuation.

Point estimate of Cp: Ĉp = (USL − LSL) / (6s)

A 100(1 − α)% confidence interval for Cp is

Ĉp √(χ²_{α/2, n−1} / (n − 1)) ≤ Cp ≤ Ĉp √(χ²_{1−α/2, n−1} / (n − 1))

where χ²_{α/2, n−1} and χ²_{1−α/2, n−1} are the lower and upper α/2
percentage points of the chi-square distribution with n − 1 degrees of
freedom.
EXAMPLE
Suppose that a stable process has upper and lower specifications at USL = 62
and LSL = 38. A sample of size n = 20 from this process reveals that the
process mean is centered, and that the sample standard deviation s = 1.75.
Find a 95% CI on Cp.
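A sketch of the solution using the chi-square interval above (all inputs taken from the example statement):

```python
# 95% CI on Cp for n = 20, s = 1.75, USL = 62, LSL = 38.
from scipy.stats import chi2

USL, LSL, n, s = 62, 38, 20, 1.75
cp_hat = (USL - LSL) / (6 * s)                        # ≈ 2.29
lo = cp_hat * (chi2.ppf(0.025, n - 1) / (n - 1)) ** 0.5
hi = cp_hat * (chi2.ppf(0.975, n - 1) / (n - 1)) ** 0.5
print(f"Cp-hat = {cp_hat:.2f}, 95% CI ≈ ({lo:.2f}, {hi:.2f})")  # ≈ (1.57, 3.01)
```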
The reference distribution used to construct the interval differs by ratio:
the confidence interval for Cp is based on the chi-square distribution, while
approximate intervals for Cpk and Cpm are based on the standard normal (Z)
and t distributions, respectively.

Note that the confidence interval above uses s rather than R̄/d₂ to estimate
σ. This further emphasizes that the process must be in statistical control
for PCRs to have any real meaning: if the process is not in control, s and
R̄/d₂ could be very different, leading to very different values of the PCR.
TEST HYPOTHESIS
A practice that is becoming increasingly common in industry is to require a
supplier to demonstrate that the process capability ratio Cp meets or exceeds
some particular target value Cp0. This is formulated as the hypotheses

H0: Cp = Cp0 (the process is not capable)
H1: Cp ≥ Cp0 (the process is capable)

Kane (1986) provides tables for designing such tests.
EXAMPLE
A customer has told his supplier that, in order to qualify for business with
his company, the supplier must demonstrate that his process capability
exceeds Cp = 1.33. Thus, the supplier is interested in establishing a
procedure to test the hypotheses

H0: Cp = 1.33 (the process is not capable)
H1: Cp ≥ 1.33 (the process is capable)

The supplier wants to be sure that if the process capability is below 1.33
there will be a high probability of detecting this (say, 0.90), whereas if
the process capability exceeds 1.66 there will be a high probability of
judging the process capable (again, say, 0.90). This implies
α = β = 1 − 0.90 = 0.10.
TEST HYPOTHESIS
Using Kane's (1986) tables with α = β = 0.10, the required sample size is
n = 70 and the critical value is C = 1.46. Thus, to demonstrate capability,
the supplier must take a sample of n = 70 parts, and the sample process
capability ratio Ĉp must exceed C = 1.46.
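A sketch for evaluating the operating characteristics of this test for a chosen n and C, using the fact that (n − 1)s²/σ² follows a chi-square distribution (so Ĉp = Cp·√((n − 1)/χ²)):

```python
# P(Cp-hat >= C | true Cp) = P(chi2 <= (n-1) * (Cp/C)^2).
from scipy.stats import chi2

def p_judged_capable(cp_true, n, C):
    return chi2.cdf((n - 1) * (cp_true / C) ** 2, n - 1)

n, C = 70, 1.46   # values quoted from Kane's (1986) tables for this example
print(p_judged_capable(1.33, n, C))  # risk of passing a process with Cp = 1.33
print(p_judged_capable(1.66, n, C))  # chance of passing a process with Cp = 1.66
```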
There are seven quality management tools (the 7 QC tools):
01. Check sheets
02. Charts
03. Cause & effect diagram
04. Pareto chart (Pareto analysis)
05. Histogram
06. Scatter diagram
07. Control chart
BENEFITS
• Monitor process fluctuations within and beyond control limits.
• Recognize potential issues and intervene promptly.
• Identify patterns in plotted points that suggest solutions.
• Predict future performance.
• Generate new ideas for quality improvement.
OUTSTANDING POINTS
A control chart indicates the status of the whole process, showing which
products are defective and how, so that timely and reasonable improvements
are easy to identify.
EXAMPLE
Table 8.5 presents the
container bursting-strength
data in 20 samples of five
observations each (with the
LSL= 200).
SOLUTION
The process parameters may be estimated from the control chart as
μ̂ = x̿ = 264.06 and σ̂ = R̄/d₂ (d₂ = 2.326 for samples of size five).
Thus, the one-sided lower process capability ratio is estimated by

Ĉpl = (μ̂ − LSL) / (3σ̂)
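A sketch of the computation; x̿ = 264.06 appears earlier in the chapter, while the R̄ value below is a placeholder since Table 8.5 is not reproduced here:

```python
# Control-chart-based estimate of the one-sided lower PCR.
xbarbar = 264.06   # grand average from the x-bar chart (given earlier)
Rbar = 77.3        # hypothetical average range for samples of size 5
d2 = 2.326         # d2 factor for subgroup size n = 5
LSL = 200

sigma_hat = Rbar / d2
Cpl_hat = (xbarbar - LSL) / (3 * sigma_hat)
print(f"sigma-hat = {sigma_hat:.2f}, Cpl-hat = {Cpl_hat:.2f}")
```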
Designed Experiment Overview
• A systematic approach to varying the controllable input variables in a
process and analyzing the effects of these variables on the output.
• Helps discover which process variables influence the output.
• Determines the optimal levels at which to hold these variables to optimize
process performance.
• Useful in estimating process capability.
EXAMPLE
For example, consider a machine that fills bottles with a soft-drink
beverage. Each machine has a large number of filling heads that must
be independently adjusted. The quality characteristic measured is
the syrup content (in degrees brix) of the finished product.
SOLUTION
DEFECTIVE PRODUCT UNIT
The unit is something that is delivered to a
customer and can be evaluated or judged
as to its suitability. Some examples include:
• An invoice
• A shipment
• A customer order
• An enquiry or call
THE DEFECTS OR
NONCONFORMITIES

1. An error on an invoice
2. An incorrect or incomplete shipment
3. An incorrect or incomplete customer
order
4. A call that is not satisfactorily
completed
FORMULA
Process performance is often measured using attribute data, such as
nonconforming units or defectives.
The purpose of most measurement systems capability studies is to:
1. Determine how much of the total observed variability is due to the gauge
or instrument.
2. Isolate the components of variability in the measurement system.
3. Assess whether the instrument or gauge is capable (that is, suitable for
the intended application).
Two components of measurement error are usually of interest:
1. Repeatability (do we get the same observed value if we measure the same
unit several times under identical conditions?)
2. Reproducibility (how much difference in observed values do we experience
when units are measured under different conditions, such as different
operators, time periods, and so forth?)
For the basic ideas of measurement systems analysis (MSA), consider a simple
but reasonable model for measurement system capability studies:

y = x + ε

where y is the total observed measurement, x is the true value on the
measured unit, and ε is the measurement error, with x and ε independent. The
variance of the total observed measurement y is then

σ²Total = σ²P + σ²Gauge
Example 8.7:
An instrument is to be used as part of a proposed SPC implementation. The quality-
improvement team involved in designing the SPC system would like to get an
assessment of gauge capability. Twenty units of the product are obtained, and the
process operator who will actually take the measurements for the control chart uses
the instrument to measure each unit of product twice.
The standard deviation of measurement error, σGauge, can be estimated from
the x̄ and R chart of the repeated measurements: σ̂Gauge = R̄/d₂, with
d₂ = 1.128 for the two repeat measurements per unit.
Measures of gauge capability: the precision-to-tolerance (P/T) ratio, the
signal-to-noise ratio (SNR), and the discrimination ratio (DR).

Precision-to-tolerance (P/T) ratio
The P/T ratio is often used to assess the measurement capability of a system
or device relative to the accuracy requirements:

P/T = k·σ̂Gauge / (USL − LSL)

Popular choices for the constant k are k = 5.15 (a 95% tolerance interval
that contains at least 99% of a normal population) and k = 6 (the usual
natural tolerance limits of a normal population).
Values of the estimated P/T ratio of 0.1 or less are often taken to imply
adequate gauge capability.
Signal-to-noise ratio
The SNR is typically used to evaluate the accuracy of a test or measurement.
It is defined as the ratio of the power of the signal (meaningful input) to
the power of the background noise (meaningless or unwanted input); in gauge
studies it is

SNR = √(2ρP / (1 − ρP)), where ρP = σ²P / σ²Total

A high SNR indicates that the signal is much greater than the noise, which
improves the accuracy of the test or measurement.
=> A value of 5 or greater is recommended, and a value of less than 2
indicates inadequate gauge capability.
Discrimination ratio
The DR is typically used to evaluate the discriminating power of a
measurement tool:

DR = (1 + ρP) / (1 − ρP)

A high DR indicates that the measurement tool can effectively distinguish
between different parts.
=> For a gauge to be capable, the DR must exceed 4.
Example 8.7 (cont.)
The part used in Example 8.7 has USL = 60 and LSL = 5. Taking k = 6, the P/T
ratio is

P/T = 6σ̂Gauge / (USL − LSL) = 6σ̂Gauge / 55

=> The P/T ratio is less than 0.1, so the gauge capability is adequate.
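A sketch of the P/T arithmetic; USL, LSL, and k are from the text, while the gauge standard deviation below is a placeholder estimate:

```python
# P/T ratio for Example 8.7.
sigma_gauge = 0.887        # assumed estimate of measurement-error std dev
USL, LSL, k = 60, 5, 6

PT = k * sigma_gauge / (USL - LSL)
print(f"P/T = {PT:.3f}")   # < 0.1 -> adequate gauge capability
```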
Example 8.7 (cont.)
To compute the SNR we need several steps:
Step 1: Estimate the total variability σ̂²Total from all the measurements.
Step 2: Estimate the product (part) variability as σ̂²P = σ̂²Total − σ̂²Gauge.
Step 3: Calculate the fraction of total variability due to the process
(part): ρ̂P = σ̂²P / σ̂²Total.
Step 4: Calculate the SNR.
The gauge in Example 8.7 would not meet the suggested requirement of an SNR
of at least 5.
Example 8.7 (cont.)
Another measure of gauge capability that has been proposed is the ratio of
measurement system variability to total variability:

ρM = σ²Gauge / σ²Total

From the data in Example 8.7, the estimate is ρ̂M ≈ 0.0786.
=> The variance of the measuring instrument contributes about 7.86% of the
total observed variance of the measurements.
Example 8.7 (cont.)
The discrimination ratio for these data is

DR = (1 + ρ̂P) / (1 − ρ̂P)

=> Since the DR is greater than 4, the gauge in Example 8.7 is capable.
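Both ratios follow from the 7.86% gauge share of total variance reported above:

```python
# SNR and DR from the fraction of total variance due to the gauge.
import math

rho_gauge = 0.0786       # sigma^2_gauge / sigma^2_total (from above)
rho_P = 1 - rho_gauge    # sigma^2_P / sigma^2_total

SNR = math.sqrt(2 * rho_P / (1 - rho_P))
DR = (1 + rho_P) / (1 - rho_P)
print(f"SNR = {SNR:.2f} (< 5, inadequate), DR = {DR:.1f} (> 4, capable)")
```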
Finally, measurement systems capability studies investigate two components of
measurement error, called the repeatability and the reproducibility of the
gauge.
Repeatability reflects the basic inherent precision of the gauge itself.
Reproducibility is the variability due to different operators using the gauge.
Factorial experiment
The following example of a gauge R&R study is taken from the paper by Houf
and Berman (1988): 10 parts, 3 operators, and 3 measurements per part.

If there are a randomly selected parts and b randomly selected operators, and
each operator measures every part n times, then the measurements (i = part,
j = operator, k = measurement) can be represented by the model

y_ijk = μ + P_i + O_j + (PO)_ij + ε_ijk

The model parameters P_i, O_j, (PO)_ij, and ε_ijk are all independent random
variables that represent the effects of parts, operators, the interaction or
joint effects of parts and operators, and random error.
We then assume that these random variables are normally distributed with mean
zero and variances V(P_i) = σ²P, V(O_j) = σ²O, V[(PO)_ij] = σ²PO, and
V(ε_ijk) = σ².

=> The variance of any observation is

V(y_ijk) = σ²P + σ²O + σ²PO + σ²

We will use the ANOVA method to estimate these variance components. The
procedure involves partitioning the total variability in the measurements
into the following component parts:

SS_Total = SS_Parts + SS_Operators + SS_P×O + SS_Error

Next, each sum of squares on the right-hand side is divided by its degrees of
freedom to produce the mean squares MS_P, MS_O, MS_PO, and MS_E. The variance
components can then be estimated by

σ̂² = MS_E
σ̂²PO = (MS_PO − MS_E) / n
σ̂²O = (MS_O − MS_PO) / (an)
σ̂²P = (MS_P − MS_PO) / (bn)

Balanced ANOVA
routine in Minitab
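A sketch of the method-of-moments estimators implied by the expected mean squares (the numeric mean squares below are hypothetical, not taken from Houf and Berman):

```python
# Variance components for the two-factor random-effects gauge R&R model,
# with a parts, b operators, and n repeat measurements.
def variance_components(MSP, MSO, MSPO, MSE, a, b, n):
    sigma2_e = MSE                      # repeatability (error)
    sigma2_po = (MSPO - MSE) / n        # part x operator interaction
    sigma2_o = (MSO - MSPO) / (a * n)   # operator
    sigma2_p = (MSP - MSPO) / (b * n)   # part
    return sigma2_p, sigma2_o, sigma2_po, sigma2_e

# Usage with hypothetical mean squares (a = 10 parts, b = 3 operators, n = 3):
print(variance_components(MSP=437.3, MSO=19.6, MSPO=2.7, MSE=0.51,
                          a=10, b=3, n=3))
```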
A drawback of the ANOVA method is that negative variance component estimates
can occur. To handle this, we can assume that a negative estimate implies the
variance component is really zero, set it to zero, and leave the other
nonnegative estimates unchanged.

For example, if σ̂²PO is negative, it will usually be because the interaction
source of variability is nonsignificant. We should take this as evidence that
σ²PO really is zero, that there is no interaction effect, and fit a reduced
model of the form

y_ijk = μ + P_i + O_j + ε_ijk
Typically we think of σ² as the repeatability variance component, and the
gauge reproducibility as the sum of the operator and the part × operator
variance components:

σ̂²Repeatability = σ̂² and σ̂²Reproducibility = σ̂²O + σ̂²PO

So the estimate of the gauge variance is

σ̂²Gauge = σ̂²Repeatability + σ̂²Reproducibility

The lower and upper specifications on this power module are LSL = 18 and
USL = 58. Therefore the P/T ratio for the gauge is estimated as

P/T = 6σ̂Gauge / (USL − LSL) = 6σ̂Gauge / 40

=> This gauge would not be considered capable, because the estimate of the
P/T ratio exceeds 0.1.
• The gauge R&R study and the ANOVA procedure in the previous sections
produced only point estimates of the variance components in the experimental
model (σ²Gauge, σ²Repeatability, σ²Reproducibility).
• It can be informative to calculate a confidence interval for each of these
variance components as well (Burdick et al., 2005 (*)).

(*) Burdick, R. K., Borror, C. M., and Montgomery, D. C. (2005). Design and
Analysis of Gauge R&R Studies: Making Decisions with Confidence Intervals in
Random and Mixed ANOVA Models. Society for Industrial and Applied Mathematics.
According to Burdick et al. (2005), the available methods and their uses are:
• Method of moments: estimating the variance components and constructing
confidence intervals for the gauge R&R parameters.
• Likelihood-based methods (likelihood ratio method, profile likelihood
method, modified large-sample method): constructing confidence intervals for
the gauge R&R parameters.
• Bootstrap method: constructing confidence intervals for the gauge R&R
parameters and the misclassification rates.
• Analysis of variance method: constructing confidence intervals for the
standard deviation ratios in the nested model.
In this section, we focus only on the modified large-sample (MLS) method. It
often produces good results and is relatively easy to implement for the
standard gauge capability experiment, in which parts and operators are
considered to be random factors.
In previous sections we introduced several ways to summarize the capability
of a gauge: the P/T ratio, the signal-to-noise ratio, the discrimination
ratio, the ratio of process variability to total variability, and the ratio
of measurement system variability to total variability.
• None of these really describes the capability of the gauge in a direct way.
• Instead, to determine whether a measurement system is effective, we need to
specify how well that system discriminates between good and bad parts.
y: measured value of a randomly selected part.
x: true value of the part (mean = μ, variance = σ²P).
ε: measurement error (mean = 0, variance = σ²Gauge).
Here x and ε are normally and independently distributed random variables.
Consider two events: (1) a part is in conformance, and (2) a part passes the
measurement system.
• (1) true and (2) false: a conforming part is misclassified as a failure.
This is called a false defective (producer's risk).
• (1) false and (2) true: a failed part is misclassified as a good part.
This is called a passed defective (customer's risk).
[Figure: Missed fault (MF) and false failure (FF) regions of a measurement
system shown on a bivariate normal distribution contour. From Burdick,
Borror, and Montgomery (2003).]
How can we determine the producer's-risk and customer's-risk probabilities
for a measurement system?
In practice, we usually cannot determine the true values of μ, σ²P, or
σ²Gauge, so it is very helpful to provide confidence intervals for these
parameters when calculating the producer's-risk and customer's-risk
probabilities.
One way to do this is to compute those probabilities under two different
scenarios: a pessimistic one and an optimistic one.
For example, a pessimistic scenario might consider the worst possible
performance for the measurement system together with the worst possible
capability for the manufacturing process.
For the optimistic scenario, we would use the best possible performance for
the measurement system together with the worst possible capability for the
manufacturing process.
• In previous sections we assumed that the measurements are numerical, such
as physical dimensions or properties.
• There are many situations where the output of a gauge is an attribute, such
as pass/fail.
• Nominal (categorical) or ordinal data are also relatively common.
• Attribute gauge capability studies can be carried out in many of these
situations.
For example, suppose a bank uses manual underwriting to analyze mortgage loan
applications. The attribute gauge capability analysis in this situation
determines the proportion of the time that the underwriter agrees with
him/herself and the proportion of the time that the underwriter agrees with
the correct classification.
Results:
• While there is considerable subjectivity in interpreting the results of
attribute gauge capability studies, this study does not show great agreement.
• More training may be needed to ensure that underwriters produce more
consistent and correct decisions on mortgage loan applications.
• In customer–supplier relationships it is often very important for the two
parties to be able to reliably determine that the supplier's product is
within the customer's specifications.
• If the customer's and supplier's measurement systems are not in agreement,
then differences in opinion about product quality can occur, and the decision
to accept or reject a shipment may become a basis for dispute between the two
parties.
• It is common practice to compare the supplier's and customer's measurements
of a given quantitative product characteristic in a shipment of product using
a linear regression technique (the R² statistic).
• However, Nachtsheim and Becker (2011) show that the R² statistic is never
an appropriate statistic for this purpose. They suggest an approach for
comparing measurement systems based on one described by Hawkins (2002), as
sketched in the code below:
1) Compute the n sums Sᵢ and n differences Dᵢ of the paired measurements.
2) Plot the values of Dᵢ on the y-axis versus the values of Sᵢ on the x-axis.
3) Check for non-constant variance.
4) Check for outliers.
5) Check for linear trend.
6) Check for curvature.
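A minimal sketch of the Hawkins (2002)-style comparison; the measurement values below are made up for illustration:

```python
# For each part, form the sum and difference of the two systems' measurements.
import numpy as np

y1 = np.array([50.1, 48.7, 52.3, 49.9, 51.2])   # supplier measurements (made up)
y2 = np.array([50.4, 48.5, 52.9, 50.1, 51.6])   # customer measurements (made up)

S = y1 + y2   # sums: proxy for each part's true magnitude
D = y1 - y2   # differences: disagreement between the two systems

# A regression of D on S helps check for a linear trend (step 5); residual
# spread across S indicates non-constant variance (step 3); large residuals
# flag outliers (step 4).
slope, intercept = np.polyfit(S, D, 1)
print(f"D ≈ {intercept:.2f} + {slope:.3f}·S")
```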
This section discusses some aspects of setting specifications on components
to ensure that the final product meets customer specifications.
• In many cases, the dimension of an item is a linear combination of the
dimensions of the component parts. That is, if the dimensions of the
components are x₁, x₂, …, xₙ, then the dimension of the final assembly is

y = a₁x₁ + a₂x₂ + ⋯ + aₙxₙ

where the aᵢ are constants.
• If the xᵢ are normally and independently distributed with means μᵢ and
variances σᵢ², then y is normally distributed with mean μ_y = Σ aᵢμᵢ and
variance σ²_y = Σ aᵢ²σᵢ².
• A linkage consists of four components whose lengths x₁, x₂, x₃, and x₄ are
shown below.
• The lengths can be assumed independent, because the components are produced
on different machines.
• Determine the proportion of linkages that meet the customer specification
on overall length of 12 ± 0.10.
• First find the mean and variance of the overall length
y = x₁ + x₂ + x₃ + x₄, which is normally distributed; then evaluate the
proportion within specification from the normal distribution (see the sketch
below).
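A sketch of the calculation; the component means and standard deviations below are made up (the actual values are in the figure), chosen so the nominal lengths sum to 12:

```python
# Fraction of linkages within the 12 +/- 0.10 overall-length specification.
from scipy.stats import norm

means = [2.0, 4.5, 3.0, 2.5]          # hypothetical nominal lengths, sum = 12
sigmas = [0.01, 0.02, 0.015, 0.01]    # hypothetical component std devs

mu_y = sum(means)
sd_y = sum(s**2 for s in sigmas) ** 0.5   # variances of independent parts add

p_in_spec = norm.cdf(12.10, mu_y, sd_y) - norm.cdf(11.90, mu_y, sd_y)
print(f"mu = {mu_y}, sd = {sd_y:.4f}, P(in spec) = {p_in_spec:.4f}")
```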
• In some problems, the dimension of interest may be a nonlinear function of
the n component dimensions x₁, x₂, …, xₙ, say y = g(x₁, x₂, …, xₙ).
• The usual approach is to approximate the nonlinear function g by a linear
function of the xᵢ in the region of interest.
• If μ₁, μ₂, …, μₙ are the nominal dimensions associated with the components
x₁, x₂, …, xₙ, then expanding the right-hand side in a Taylor series about
μ₁, μ₂, …, μₙ gives

y = g(μ₁, μ₂, …, μₙ) + Σᵢ (xᵢ − μᵢ) ∂g/∂xᵢ |_μ + R

where R represents the higher-order terms.
• Neglecting the terms of higher order, we can apply the expected value and
variance operators to obtain

μ_y ≈ g(μ₁, μ₂, …, μₙ) and σ²_y ≈ Σᵢ (∂g/∂xᵢ |_μ)² σᵢ²

• This procedure for finding an approximate mean and variance of a nonlinear
combination of random variables is sometimes called the delta method. The
equation for the variance of y is often called the transmission of error
formula.
• The voltage across the points (a, b) is required to be 100 ± 2 V.
• Assume that the component random variables I and R are normally and
independently distributed with means equal to their nominal values.
• Suppose that I and R are centered at their nominal values and that the
natural tolerance limits are defined so that α = 0.0027; then
Z_{α/2} = Z_{0.00135} = 3.

I = 25 ± 1, so I ~ N(25, σ²_I) with 24 ≤ I ≤ 26 and σ_I = 1/3.
R = 4 ± 0.06, so R ~ N(4, σ²_R) with 3.94 ≤ R ≤ 4.06 and σ_R = 0.02.

• From Ohm's law, we know that the voltage is V = IR. Expanding V in a Taylor
series and neglecting the terms of higher order, the mean and variance of the
voltage are approximately

μ_V = μ_I μ_R = 100 V and σ²_V ≈ μ_R² σ²_I + μ_I² σ²_R ≈ 2.03

• The probability that the voltage will fall within the design specifications
is

P(98 ≤ V ≤ 102) = Φ((102 − 100)/1.42) − Φ((98 − 100)/1.42) ≈ 0.84
• Only about 84% of the observed output voltages will fall within the design
specifications, so the process is not capable of meeting them.
• The natural tolerance limits for the output voltage are 100 ± 3(1.42), or
about 95.7 V to 104.3 V, and the process capability ratio is

Cp = (USL − LSL) / (6σ_V) = 4 / (6 × 1.42) ≈ 0.47
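A check of the whole example; σ_I and σ_R follow from the ±3σ natural tolerance limits given above:

```python
# Delta-method (transmission of error) check for V = I*R.
from scipy.stats import norm

mu_I, sd_I = 25, 1 / 3
mu_R, sd_R = 4, 0.02

mu_V = mu_I * mu_R                                  # = 100 V
var_V = mu_R**2 * sd_I**2 + mu_I**2 * sd_R**2       # transmission of error
sd_V = var_V ** 0.5                                 # ≈ 1.42

p = norm.cdf(102, mu_V, sd_V) - norm.cdf(98, mu_V, sd_V)
Cp = (102 - 98) / (6 * sd_V)
print(f"sd_V = {sd_V:.3f}, P(in spec) = {p:.3f}, Cp = {Cp:.2f}")  # ≈ 0.84, 0.47
```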
Two procedures for estimating natural tolerance limits:
• one for situations in which the normality assumption is reasonable, and
• a nonparametric approach useful when the normality assumption is
inappropriate.
Unless the product specifications coincide with or exceed the natural
tolerance limits of the process (PCR ≥ 1), a high percentage of the
production will be outside specifications, resulting in a high loss or rework
rate.
In the ideal situation where the quality characteristic is normally
distributed with known mean μ and known variance σ², the tolerance limits
containing 100(1 − α)% of the distribution are simply

μ ± Z_{α/2} σ   (1)

If the random variable is normally distributed but the mean and variance are
unknown, and a sample of n observations is available, the sample mean x̄ and
sample standard deviation s may be substituted in (1). Because x̄ and s are
only estimates, the resulting limits are taken as x̄ ± Ks, where the constant
K is chosen so that the limits contain at least 100(1 − α)% of the
distribution with a stated probability γ.

Note the distinction:
• Confidence limits provide an interval estimate of a parameter of a
distribution.
• Tolerance limits indicate the limits between which we can expect to find a
specified proportion of a population.
Example 8.12. Constructing a Tolerance Interval

The manufacturer of a solid-fuel rocket propellant is interested in finding
tolerance limits of the process such that 95% of the burning rates will lie
within these limits with probability 0.99. It is known from previous
experience that the burning rate is normally distributed. A random sample of
25 observations shows that the sample mean and variance of burning rate are
x̄ = 40.75 and s² = 1.87 (so s = 1.37), respectively.

Since α = 0.05, γ = 0.99, and n = 25, we find K = 2.972 from Appendix
Table VII. Therefore, the required tolerance limits are
x̄ ± 2.972s = 40.75 ± (2.972)(1.37) = 40.75 ± 4.07 = [36.68, 44.82].
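A one-line check of this computation, with K = 2.972 taken from the table as stated:

```python
# Two-sided normal tolerance limits xbar +/- K*s for Example 8.12.
xbar, s, K = 40.75, 1.37, 2.972
print(f"[{xbar - K * s:.2f}, {xbar + K * s:.2f}]")   # -> [36.68, 44.82]
```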
Nonparametric tolerance limits are intervals based on the distribution of the
extreme values in a sample from an arbitrary continuous distribution.

For two-sided tolerance limits (the smallest and largest sample
observations), the sample size required to ensure that, with probability γ,
at least 100(1 − α)% of the distribution is contained is approximately

n ≈ 1/2 + ((2 − α)/α) · (χ²_{γ,4} / 4)

For one-sided tolerance limits, we must take a sample of

n = ln(1 − γ) / ln(1 − α)

• Nonparametric tolerance limits have limited practical value:
• To construct suitable intervals that contain a relatively large fraction of
the distribution with high probability, large samples are required.
• If one can specify the form of the distribution, it is possible for a given
sample size to construct tolerance intervals that are narrower than those
obtained from the nonparametric approach.
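A sketch of the sample-size formulas above, for 95% coverage with 95% confidence:

```python
# Sample sizes for nonparametric tolerance limits.
import math
from scipy.stats import chi2

alpha, gamma = 0.05, 0.95

n_two = 0.5 + ((2 - alpha) / alpha) * chi2.ppf(gamma, 4) / 4   # two-sided
n_one = math.log(1 - gamma) / math.log(1 - alpha)              # one-sided
print(round(n_two), math.ceil(n_one))   # ≈ 93 and 59
```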
Question 1: What do we call it when a product is not qualified but still
passes the measurement system?

A. False defective

B. Producer’s risk

C. Passed defective

D. Both A and B
Question 2: Measurements are not only expressed numerically but
can also be described using attributes.

A. True

B. False

C. True but not enough

D. True and False at the same time


Question 3: In the probability plotting section, to choose a suitable
distribution we must calculate the point (β̂₁, β̂₂). What do β₁ and β₂
represent?

A. Skewness and kurtosis

B. Skewness and mode

C. Variance and kurtosis

D. Mode and IQR


Question 4: Which is the measure of gauge capability?

A. Precision-to-total ratio

B. Size-to-noise ratio

C. Discrimination ratio

D. All of the above


Question 5: Which task can capability analysis NOT do?

A. Predict future performance

B. Guarantee zero defect

C. Aid product development

D. Monitoring, managing suppliers,


and improving processes.
