Statistical Process Control: 2.1 Quality Defined
Garvin stresses that a manufacturer should not strive to be first on all eight
dimensions of quality. Rather, it should select a number of dimensions on
which to compete.
Traditionally, the quality control departments in factories compete with
regard to the conformance dimension of quality. It is their responsibility
to ensure that requirements set on a quality characteristic are met. Such
requirements are usually stated in the form of specification limits. All
parts within limits are classified as conforming. The objective then is to
produce “zero defects”. Sullivan (1984) argued that this conformance-to-
specification-limits approach effectively prevents ongoing quality improve-
ment. As long as all outcomes of a production process are within specifi-
cation limits, a process engineer would have great difficulty in convincing
his plant manager to make any investment for the improvement of quality.
Sullivan advocates defining quality as “uniformity around the target”. In
this more modern point of view, put forward by, among others, Deming and
Taguchi, any deviation from the target reduces reliability and increases costs,
in the form of losses to both the plant and the customer. Operational objectives
directed towards ongoing quality improvement should therefore not be stated
in terms of specification limits such as “zero defects”: attention to quality
improvement diminishes as soon as the manufacturing process is able to
produce amply within the specification limits. A more sustained drive for
ongoing quality improvement is obtained if the aim is to reduce variation
around the target.
This relation between “quality” and “variation” is summarized in the
following phrase which can be found in Montgomery (1996):
Even more insightful is the schematic presentation that Snee gave of sta-
tistical thinking in quality improvement; see Figure 2.1.
2.2. REDUCING VARIABILITY

[Figure 2.1: Snee’s schematic of statistical thinking in quality improvement. Main chain: all work is a process → processes are variable → analyze process variation → develop process knowledge → reduce variation → improved quality → satisfied customers. Two feedback loops act on “reduce variation”: remove special-cause variation by controlling the process, and reduce common-cause variation by changing the process.]
In the first two boxes of Figure 2.1, it is indicated why statistical think-
ing is a logical approach to follow in all activities that are aimed at improv-
ing quality. All work that is done can be viewed as a process. A process
can be defined as (see Nolan and Provost (1990)): “a set of causes and con-
ditions that repeatedly come together to transform inputs into outcomes”.
The inputs might include people, materials, or information. The outcomes
include products, services, behavior, or people.
In all such processes, variation is encountered. Careful analysis of this
variation, combined with knowledge of the process, may lead to a reduction
of variation. It is therefore important for a manager to realize that close
cooperation with those who work with the process (e.g. operators) is an
absolute necessity for successful quality improvement. The people who
work with the process possess much of the knowledge that is needed to
reduce variation.
Reduction of variation may be accomplished along one of two paths. The
first is to remove special causes of variation, which is the responsibility of
the people working with the process. The other path is to reduce the effect
of common causes of variation, which requires action from management.
Removal of common causes requires a different approach than removal of
special causes of variation.
[Figure: a Shewhart control chart. The value of the statistic is plotted against the subgroup number (1, · · · , 10); a point falling above the Upper Control Limit (UCL) constitutes an out-of-control signal.]
H0 : F1 = F2 = · · · = Fk ≡ F0
where c(n, k, p) is the pth percentile of the null distribution of (Ti − Mk )/Vk .
For the LCL and the UCL of a control chart, different constants c(n, k, pLCL )
and c(n, k, pUCL ) must be determined.
In situations where a location parameter or a spread parameter of the
distribution function of Ti is known, these values are used in (2.1) instead
of their estimates.
Note that the elements of the sequence {T1 , · · · , Tk } are mutually inde-
pendent, but this does not, in general, hold for Ti , Mk , and Vk , since they
are (partly) based on the same set of observations.
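As an illustration of how such chart limits may be computed in practice, the following sketch standardizes subgroup means against estimates of location and spread. Since equation (2.1) is not reproduced in this excerpt, Mk and Vk are taken here, purely as an assumption for this sketch, to be the mean and the standard deviation of the subgroup means T1, · · · , Tk; the data are hypothetical.

```python
import statistics

def control_limits(subgroups, c_lcl=-3.0, c_ucl=3.0):
    """Sketch of chart limits for subgroup statistics T_i.

    M_k and V_k are taken here as the mean and standard deviation of
    the subgroup means -- illustrative stand-ins, since the exact
    definitions belong to equation (2.1) of the text.
    """
    t = [statistics.fmean(s) for s in subgroups]   # T_i: subgroup means
    m_k = statistics.fmean(t)                      # location estimate M_k
    v_k = statistics.stdev(t)                      # spread estimate V_k
    return m_k + c_lcl * v_k, m_k + c_ucl * v_k

# Ten subgroups of size n = 4 (hypothetical data).
data = [[9.8, 10.1, 10.0, 9.9], [10.2, 9.9, 10.0, 10.1],
        [9.7, 10.0, 10.3, 10.0], [10.1, 10.1, 9.8, 10.0],
        [9.9, 10.2, 10.0, 9.9], [10.0, 9.8, 10.1, 10.2],
        [10.3, 9.9, 9.8, 10.0], [9.9, 10.0, 10.1, 9.9],
        [10.0, 10.2, 9.9, 10.1], [9.8, 9.9, 10.2, 10.0]]
lcl, ucl = control_limits(data)
print(lcl, ucl)  # limits bracket the grand mean (about 10)
```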
In most literature on SPC it is assumed that Fi is a normal distribution
function with expectation µi and variance σi2 . The mean and/or variance
of the observations may change over time due to the presence of special
causes of variation. With these assumptions, the process is in control if
and only if µi = µ for some µ and if σi = σ for some σ for all i = 1, · · · , k.
For this reason, a production process is in many cases monitored using
two control charts: one for the standard deviation and one for the mean
of the process.
Shewhart did not consider statistical arguments to determine the con-
stants c(n, k, pLCL ) and c(n, k, pUCL ). He decided to choose, “based on eco-
nomic considerations”
c(n, k, pLCL ) = −3
and
c(n, k, pUCL ) = 3.
These values turn out to work well in many practical cases. The underlying
statistical arguments for a control chart for the mean of normal observations
are the following. Suppose that we are testing the hypothesis H0 : µ1 =
· · · = µk ≡ µ, where µ is known, assuming a constant known variance σ2.
Furthermore, assume that the sample means Ti = (X1i + · · · + Xni)/n
are plotted in a control chart with limits LCL = µ − 3σ/√n and UCL =
µ + 3σ/√n. Then we have pLCL = 1 − pUCL = 0.00135, so that under these
assumptions a false out-of-control signal is quite unlikely.
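The value 0.00135 can be verified from the standard normal distribution function: under H0, the plotted mean Ti falls below µ − 3σ/√n with probability Φ(−3). A minimal check using only the Python standard library:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

p_lcl = norm_cdf(-3.0)               # P(T_i < LCL) under H0
p_ucl_exceed = 1.0 - norm_cdf(3.0)   # P(T_i > UCL) under H0
print(round(p_lcl, 5))  # 0.00135
```

By symmetry of the normal distribution, the probabilities of falling below the LCL and above the UCL are equal, so the total false-alarm probability per sample is about 0.0027.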
If an out-of-control signal is generated in Phase I, a search is initiated
for a responsible special cause of variation. If such a cause is found, action
should be taken to prevent it from recurring. In cases where a special
cause is found and removed, the corresponding sample no longer provides
information about the in-control state of the process. Therefore, in such
cases, it should be removed from the data set, and the remaining k − 1
subsamples should be compared to re-estimated control limits.
This procedure is repeated until no more out-of-control signals are
generated, or until underlying special causes either cannot be found or
cannot be removed. At the end of Phase I, we have a data set at our
disposal of, say, m ≤ k subsamples that provides information concerning
the variability that can be attributed to common causes of variation. This
information is needed for Phase II, when samples are drawn online.
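The Phase I screening loop can be sketched as follows. Two simplifying assumptions are made for this sketch only: the limits are recomputed as the mean of the retained subgroup means plus or minus three of their standard deviations, and every signalling subgroup is assumed to have a special cause that is found and removed.

```python
import statistics

def phase_one(means, c=3.0):
    """Iteratively drop subgroup means outside recomputed 3-sigma limits."""
    retained = list(means)
    while True:
        m = statistics.fmean(retained)
        s = statistics.stdev(retained)
        inside = [t for t in retained if m - c * s <= t <= m + c * s]
        if len(inside) == len(retained):   # no new signals: Phase I done
            return inside
        retained = inside                  # drop flagged subgroups, re-estimate

# Fifteen hypothetical subgroup means; one is far off target.
means = [10.0, 10.1, 9.9, 10.2, 9.8, 10.0, 10.1, 9.9,
         10.0, 10.1, 9.9, 10.2, 9.8, 10.0, 20.0]
clean = phase_one(means)
print(len(clean))  # 14: the subgroup mean of 20.0 has been removed
```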
H0 : Ff = F0
H1 : Ff ≠ F0.
E(Yi ) = µ
and
Cov(Yi , Yi−k ) = γk ,
γ−k = γk
since
Cov(Yi , Yi+k ) = Cov(Yi+k , Yi ) = Cov(Yi , Yi−k ).
where

σY2 = Var(Yt) = σε2 /(1 − φ2).
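Assuming the formula above refers to a stationary AR(1) process Yt − µ = φ(Yt−1 − µ) + εt (the usual context for this variance, although the model equation itself falls outside this excerpt), taking variances of both sides gives σY2 = φ2 σY2 + σε2. Iterating this recursion converges to the fixed point σε2 /(1 − φ2) whenever |φ| < 1, which the following sketch checks numerically:

```python
def ar1_stationary_variance(phi, sigma_eps2, iters=200):
    """Iterate v <- phi^2 * v + sigma_eps^2; for |phi| < 1 this
    converges to the stationary variance sigma_eps^2 / (1 - phi^2)."""
    v = 0.0
    for _ in range(iters):
        v = phi * phi * v + sigma_eps2
    return v

phi, s2 = 0.5, 1.0
v = ar1_stationary_variance(phi, s2)
print(v, s2 / (1 - phi * phi))  # both ≈ 1.3333
```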
If we introduce the backward shift operator B, where BYt = Yt−1 , then (2.4)
can be rewritten as
φ(B)(Yt − µ) = εt ,
Yt = µ + εt − θ1 εt−1 − · · · − θq εt−q .
Yt − µ = φ1 (Yt−1 − µ) + · · · + φp (Yt−p − µ)
+ εt − θ1 εt−1 − · · · − θq εt−q for t ∈ Z,
where φ(B) and θ(B) are polynomials of degrees p and q, respectively. Such
processes are stationary if the roots of the polynomial φ(·) lie outside the
unit circle. The class of ARMA(p,q) models can be used to model a wide
range of stationary time series, with only a few parameters to estimate. In
practice, values of p and q larger than 2 are rarely encountered.
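The root condition above is easy to check numerically. The sketch below does so for an AR(2) polynomial φ(z) = 1 − φ1 z − φ2 z2, solving the quadratic with the standard library and testing whether both roots lie outside the unit circle; the coefficient values are hypothetical examples.

```python
import cmath

def ar2_is_stationary(phi1, phi2):
    """Check whether phi(z) = 1 - phi1*z - phi2*z^2 has all of its
    roots outside the unit circle (the stationarity condition)."""
    if phi2 == 0.0:                  # AR(1) special case: root z = 1/phi1
        return phi1 == 0.0 or abs(1.0 / phi1) > 1.0
    # Roots of -phi2*z^2 - phi1*z + 1 = 0 via the quadratic formula
    # (cmath handles a negative discriminant, i.e. complex roots).
    a, b, c = -phi2, -phi1, 1.0
    disc = cmath.sqrt(b * b - 4 * a * c)
    roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
    return all(abs(z) > 1.0 for z in roots)

print(ar2_is_stationary(0.5, 0.0))   # AR(1), phi = 0.5  -> True
print(ar2_is_stationary(1.2, 0.0))   # AR(1), phi = 1.2  -> False
print(ar2_is_stationary(0.5, 0.3))   # stationary AR(2)  -> True
print(ar2_is_stationary(0.5, 0.6))   # a root inside the unit circle -> False
```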
However, many time series encountered in practice are nonstationary.
In order to fit a stationary model, it is necessary to remove the nonstation-
arity first. In many cases, this can be achieved by taking successive differences
of the observations one or more times. That is, in case of first differences,
Crowder, Hawkins, Reynolds and Yashchin (1997) share this view. They
argue that the cause of autocorrelation should be assessed before the data
is analyzed and interpreted. As argued in Chapter 1, we will consider
cases where autocorrelation in process data is unremovable and part of
the process. Control charts should not signal because of autocorrelation,
but give out-of-control signals because of the presence of special causes of
variation.
Consequently, the definition of an in-control process that is most com-
monly used in practice and Shewhart’s original definition do not necessarily
agree. In practice, the term ‘an in-control process’ is more often than
not associated with a sequence of independently and identically distributed
observations. As was discussed in Subsection 2.2.1, Shewhart’s original def-
inition of an in-control process only requires that we can predict (within
statistically determined limits) how the process may be expected to vary
in the future. A process that exhibits serial correlation is predictable. For
this reason we extend the definition of an in-control process to include ob-
servations which may be serially correlated. Alwan (1988) refers to such
processes as being ‘in control in a broader sense’.