Chapter · January 2011
DOI: 10.1007/978-3-642-04898-2_551

Statistical Quality Control
M. Ivette Gomes∗
Universidade de Lisboa, DEIO and CEAUL

1 Quality: a brief introduction


The main objective of statistical quality control (SQC) is to achieve quality in production and service orga-
nizations, through the use of adequate statistical techniques. The following survey relates to manufacturing
rather than to the service industry, but the principles of SQC can be successfully applied to either. For
an example of how SQC applies to a service environment, see Roberts (2005). The quality of a product can
be defined as its fitness for use (Montgomery, 2009), which is evaluated through the so-called quality charac-
teristics. In probabilistic language, these are random variables, and they are usually classified as: physical, like
length and weight; sensorial, like flavor and color; or temporally oriented, like the maintenance of a system.
Quality control (QC) has long been an activity of engineers and managers, who have felt the need to work jointly
with statisticians. Different quality characteristics are measured and compared with pre-determined spec-
ifications, the quality norms. QC began a long time ago, when manufacturing began and competition
accompanied it, with consumers comparing and choosing the most attractive product. The Industrial Rev-
olution, with its clear distinction between producer and consumer, led producers to develop
methods for the control of their manufactured products. SQC, on the other hand, is comparatively new,
and its greatest developments took place during the 20th century. In 1924, at the Bell Laboratories,
Shewhart developed the concept of the control chart and, more generally, of statistical process control (SPC), shift-
ing the attention from the product to the production process (Shewhart, 1931). Dodge and Romig (1959),
also at the Bell Laboratories, developed sampling inspection as an alternative to 100% inspection.
Among the pioneers of SPC we also distinguish W.E. Deming, J.M. Juran, P.B. Crosby and K. Ishikawa
(see further references in Juran and Gryna, 1993). But it was during the Second World War that SQC
gained generalized use and acceptance, being widely used in the USA and considered instrumental in the defeat
of Japan. In 1946, the American Society for Quality Control was founded, which gave a huge push
to the dissemination and improvement of SQC methods.
After the Second World War, Japan was confronted with scarce food and lodging, and its factories were in ruins.
The Japanese evaluated and corrected the causes of their defeat. Product quality was an area where the
USA had definitely surpassed Japan, and it was one of the items they tried to correct, becoming
rapidly masters in sampling inspection and SQC, and leaders in quality around 1970. More recently, quality
developments have also been devoted to the motivation of workers, a key element in the expansion of the
Japanese industry and economy.
Quality is increasingly the prime decision factor in consumer preferences, and quality is often pointed
out as the key factor for the success of organizations. The implementation of production QC clearly leads
to a reduction in manufacturing costs, and the money spent on control is almost irrelevant by comparison. At
the moment, quality improvement in all areas of an organization, a philosophy known as Total Quality
Management (TQM), is considered crucial (see Vardeman and Jobe, 1999). The challenges are obviously
difficult, but modern SQC methods surely provide a basis for a positive answer to them.
SQC is nowadays much more than a set of statistical instruments. It is a global way of thinking of the
workers in an organization, with the objective of making things right the first time. This is mainly
achieved through the systematic reduction of the variance of relevant quality characteristics.

∗ Research partially supported by FCT / OE, POCI 2010 and PTDC/FEDER.

2 Usual Statistical Techniques in SQC


The statistical techniques useful in SQC are quite diverse. In this survey, we shall briefly mention SPC,
an on-line technique for the control of the production process through the use of control charts. Acceptance sampling,
performed off the production line (before it, for sentencing incoming batches, and after it, for evaluating
the final product), is another important topic in SQC (see Duncan (1986) and Pandey (2007), among
others). A similar comment applies to reliability theory and reliability engineering, off-line techniques
performed when the product is complete, in order to assess the resistance to failure of a device or system
(see Pandey (2007), again among others).
It is however sensible to mention that, in addition to these techniques, there exist other statistical topics
useful in the improvement of a process. We mention a few examples: in a production line, we have the
input variables, the manufacturing process and the final product (output). It is thus necessary to model
the relationship between input and output. Among the statistical techniques useful in building
these models, we mention Regression and Time Series Analysis. The area of Experimental Design (see
Taguchi et al., 1989) has also proved to be powerful in the detection of the most relevant input variables.
Its adequate use enables a reduction of variance and the identification of the controllable variables that
allow the optimization of the production process.

Statistical Process Control (SPC). Key monitoring and investigating tools in SPC include his-
tograms, Pareto charts, cause and effect diagrams, scatter diagrams and control charts. We shall here
focus on control chart methodology.
A control chart is a popular statistical tool for monitoring and improving quality. Its success is based
on the idea that, no matter how well the process is designed, there exists a certain amount of natural
variability in the output measurements. When the variation in process quality is due to random causes alone,
the process is said to be in-control. If the process variation includes both random and special causes of
variation, the process is said to be out-of-control. The control chart is supposed to detect the presence of
special causes of variation.
Generally speaking, the main steps in the construction of a control chart, performed at a stable stage of
the process, are the following: determine the process parameter you want to monitor; choose a convenient
statistic, say W; and create a central line (CL), a lower control limit (LCL) and an upper control limit
(UCL). Then, sample the production process along time, and group the process measurements into rational
subgroups of size n, by time period t. For each rational subgroup, compute w_t, the observed value of
W_t, and plot it against time t. The majority of the measurements should fall in the so-called continuation
interval C = [LCL, UCL]. Data can be collected at fixed sampling intervals (FSI), of size
d, or alternatively at variable sampling intervals (VSI), usually with sampling intervals of sizes d_1 and d_2
(0 < d_1 < d_2). The region C is then split into two disjoint regions C_1 and C_2, with C_2 around CL. The
sampling interval d_1 is used as soon as a measurement falls in C_1; otherwise, the larger sampling
interval d_2 is used. If the measurements fall between LCL and UCL, no action is taken and the process is considered
to be in-control. A point w_t that exceeds the control limits signals an alarm, i.e., it indicates that the process
is out-of-control, and some action should be taken, ranging from taking a re-check sample to the tracing and
elimination of the special causes. Of course, there is a slight chance that it is a false alarm, the so-called α-risk.
The design of control charts is a compromise between the risks of not detecting real changes (β-risks) and of
false alarms (α-risks). Other relevant primary characteristics of a chart are the run length (RL), or number of samples to
signal (NSS), and its mean value, the average run length, ARL = E(RL) = 1/(1 − β), as well as the
capability indices C_p and C_pk (see Pearn and Kotz, 2006). Essentially, a control chart is a test, performed
along time t, of the hypothesis H_0: the process is in-control versus H_1: the process is out-of-control.
Stated differently, we use historical data to compute the initial control limits. Then the data are compared
against these initial limits. Points that fall outside of the limits are investigated and, perhaps, some will
later be discarded. If so, the limits need to be recomputed and the process repeated. This is referred to as
Phase I. Real-time process monitoring, using the limits from the end of Phase I, is Phase II. There thus
exists a strong link between control charts and hypothesis testing performed along time.
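A minimal sketch of this Phase I iteration, assuming 3-sigma limits centered at the grand mean of the retained subgroup averages. The function name, the subgroup means and the value of the standard deviation of the average are all hypothetical; real Phase I work would also investigate each discarded point before dropping it.

```python
import statistics

def phase1_limits(means, sigma_xbar, k=3.0):
    """Compute trial control limits from historical subgroup means,
    discard points outside them, and recompute until no point is
    discarded (the Phase I iteration described above)."""
    data = list(means)
    while True:
        center = statistics.mean(data)
        lcl, ucl = center - k * sigma_xbar, center + k * sigma_xbar
        kept = [m for m in data if lcl <= m <= ucl]
        if len(kept) == len(data):
            return lcl, center, ucl
        data = kept

means = [499.1, 500.8, 501.2, 498.7, 530.0, 500.3]   # one suspect subgroup
lcl, cl, ucl = phase1_limits(means, sigma_xbar=4.472)
print(lcl <= 530.0 <= ucl)   # False: the suspect subgroup was discarded
```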
Note that a preliminary statistical data analysis (usually through histograms and Q-Q plots) should be performed
on the previously collected data. A common assumption in SPC is that the quality characteristics are
normally distributed. However, this is not always the case and, in practice, if the data seem
very far from meeting this assumption, it is common to transform them through a Box-Cox transformation
(Box and Cox, 1964). But much more could be said about the case of non-normal data, like the use of
robust control charts (see Figueiredo and Gomes (2004), among others).
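For reference, the one-parameter Box-Cox transform has the simple closed form sketched below. In practice the exponent λ is estimated from the data, for example by maximum likelihood; the function name here is our own.

```python
import math

def box_cox(x, lam):
    """One-parameter Box-Cox transform of a positive observation x:
    (x**lam - 1)/lam for lam != 0, and log(x) in the limit lam -> 0
    (Box and Cox, 1964)."""
    if x <= 0:
        raise ValueError("the Box-Cox transform requires positive data")
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

print(box_cox(4.0, 2))   # (16 - 1)/2 = 7.5
print(box_cox(1.0, 0))   # log(1) = 0.0
```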
With its emphasis on the early detection and prevention of problems, SPC has a distinct advantage over quality
methods, such as inspection, which apply resources to detecting and correcting problems in the final product
or service. In addition to reducing waste, SPC can lead to a reduction in the time required to produce the
final products. SPC is recognized as a valuable tool from both a cost reduction and a customer satisfaction
standpoint. SPC indicates when an action should be taken in a process, but it also indicates when no
action should be taken.

Classical Shewhart control charts: a simple example. In charts of this type, the measurements are
assumed to be independent and normally distributed. Moreover, the statistics
W_t built upon those measurements are also assumed to be independent. The main idea underlying these
charts is to find a simple and convenient statistic, W, with a sampling distribution that is easy to derive under the
in-control state, so that we can easily construct a confidence interval for a location or spread
measure of that statistic. For continuous quality characteristics, the most common Shewhart charts are
the average chart (X̄-chart) and the range chart (R-chart), the latter an alternative to the standard-deviation
chart (S-chart). For discrete quality characteristics, the most usual charts are the p-charts and np-charts
in a Binomial(n, p) background, and the so-called c-charts and u-charts in Poisson(c) backgrounds.
Example 2.1 (X̄-chart). Imagine a breakfast cereal packaging line, designed to fill each cereal box with
500 grams of product. The production manager wants to monitor the mean weight of the boxes on-line, and
it is known that, for a single pack, an estimate of the weight standard deviation σ is 10 gm. Daily samples
of n = 5 packs are taken during a stable period of the process, the weights x_i, 1 ≤ i ≤ n, are recorded,
and their average, x̄ = (x_1 + · · · + x_n)/n, is computed. These averages are estimates of the process mean value µ,
the parameter to be monitored. The center line is CL = 500 gm (the target). If we assume that the data are
normally distributed, i.e., X ∼ N(µ = 500, σ = 10), the control limits can be determined on the basis that
X̄ ∼ N(µ = 500, σ/√n = 10/√5 = 4.472). In-control, it is thus expected that 100(1 − α)% of the average
weights fall between 500 + 4.472 ξ_{α/2} and 500 − 4.472 ξ_{α/2}, where ξ_{α/2} is the (α/2)-quantile of the standard
normal distribution. For an α-risk equal to 0.002 (a common value in the English literature), ξ_{α/2} = −3.09. The
American Standard is based on “3-sigma” control limits (corresponding to 0.27% of false alarms), while
the British Standard uses “3.09-sigma” limits (corresponding to 0.2% of false alarms). In this case, the
3-sigma control limits are LCL = 500 − 3 × 10/√5 = 486.584 and UCL = 500 + 3 × 10/√5 = 513.416.
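The limits of Example 2.1 are straightforward to reproduce:

```python
import math

# 3-sigma limits for the mean of n = 5 packs, with target 500 gm and
# single-pack standard deviation sigma = 10 gm, as in Example 2.1.
mu0, sigma, n, k = 500.0, 10.0, 5, 3.0
sigma_xbar = sigma / math.sqrt(n)     # standard deviation of the average
lcl = mu0 - k * sigma_xbar
ucl = mu0 + k * sigma_xbar
print(round(lcl, 3), round(ucl, 3))   # 486.584 513.416
```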

Other control charts. Shewhart-type charts are efficient in detecting medium to large shifts, but are
insensitive to small shifts. One attempt to increase the power of these charts is to add supplementary
stopping rules based on runs. The most popular stopping rules, supplementing the ordinary rule “one
point exceeds the control limits”, are: 2 out of 3 consecutive points fall outside the warning (2-sigma) limits; 4
out of 5 consecutive points fall beyond the 1-sigma limits; 8 consecutive points fall on one side of the center line.
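The three supplementary rules above can be checked mechanically on the standardized points. This is a simplified sketch under our own conventions: z holds each statistic's distance from the center line in sigma units, and, unlike the classical Western Electric rules, same-side constraints on the 2-of-3 and 4-of-5 rules are ignored.

```python
def runs_rule_signals(z):
    """Check the supplementary stopping rules on a list z of standardized
    points (distance from the center line in sigma units), most recent last."""
    signals = []
    if len(z) >= 3 and sum(abs(v) > 2 for v in z[-3:]) >= 2:
        signals.append("2 of 3 outside 2-sigma warning limits")
    if len(z) >= 5 and sum(abs(v) > 1 for v in z[-5:]) >= 4:
        signals.append("4 of 5 beyond 1-sigma limits")
    if len(z) >= 8 and (all(v > 0 for v in z[-8:]) or all(v < 0 for v in z[-8:])):
        signals.append("8 consecutive on one side of the center line")
    return signals

print(runs_rule_signals([0.2, 2.3, 1.1, 2.6]))
# ['2 of 3 outside 2-sigma warning limits']
```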

Another possible attempt is to consider some kind of dependency between the statistics computed at the
different sampling points. To control the mean value of a process at a target µ_0, one of the most common
control charts of this type is the cumulative sum (CUSUM) chart, with an associated control statistic given
by S_t := (x_1 − µ_0) + · · · + (x_t − µ_0) = S_{t−1} + (x_t − µ_0), t = 1, 2, · · · (S_0 = 0). Under the validity of H_0: X ∼ N(µ_0, σ),
we thus have a random walk with null mean value. It is also common to use the exponentially weighted
moving average (EWMA) statistic, given by Z_t := λ x_t + (1 − λ) Z_{t−1} = λ Σ_{j=0}^{t−1} (1 − λ)^j x_{t−j} + (1 − λ)^t Z_0,
t = 1, 2, . . . , Z_0 = x̄, 0 < λ < 1, where x̄ denotes the overall average of a small number of averages
collected a priori, when the process is considered stable and in-control. Note that it is also possible to
replace averages by individual observations (for details, see Montgomery 2009).
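Both recursions are one-liners to implement. A minimal sketch with made-up observations around a target of 500 (λ = 0.5 chosen only to keep the arithmetic exact):

```python
def cusum(xs, mu0):
    """CUSUM statistics S_t = S_{t-1} + (x_t - mu0), with S_0 = 0."""
    s, out = 0.0, []
    for x in xs:
        s += x - mu0
        out.append(s)
    return out

def ewma(xs, lam, z0):
    """EWMA statistics Z_t = lam*x_t + (1 - lam)*Z_{t-1}, with Z_0 = z0."""
    z, out = z0, []
    for x in xs:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

print(cusum([501.0, 499.0, 503.0], mu0=500.0))   # [1.0, 0.0, 3.0]
print(ewma([510.0, 510.0], lam=0.5, z0=500.0))   # [505.0, 507.5]
```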

3 ISO 9000, Management and Quality


The main objective of this survey was to describe statistical instruments useful in the improvement of
quality. But these instruments are only a small part of the total effort needed to achieve quality. Nowadays,
essentially due to an initiative of the International Organization for Standardization (ISO), founded in
1946, all organizations are pushed towards quality. In 1987, ISO published the ISO 9000 series, with
general norms for quality management and quality assurance, and additional norms were established later
on diversified topics. The ISO 9000 norms provide a guide for producers who want to implement efficient
quality programs. They can also be used by consumers, in order to evaluate the producers' quality. In the past,
producers were motivated to establish quality by the increasing satisfaction of consumers.
Nowadays, most of them are motivated by ISO 9000 certification: if they do not have it, they
will lose potential clients.
Regarding management and quality: since managers have final control over all of an organization's resources, man-
agement has the ultimate responsibility for the quality of all products. Management should thus establish a
quality policy and make it perfectly clear to all workers (see Burrill and Ledolter, 1999, for details).

References
[1] Burrill, C.W. and Ledolter, J. (1999). Achieving Quality through Continual Improvement. John Wiley
& Sons.

[2] Box, G.E.P. and Cox, D.R. (1964). An analysis of transformations. J. Royal Statist. Society B26,
211-252.
[3] Dodge, H.F. and Romig, H.G. (1959). Sampling Inspection Tables, Single and Double Sampling, 2nd
edition. John Wiley & Sons.
[4] Duncan, A.J. (1986). Quality Control and Industrial Statistics, 5th edition. Irwin, Homewood.
[5] Figueiredo, F. and Gomes, M.I. (2004). The total median in Statistical Quality Control. Applied
Stochastic Models in Business and Industry 20:4, 339-353.
[6] Juran, J.M. and Gryna, F.M. (1993). Quality Planning and Analysis. McGraw-Hill.
[7] Montgomery, D.C. (2009). Statistical Quality Control: a Modern Introduction, 6th edition. John Wiley
& Sons.
[8] Pandey, B.N. (2007). Statistical Techniques in Life-testing, Reliability, Sampling Theory and Quality
Control. Narosa Publishers.
[9] Pearn, W.L. and Kotz, S. (2006). Encyclopedia and Handbook of Process Capability Indices: A Com-
prehensive Exposition of Quality Control Measures. World Scientific Publishing.
[10] Roberts, L. (2005). SPC for Right-Brain Thinkers: Process Control for Non-Statisticians. Quality
Press, Milwaukee.
[11] Shewhart, W.A. (1931). Economic Control of Quality of Manufactured Product. Van Nostrand,
New York.
[12] Taguchi, G., Elsayed, E. and Hsiang, T. (1989). Quality Engineering in Production Systems. McGraw-
Hill.
[13] Vardeman, S. and Jobe, J.M. (1999). Statistical Quality Assurance Methods for Engineers. John Wiley
& Sons.
