
My Journey with Control Charts (SPC).

A view from the shop floor

Introduction

I should like to start by saying that I value control charts, I use them quite often, and I
have done so for very many years. Provided they are used correctly, they can set
baselines, identify changes, and provide very valuable information for further process
actions. Secondly, as a working engineer rather than an academic or statistician, I
describe my views as “pragmatic”, meaning that I am interested in any methodology that
helps my customers and myself achieve our goals. After thirty-five-plus years working in
the automotive, marine, nuclear, and aerospace sectors, I have seen control chart
methodology used both very well and very badly, in pursuit of all things to all people. In
the past, organisations have implemented SPC activities based on either a real or a
perceived need to satisfy some customer or industry standard, and often the end customer
is not clear on what they want from SPC. Some believe that having wall-to-wall control
charts is bound to improve product quality. The purpose of this paper is to discuss what I
see as the advantages and limitations of control charts, starting with the key planning
questions which should be asked before committing a lot of time, energy, and hence cost
to their implementation. This paper is based solely on my own industry experience.

To begin, the primary purpose of the control chart is to identify variation and to distinguish
between “common cause” and “special cause” variation. There are many types of control
chart, but they fall into two basic groups: attribute or discrete data charts (e.g. rejects per
unit of product, volume, or time), and the more common variables control charts (X-bar
and R, XmR, etc.). After deciding which group your process data sits under, the next
question to ask is whether the “time series” of the process under study has some
meaningful aspect. This is often misunderstood, and as a result charts can be confusing. If
you were to use a control chart to plot a process in which the response over time was
non-random or cyclic, say the position of the sun by hour, then “cyclic special cause”
would become the norm, and process capability could never be calculated. Hopefully, one
would observe that this “special cause” was due to the non-random, cyclic nature of the
sun’s position relative to time, but to understand this, some knowledge of physics and
astronomy is required. Not all special causes are so easily identified. The next
considerations are subgroup (or sample) size, the methods and frequency of
measurement, and, most importantly, the response mechanism if something goes out of
control in real time. There is no point in identifying a special cause without the knowledge
and the resources to take immediate corrective action. This is one area where I have seen
SPC fail. I will now discuss some other early considerations which have previously
confused some of my clients and customers.

Enumerative (experimental) and Analytical (observational) Data Analysis

I have read that statisticians and others distinguish between data sets as either
“Enumerative” or “Analytical”, and see them as two totally different types of data. This may
be theoretically true, but in my work the difference is not important, as the same data set
may be studied by both analytical and enumerative methods. This is common in “design of
experiments” (DOE) activities. Here, the output of the analytical study can often become
the input of the enumerative study, and hence results can be quantified by differing
methodologies, as well as helping to identify process optimisation relationships. It should
be noted here that, as an observational tool, control charts alone cannot identify the root
cause and effects of variation (as my position-of-the-sun example demonstrated). They
may provide clues by monitoring the variation in a time series; however, knowledge of the
cause and effect relationship is required for process optimisation, including what unknown
variables or interactions could affect the output variable being monitored in the first place!

Distributions, the Central Limit Theorem (CLT), and the Water Analogy

In a previous paper, I read a water analogy. It described a lake of unknown depth and
shape as being like a fixed universe, subject to “enumerative” statistics, while flowing river
water was described as “analytical”, as it is constantly changing over time. If the
characteristic under study was river water flow rate, you could certainly use a variables
control chart to measure and monitor it. You would be able to observe the day-to-day
variation and compare it both over time and with the water flow in the lake, but again you
may need other tools to determine the root cause of the variation in water flow. Consider
next if the characteristic under study was contamination level, and you monitored both
lake and river. If the change in contamination was geographical rather than time related,
the control chart would be misleading if your sampling strategy was non-random or not
representative of all locations. The difference between the size and shape of the universe
(deep lake with slow flow, or shallow river with fast flow) may be irrelevant to
contamination. As long as the sample locations are random and representative, the size,
shape, or depth of the water (universe) “in itself” does not matter, because the “central
limit theorem” tells us that the averages of independent random samples taken from
almost any universe will tend towards a “normal distribution” as the sample size grows.
This is why many statistical methodologies work in the first place, and, as we are advised
in the AIAG SPC reference manual, it is an assumption when interpreting process
capability from the statistical process control chart (1). I must admit, however, that
historically I almost never checked my control chart data for normality, prior to the arrival
of statistical software.
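
To make the point concrete, here is a minimal sketch (plain Python with NumPy; the
skewed “universe” and all names are my own invention) showing subgroup means drawn
from a decidedly non-normal universe tending towards normality as the subgroup size
grows:

```python
# Minimal central limit theorem sketch: subgroup means drawn from a heavily
# skewed universe become more symmetric (more normal) as subgroup size grows.
import numpy as np

rng = np.random.default_rng(1)
universe = rng.exponential(scale=10.0, size=100_000)  # heavily skewed "universe"

for n in (1, 5, 30):
    # Draw 2000 random subgroups of size n and take their means.
    means = rng.choice(universe, size=(2000, n)).mean(axis=1)
    # Sample skewness: ~2 for the raw exponential data, shrinking towards 0
    # (the skewness of a normal distribution) as n increases.
    skew = np.mean((means - means.mean()) ** 3) / means.std() ** 3
    print(f"subgroup size {n:2d}: skewness of subgroup means = {skew:+.2f}")
```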

The Rational Subgroup

Consider my earlier example of the position of the sun in the sky. Let us assume nice
sunny days in June. I could measure the position over three days at 9, 10, and 11 AM,
noon, and 1, 2, and 3 PM, and plot the results on an individuals and moving range chart. I
would see the cyclic nature of the sun’s position and assign a cyclic (non-random) special
cause. However, if I were to take three readings at noon every day for 20 days in June,
and plot them on an X-bar and range chart, I would see something quite different. It is not
that one choice of subgroup methodology is wrong; rather, it is a case of understanding
what I am asking from the data in terms of its time series. Remember also that a “flat line”
does not mean the absence of variation, only that the resolution of the measurement
system may be too coarse to detect it.
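
For the noon-readings case, a minimal sketch of the X-bar and R arithmetic follows (plain
Python; the readings are invented, and A2 = 1.023, D3 = 0, D4 = 2.574 are the standard
Shewhart constants for subgroups of three):

```python
# Minimal X-bar and R control limit sketch for 20 subgroups of size 3.
# Shewhart constants for n = 3: A2 = 1.023, D3 = 0, D4 = 2.574.
import numpy as np

rng = np.random.default_rng(7)
# 20 subgroups x 3 readings: hypothetical noon sun-elevation readings (degrees).
data = rng.normal(loc=60.0, scale=0.5, size=(20, 3))

A2, D3, D4 = 1.023, 0.0, 2.574
xbar = data.mean(axis=1)                     # subgroup means
ranges = data.max(axis=1) - data.min(axis=1)  # subgroup ranges

xbarbar, rbar = xbar.mean(), ranges.mean()
print(f"X-bar chart: LCL={xbarbar - A2 * rbar:.2f}  "
      f"CL={xbarbar:.2f}  UCL={xbarbar + A2 * rbar:.2f}")
print(f"R chart:     LCL={D3 * rbar:.2f}  CL={rbar:.2f}  UCL={D4 * rbar:.2f}")

# Flag any subgroup mean outside the X-bar limits (a "special cause" signal).
out = (xbar > xbarbar + A2 * rbar) | (xbar < xbarbar - A2 * rbar)
print("out-of-control subgroups:", np.flatnonzero(out) + 1)
```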

Calculating Process Capability (and the assumption of normality)

If the initial purpose of the control chart is to determine whether the process is in statistical
control (no points outside the control limits, no non-random patterns, etc.), then the
secondary purpose is to calculate process capability by comparing the location and
spread of the variation to the engineering specification limits. This is done by converting
standard deviation units into area under the bell curve of the normal distribution. I should
state here that in my experience, everything which is in “statistical control” meets the
normality test. I have never observed data in “statistical control” failing the normality test
(see the Appendix). This makes me think the assumption should not need testing (even
though my AIAG book suggests otherwise). I can see a possible need for normality testing
in some applications of experimental statistics, just not for control charts. In any case,
guidance on this is given in Appendix F of the AIAG manual, referenced and quoted below:

“To continue, an interpretation for process capability will be discussed, under the following
assumptions:
- The process is statistically stable (i.e. only common cause variation evident);
- The individual measurements from the process conform to the normal distribution;
- The engineering and other specifications accurately represent customer needs;
- The design target is in the centre of the specification width;
- Measurement variation is relatively small” (1).

“If it is not known whether the distribution is normal, a test for normality should be made
such as reviewing a histogram, plotting on normal probability paper, or using more precise
methods” (2).

“Z values can be used with a table of the standard normal distribution to estimate the
proportion of output that will be beyond any specification, an approximate value, assuming
that the process is in statistical control and is normally distributed” (3).

APPENDIX F Standard Normal Distribution

“P_z = the proportion of process output beyond a particular value of interest (such as a
specification limit) that is z standard deviation units away from the process average (for a
process that is in statistical control and is normally distributed). For example, if z = 2.17,
P_z = .0150 or 1.5%. In any actual situation, this proportion is only approximate” (4).
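
As a worked illustration of the quoted Z-value arithmetic, here is a minimal sketch (plain
Python; the specification limits are invented, while the X-double-bar and R-bar values are
borrowed from my appendix example) converting z into a tail proportion and into the usual
capability indices:

```python
# Sketch: proportion of output beyond z standard deviations (standard normal
# tail), plus the usual capability indices. Specification limits are invented.
from math import erfc, sqrt

def tail_beyond(z: float) -> float:
    """P(Z > z) for the standard normal: 1 - Phi(z) = 0.5 * erfc(z / sqrt(2))."""
    return 0.5 * erfc(z / sqrt(2))

print(f"z = 2.17 -> P_z = {tail_beyond(2.17):.4f}")  # ~0.0150, i.e. 1.5%

# Capability from control chart statistics: sigma estimated as R-bar / d2
# (d2 = 1.693 for subgroups of 3); X-double-bar and R-bar from the appendix.
xbarbar, rbar, d2 = 35.62, 5.75, 1.693
usl, lsl = 45.0, 25.0  # invented specification limits
sigma = rbar / d2
cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - xbarbar, xbarbar - lsl) / (3 * sigma)
print(f"sigma-hat = {sigma:.2f}, Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```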

Control Charts in the Study of Measurement Systems Analysis

Having identified our process for study, say the water flow example, and having identified
our rational subgroup (or sample size), frequency, and location of measurements, we now
need to consider our measurement process. Like the variation in our river water flow, we
also have variation in our measurement system, which consists of the within-instrument
variation (EV, or repeatability) and the variation between the people or methods of
measurement (AV, or reproducibility). We need to ensure that our combined repeatability
and reproducibility are small compared to either the variation in the water flow (% R&R to
process variation) or the allowable limits of the water flow (% R&R to tolerance). There are
two methodologies for investigating measurement systems. The most common involves a
small number of parts measured repeatedly by different appraisers or instruments, with the
results evaluated via the “Average and Range” or “ANOVA” methodology (5 & 6). The
second methodology is known as “EMP”, or “evaluating the measurement process” (7).
The EMP methodology makes much wider use of the control chart to determine both
repeatability and reproducibility, and uses a different mathematical model for calculating
and defining acceptability. At the time of writing, this methodology is not mandated in
AS13003 (although it is referenced); however, it does make very good use of control chart
methodology for evaluating measurement variation, including a closer evaluation of
repeatability. In the longer term, this method may prove superior to the AIAG methodology.
It is strongly advisable that the MSA study be carried out before data are gathered from
the process under study, with due care given to the resolution of the instrument. It would
be most unsatisfactory to find the results of water flow or contamination levels made
useless by excessive variation (either special or common cause), or by a lack of
resolution, in our measurement system.
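
As a rough sketch of the two acceptance ratios just mentioned (plain Python; the EV, AV,
and tolerance figures are invented, and combining EV and AV in quadrature follows the
usual gauge R&R arithmetic):

```python
# Sketch: combining repeatability (EV) and reproducibility (AV) into a gauge
# R&R figure, then expressing it against process variation and tolerance.
# All numbers are invented for illustration.
from math import sqrt

ev = 0.8   # repeatability (within-instrument), as a standard deviation
av = 0.5   # reproducibility (between appraisers/methods), as a standard deviation
tv = 4.0   # total (process) variation, as a standard deviation
usl, lsl = 50.0, 30.0  # allowable limits for the characteristic

grr = sqrt(ev**2 + av**2)                # combined R&R standard deviation
pct_process = 100 * grr / tv             # % R&R to process variation
pct_tol = 100 * (6 * grr) / (usl - lsl)  # % R&R to tolerance (6-sigma spread)

print(f"GRR = {grr:.2f}")
print(f"% R&R to process variation = {pct_process:.1f}%")
print(f"% R&R to tolerance = {pct_tol:.1f}%")
```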

Control Charts for Cause and Effect Analysis & Process Optimisation

In my experience, the control chart is used to best effect alongside other methodologies. It
forms one of the “seven basic tools” of quality, as shown in Fig I below. One should
consider the order in which the seven tools are applied; often not all are required. I would
always advise the use of control charts where the conditions outlined in my introduction
have been met. In terms of cause and effect analysis, the fishbone (or Ishikawa) diagram
has been the traditional basic tool, but it does require historical information on the process.
The control chart can be an input to cause and effect analysis, but again it requires prior
knowledge of the process in order to assign which variables affect the responses and so
make improvements. While the control chart can support a basic “one factor at a time”
analysis, this is much less efficient than the orthogonal matrix used in DOE; furthermore,
in DOE the interactions and process optimisation parameters can quickly be identified via
ANOVA. This is why designed experiments are recommended for investigating excessive
variation (8 & 9).
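
To illustrate why the orthogonal matrix is so efficient, here is a minimal sketch (plain
Python; the factors and responses are invented) of a 2x2 factorial in which both main
effects and the interaction fall out of simple contrasts on only four runs:

```python
# Sketch of a 2x2 factorial "orthogonal matrix": main effects and the
# interaction fall out of simple contrasts. Responses are invented.
import numpy as np

# Coded design matrix: columns are factor A, factor B (low = -1, high = +1).
A = np.array([-1, +1, -1, +1])
B = np.array([-1, -1, +1, +1])
y = np.array([12.0, 18.0, 13.0, 25.0])  # invented responses

effect_A = y[A == +1].mean() - y[A == -1].mean()
effect_B = y[B == +1].mean() - y[B == -1].mean()
effect_AB = y[A * B == +1].mean() - y[A * B == -1].mean()  # interaction contrast

print(f"A effect = {effect_A:.1f}, B effect = {effect_B:.1f}, "
      f"AB interaction = {effect_AB:.1f}")
```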

Figure I, The seven basic tools of Quality

Control Charts within the DMAIC Process

The DMAIC process has come in for much criticism in recent years, some justified but
much of it unfair. True, six sigma “consultants” have popped up everywhere, creating a
cottage industry; like some lawyers and accountants, some consultants love complexity
rather than keeping things as simple as possible! In my previous employment with very
large multinationals, I was obliged to attend training in “Six Sigma” and to complete a
work-based project. I found the DMAIC/DMADV methodology very useful for scoping and
managing my projects. While the tools utilised (including control charts) are not new, I
found the DMAIC process helpful in planning the scope of work required and identifying
the appropriate tools and resources I needed. Some doubt the effectiveness of the results
of such projects, but I have experienced huge and independently verified improvements
from them first hand. This is why major multinationals worldwide (including the ones I
worked for) continue with the DMAIC methodologies.

Summary

As I stated in my introduction, I am a manufacturing engineer with no fixed agenda. My
views are based on my own observations and my personal experience. Control charts
certainly have a major role in variation reduction and process improvement, and it is true
that they are often underutilised. Although I accept the concept of normality in general (say
in DOE), I find it a little confusing in relation to control charts. In my own experience (see
the Appendix), data in statistical control has always passed the normality test (even if not
all data that passes a normality test exhibits statistical control). If so, this suggests a
clarification in the AIAG manual may be advisable. Finally, like all other statistical tools,
control charts are no magic bullet, and careful planning is required to ensure that they are
effective. When applied poorly, they rightly attract the criticism which I have previously
observed. Also, when working on new design projects, one often has no observational
time series data, as the process may not yet exist. Here, one has to utilise both historical
and experimental data and simulations beyond the scope of this paper.

References

1. AIAG SPC Reference Manual, rev 2, page 57, paragraph 2.
2. AIAG SPC Reference Manual, rev 2, page 57, paragraph 5.
3. AIAG SPC Reference Manual, rev 2, page 59, paragraph 7.
4. AIAG SPC Reference Manual, rev 2, Appendix F, page 147.
5. AIAG MSA Reference Manual.
6. Aerospace Standard AS13003.
7. “Evaluating the Measurement Process III - Using Imperfect Data” by Donald Wheeler.
8. AIAG SPC Reference Manual, rev 2, Appendix G, page 149.
9. Aerospace Standard AS13003, pages 8 & 31.

Denis Sexton is a Chartered Mechanical Engineer (IMechE & ECUK) with senior industrial quality and manufacturing
experience in multiple industries, including aerospace, automotive, and defence. He has held Chief Test Engineer and
Principal Manufacturing Engineer positions focusing on NPI for automotive, defence, and energy applications. Denis designed
an Australian university postgraduate programme in Metrology & Quality. He is also an ASQ Certified Quality Engineer and
an ASQ Six Sigma Black Belt, and holds a Rolls-Royce Green Belt for workplace-based projects.

APPENDIX

TESTING THE NORMALITY ASSUMPTION WITH CONTROL CHART DATA VIA THE ANDERSON-DARLING TEST

[Figure: Xbar-R Chart of T1, ..., T3. X-bar chart: UCL = 40.69, X-double-bar = 33.73, LCL = 26.78; several subgroup
means are flagged out of control. R chart: UCL = 17.50, R-bar = 6.8, LCL = 0. 20 subgroups.]
The study starts with a dummy X-bar and R chart (three samples per subgroup), with the X-bar chart out of
control. The 20 subgroup means are then checked for normality using the Anderson-Darling test. The resulting
p-value (0.019) indicates a non-normal distribution. Normality, like statistical control, is a requirement
according to the AIAG SPC manual; in my experience, it is almost never tested in real life on the shop floor.
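
For anyone wishing to repeat the check, a minimal sketch using SciPy’s Anderson-Darling routine follows (the
subgroup data are invented stand-ins for the charted values; note that scipy.stats.anderson reports a statistic
and critical values rather than the exact p-value my statistical package printed):

```python
# Sketch: Anderson-Darling normality check on 20 subgroup means.
# The data below are invented stand-ins for the charted T1..T3 readings;
# scipy.stats.anderson returns a statistic plus critical values, not a p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
subgroups = rng.normal(loc=33.7, scale=4.0, size=(20, 3))  # 20 subgroups of 3
means = subgroups.mean(axis=1)

result = stats.anderson(means, dist='norm')
# Critical values correspond to significance levels [15, 10, 5, 2.5, 1] %.
crit_5 = result.critical_values[2]  # 5% level
verdict = ("reject normality" if result.statistic > crit_5
           else "cannot reject normality")
print(f"A^2 = {result.statistic:.3f}, 5% critical value = {crit_5:.3f}: {verdict}")
```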

[Figure: Xbar-R Chart of Rv1, ..., Rv3. X-bar chart: UCL = 41.50, X-double-bar = 35.62, LCL = 29.73; no points
out of control. R chart: UCL = 14.80, R-bar = 5.75, LCL = 0. 20 subgroups.]
The X-bar and R chart is then manipulated back into statistical control. Note that the mean has only changed
from 33.73 to 35.62, and the average range has changed slightly from 6.8 to 5.75. The standard deviation has
reduced from 7.01 to 3.37. Again checking the 20 subgroup means for normality using the Anderson-Darling
test, the resulting p-value (0.091) suggests that the normality assumption can now be accepted at the 5%
significance level. Visually, the variation around the normality fit line has not changed greatly.

This is one of several tests which suggest to me that, if my control chart is in “statistical control”, normality
testing may not be required. I am not sure whether it is always the case that data in statistical control meets
the normality requirement, or whether special causes always produce non-normality. Further research, or
assistance from others who have carried out more extensive research than me, may be required to confirm this.
