
Quality Engineering, 24:171–183, 2012
Copyright © Taylor & Francis Group, LLC
ISSN: 0898-2112 print/1532-4222 online
DOI: 10.1080/08982112.2012.654324

Statistical Engineering: Six Decades of Improved Process and Systems Performance

Richard D. Shainin


Shainin Problem Solving and Prevention, Northville, Michigan

ABSTRACT Hoerl and Snee (2010) have proposed to formalize a discipline called statistical engineering to fill the gap between statistical thinking and statistical tools. This article describes the existing Shainin-defined statistical engineering discipline and evaluates its effectiveness based on the standards proposed by Snee and Hoerl. The methods are illustrated with an investigation of a performance problem in a complex electromechanical system.

KEYWORDS problem solving and problem prevention, Quincunx paradigm, Red X® paradigm

INTRODUCTION

Editor's Note: This article presents the Shainin definition of statistical engineering, which differs from the Hoerl and Snee definition and is primarily an engineering approach and discipline using statistics as a tool (see the article's Research and Development section). Given its historically established use of the term, we thought that it would be helpful for the readers to see these definitions and methods compared. –Christine M. Anderson-Cook and Lu Lu.
Address correspondence to Richard D.
Shainin, Shainin Problem Solving and
Prevention, 41820 Six Mile Rd., Suite
103, Northville, MI 48168, USA.
E-mail: [email protected]

Hoerl and Snee (2010) have called for the establishment of a new formal
discipline called statistical engineering.1 They see this discipline as a formalization of what many practicing applied statisticians have been doing for
years as they bridge the gap between statistical thinking and statistical methods. When successfully developed, they believe that statistical engineering
will ensure that statistical projects have high impact; integrate concepts of
statistical thinking with statistical methods and tools; and provide statisticians an opportunity to become true leaders in their organizations. They
have further proposed standards that should apply to successful statistical
engineering projects.
There is a small complication; the discipline that Hoerl and Snee (2010)
are proposing already exists.2 It has been evolving for more than six decades. Not only does the existing discipline share the same name but it meets
the goals and standards they suggest.
Statistical engineering as defined by R. D. Shainin (1993) is a rigorous
discipline for performance improvement in manufacturing, engineering,
and business processes. Statistical engineers solve and prevent problems
1. Churchill Eisenhart established a statistical engineering laboratory within the Bureau of Standards in 1947. According to the NIST Website, much of the organization's current focus is on the application of statistical methods to metrology.
2. Dorian Shainin taught courses at the University of Connecticut from 1950 through 1983 in statistical engineering. He received the E. L. Grant Medal in 1981 in recognition of that program.


at all stages of the product and process life cycle, from development to field performance, including quality, reliability, and productivity.
There are thousands of trained statistical engineers
working in a wide range of industries in Asia, Australia,
Europe, North America, and South America. They are
improving material properties in paper, plastics, ceramics, composites, steel, aluminum, iron, glass, foam,
and advanced materials. They are improving process
performance in casting, forging, stamping, painting,
molding, sintering, assembly, machining, forming,
logistics, health care, insurance, and banking.
Statistical engineers, following the discipline
developed by Dorian Shainin, are improving the
quality and reliability of medical devices, automatic
milking machines, trains, planes, and automobiles.
They are improving the performance and reliability
of computers, printers, color televisions, jet engines,
engine control modules, automatic teller machines,
fuel cells, radars, turbines, diesel engines, and gasoline engines, including lowering emissions and improving fuel economy.
Though statistical engineering started with a focus
on assuring product quality and reliability through
problem solving and prevention, the focus has broadened to include business process and software
problem solving, as well as productivity improvement in manufacturing, product development, and
business processes. Successful business process
improvements have resulted in significant savings in
the time required to build prototype vehicles; reductions in expedited shipping costs; improved customer
satisfaction and efficiency for emergency room visits;
and the flawless launch of a replacement information technology (IT) system after executives realized
that the planned replacement was going to fail.
Statistical engineering's roots go back to the 1940s
when Dorian Shainin and Joseph Juran, along with
quality pioneer Leonard Seder, recognized that
Juran's Pareto principle applied to the causes of variation as well as to various business metrics. This led
to an approach to solving variation problems that
was different from both traditional engineering and
statistical approaches.
This strong connection between the newly formalized discipline proposed by Snee and Hoerl and the existing profession raises an interesting question: How
well does the existing discipline meet the goals and
standards suggested by Hoerl and Snee?

This article addresses the fundamental differences in worldviews that have caused some statisticians to dismiss Shainin's statistical engineering. It explores
the differences in strategic approaches to improvement, based on these worldview differences. It then
compares statistical engineering projects to the standards proposed by Drs. Snee and Hoerl.

STATISTICAL PERSPECTIVES
Hoerl and Snee (2010) wrote: "The term statistical engineering has been used before, perhaps most notably by consultant Dorian Shainin, who generally used it to indicate the application of statistical approaches that were ad hoc (but generally worked) rather than based on formal statistical theory" (p. 52).
The American Society for Quality's (ASQ) Website (http://asq.org/about-asq/who-we-are/bio_shainin.html, accessed 27 December 2011) notes: "Shainin developed a discipline called statistical engineering. He specialized in creating strategies to enable engineers to talk to the parts and solve unsolvable problems. The discipline has been used successfully for product development, quality improvement, analytical problem solving, manufacturing cost reduction, product reliability, product liability prevention, and research and development."
Shainin's methods are in fact based on sound
statistical theory and are absolutely rigorous. Just
prior to Dorian Shainins retirement, Shainin Problem
Solving and Prevention engaged the services of Carl
Bennett, a distinguished statistician, to ensure that
the methods remained statistically sound.
Steiner et al. (2008) wrote: "The Shainin System, as reflected by the genesis of the methodology in manufacturing, is best suited for medium to high volume production. Most of the tools implicitly assume that many parts are available for study" (p. 18).
Bovenzi et al. (2010) described the use of statistical engineering in risk reducing the development
of a new ammonia sensor for diesel engine applications. Abrahamian et al. (2010) described a Shainin
system for ensuring product reliability. Both of
these papers develop profound knowledge from
extremely small sample sizes during the development process. Lloyd Nelson (1989) noted, "The Shainin approach thrives on the tiniest of sample sizes, plus of course, Mr. Shainin's immense talent" (p. 78).

Balestracci (2008) wrote: "In informal discussions with similarly degreed statistical colleagues, I have found very little respect for his (Dorian Shainin's) work and noticed very little discussion of his methods in the Statistics Division of ASQ" (p. 31).
In a discussion paper regarding Steiner et al.
(2008), Montgomery (2008) wrote:


There has always been a certain amount of confusion (indeed, even mystery) surrounding these ideas, partly because the developers never submitted the approach and some of its unique tools and perspective to peer review. . . . I am not convinced that their [Shainin's] techniques and system of implementing them would be effective in anything but the simplest sort of low-technology manufacturing processes. Using Shainin in high-technology manufacturing, manufacturing biotechnical products and chemical and process settings seems unwise and unworkable. (p. 36)

The ASQ's Website (http://asq.org/about-asq/who-we-are/bio_shainin.html, accessed 27 December 2011) notes: "Shainin wrote more than 100 articles and was the author or coauthor of several books, including Managing Manpower in the Industrial Environment; Tool Engineers Handbook; Quality Control Handbook; New Decision-Making Tools for Managers; Quality Control for Plastics Engineers; Manufacturing, Planning, and Estimating Handbook; and Statistics in Action." R. D. Shainin
(1993), P. D. Shainin et al. (1997), and Dao and
Maxson (2009) are just a few of the published
articles that address the approach and some specific
techniques.
Statistical engineering, as developed by Shainin,
has solved numerous difficult problems in high-tech
manufacturing for batch, continuous, and discrete
part production. For a skilled statistical engineer,
these are the relatively easy problems. Deland and
Meyer (1990) and R. D. Shainin (2008b) provide
two examples of finding Red X® (Red X Holdings LLC, Anacortes, WA, USA) interactions in complex
manufacturing involving batch and continuous processing in chemistry and biology.

DIFFERENT WORLDVIEWS
Different worldviews lead to different problem-solving approaches.
Some problem solvers address problems by figuring out what is wrong. They see the problem from
the perspective of a mechanic. They study symptoms

until they can conceive the cause. They then propose a design change to either the product or the process
that they believe will fix the problem. Because they
can usually envision a number of potential causes,
they often propose several changes. Their fundamental perspective: the presence of problems must
mean there is a flaw in the system, either in the process or the design.
Other problem solvers address problems by creating a model of the system. They recognize that
the value of any process or product output (Y) is a
function of a series of inputs (Xs). They ask subject-matter experts to brainstorm potential causes and
then run a series of designed experiments to find
as many statistically significant cause–effect relationships as they can. The more complete the model, the
better. Once the model is complete, the subject-matter experts can consider which inputs they want
to control (Snee 2009). Their fundamental perspective: Y = f(x), and the key to optimizing a system is
to discover the important inputs.
Statistical engineers, following the discipline
developed by Shainin, solve problems by finding
what is different. They combine statistical thinking
with engineering insight to converge on the cause–effect relationship that is driving most of the difference between the extremes of a population. Once
this cause–effect relationship is thoroughly understood, statistical engineers consider alternative solutions that control the relationship and reduce
variation. Their fundamental perspective: σ²_Y = f(σ²_x)
rather than Y = f(x). The key to solving variation
problems is to find the Red X by talking to the
parts3 (R. D. Shainin 1993). They think Y → X and
never speculate about possible sources of variation.
Y → X thinking requires the kind of backwards
reasoning described by Sir Arthur Conan Doyle in
the first Sherlock Holmes story (1887). In the case
of statistical engineering it means a progressive
search using a process of elimination to converge
on the true root cause of unwanted variation (R. D.
Shainin 1993).
Statistical engineers are guided by a different view
of the nature of variation: the Red X paradigm.
3. Talking to the parts is a foundational principle in statistical engineering. It means generating clues based on patterns of change for an insightful response (Y). Clue generation is more aggressive than observational studies and requires both statistical thinking and engineering insight.


THE QUINCUNX PARADIGM


In his ground-breaking book, Economic Control of Quality of Manufactured Product (1931), Walter
Shewhart expressed the belief that once assignable
causes have been found and controlled, the remaining variation is created by a number of causes of
approximately equal magnitude that cannot be discovered. He expressed the hope that this state of
equilibrium for a stable system or process could be
identified with a control chart. Brian Joiner (1994)
provided an excellent description of the underlying
belief system for this paradigm:
In a common cause system there's no such thing as THE cause. It's just a bunch of little things that add up one way one day and another way the next. So we don't learn much by trying to find differences between high points and low points when only common causes are at work. To change common cause variation, we'll need to reconfigure the system somehow. . . . Most problems arise from common causes. (p. 136)

Dr. Joiner uses a quincunx as the model for common cause variation. A quincunx is a mechanical
device invented by Francis Galton that demonstrates
the approximation of the binomial distribution to the
normal distribution when p = q = 0.5. Beads are dropped over an array of equally
spaced offset pins. As each bead hits the first pin, it
must bounce either left or right. At the next row it
again must go either left or right. Each row is a
source of variation, and the additive nature of several
independent sources of variation, where each source
makes an equal contribution, matches a normal distribution. The quincunx illustrated in Dr. Joiner's
Fourth Generation Management has 10 rows.
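For readers who want to see the mechanism, the quincunx is easy to simulate. The following minimal sketch (the bead count and print scaling are arbitrary choices; the 10 rows match the device described above) counts right-hand bounces for each bead, so the bin totals follow a Binomial(10, 0.5) distribution and pile up in the familiar bell shape.

```python
import random
from collections import Counter

def drop_beads(n_beads=2000, n_rows=10, seed=1):
    """Simulate a quincunx: each bead bounces left or right at every row.

    The final bin index is the number of right-hand bounces, so bin counts
    follow a Binomial(n_rows, 0.5) distribution -- approximately normal
    when many rows contribute equally.
    """
    rng = random.Random(seed)
    return Counter(
        sum(rng.random() < 0.5 for _ in range(n_rows)) for _ in range(n_beads)
    )

if __name__ == "__main__":
    bins = drop_beads()
    for k in range(11):  # bins 0..10 for a 10-row board
        print(f"bin {k:2d}: {'#' * (bins.get(k, 0) // 20)}")
```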

THE RED X PARADIGM

Statistical engineers, following the discipline developed by Shainin, believe that no matter how many causes have been found and controlled, the Pareto principle applies to the remaining sources of variation. Even when a system meets the Shewhart standard of equilibrium, there is always one cause–effect relationship whose contribution to variation is stronger than the others. This may be a main effect or it might be an interaction. This strongest cause–effect relationship is called the Red X.
It is difficult to comprehend the Red X paradigm if you view the world of variation through a Y = f(x) lens. A more appropriate perspective is σ²_Y = f(σ²_x). Expanding Y = f(x) produces the general equation Y = β0 + β1x1 + β2x2 + β3x3 + … + β12x1x2 + β13x1x3 + … + ε, where the coefficients are constant and each input can take on a range of values. The sum of all unidentified factors is represented by the error term, ε. Expanding σ²_Y = f(σ²_x) produces σ²_Y = β1²σ²_x1 + β2²σ²_x2 + β3²σ²_x3 + … + β12²σ²_x1σ²_x2 + β13²σ²_x1σ²_x3 + … + ε². The contribution to variance from each term is the product of the coefficient and the variance of the input(s). The Red X is the term with the largest product.
The Red X is not the only source of variation. It is the strongest. If you need to shift the location of a population, or if you are designing a new product or process, Y = f(x) is a fine model. However, if you need to reduce variation in any system, σ²_Y = f(σ²_x) is a more effective model. σ²_Y = f(σ²_x) recognizes that independent sources of variation combine as the square root of the sum of the squares. If the contribution to variation from the various inputs is different, not only will the Red X make the largest contribution to the change in Y, but its effect will be magnified by the squaring of the effects. If the Red X is contributing five units to the change in Y and a Pink X is contributing two units, the effect of the Red X is more than six times the effect of the Pink X (25:4). The only way to make a substantial reduction in variation is to find and control the Red X.
Figure 1 illustrates the difference between the Red X and the quincunx paradigms.

FIGURE 1 Two different worldviews. (Color figure available online.)
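A small numerical sketch makes the squaring argument concrete. Assuming independent inputs, so that the terms combine as the root sum of squares, each term's share of σ²_Y is its squared contribution; the illustrative numbers below (which are inventions for this sketch, not data from any project) reproduce the 25:4 comparison between a five-unit Red X and a two-unit Pink X.

```python
import math

def variance_shares(effects):
    """effects maps a term name to its contribution to the change in Y
    (beta_i * sigma_i). With independent inputs the contributions combine
    as the root sum of squares, so each term's variance share is its square
    over the total."""
    total_var = sum(e ** 2 for e in effects.values())
    return {name: (e ** 2) / total_var for name, e in effects.items()}

if __name__ == "__main__":
    # Illustrative numbers only: Red X contributes 5 units, Pink X 2 units,
    # two smaller inputs contribute 1 unit each.
    effects = {"Red X": 5.0, "Pink X": 2.0, "x3": 1.0, "x4": 1.0}
    sigma_y = math.sqrt(sum(e ** 2 for e in effects.values()))
    print(f"sigma_Y = {sigma_y:.2f}")
    for name, share in sorted(variance_shares(effects).items(), key=lambda kv: -kv[1]):
        print(f"{name:6s}: {share:5.1%} of variance")
    # Red X vs Pink X: 25 vs 4 -> the Red X term is more than six times as strong.
```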

These differences in understanding about the nature of variation drive very different strategies and
approaches to problem solving.

PROBLEM SOLVING IN A
QUINCUNX WORLD


Dr. Joiner opens his chapter on strategies for reducing variation by citing an example of fuel consumption variation in complex engines (Joiner 1994).
He observes that a strategy that looks for differences
between the best and worst engines would fail if the
engines had all been produced under a consistent set
of common cause factors that just happened to add
up on the high side one day and on the low side
another. In fact, he concluded
They could have found a factor that they thought
explained the difference between the ten best engines and
the ten worst when really it was just coincidence. They
would have learned a false lesson. They could have taken
action to put in a solution to prevent that sort of problem
in the future. This almost surely would have added to cost
and increased bureaucracy, and would not have been
effective. (p. 136)

Under the quincunx paradigm, all factors contribute equally to the change in Y. This means that
β1σ_x1 = β2σ_x2 = β3σ_x3 = … = βmσ_xm = E. In order for this to
be true, the amount of change for each input must
balance precisely with its coefficient so the products
are all equal. Because the variation in each input is a
function of operating conditions in the process or in
upstream processes and the coefficients are functions
of chemistry, physics, or geometry, the quincunx paradigm represents a remarkable set of coincidences.
This model is the foundation for the fifth of
Dr. Deming's (1986) famous 14 points: "Improve
constantly and forever the system of production and
service" (p. 23). Once special causes and easy-to-find
common causes have been eliminated, the best
you can achieve are small incremental reductions in
variation with each improvement to the system.

PROBLEM SOLVING IN A
RED X WORLD
The Red X paradigm is consistent with statistical thinking as defined by Britz et al. (1996, 2000):

• All work occurs in a system of interconnected processes.
• Variation exists in all processes.
• Understanding and reducing variation are keys to success.

R. D. Shainin (1993) would have added several sub-bullets to understanding variation:

• In the physical world nothing happens without a reason.
• There is always a Red X.
• Finding and controlling the Red X is the only way to substantially reduce variation.
• Executing a progressive search by talking to the parts is the best way to find the Red X.

The first bullet is simply a restatement of Y = f(x). It recognizes that no output can change value without a change in one or more inputs.
The second bullet applies Juran's Pareto principle to the sources of variation. It recognizes that the contribution that any input makes to the overall change in Y is the product of the term's coefficient and the range of values for the input. Interactions provide opportunities for substantial reductions in variation. By definition an interaction is a change in sensitivity. That means that the coefficients for the members of an interaction are variable. If x1 and x7 are interacting, the coefficient for x1 will depend on the value of x7 and vice versa. Independent main effects create parallel response surfaces with no opportunity for optimization. Interactions create twisted response surfaces. As the settings for each input are adjusted, the sensitivity of the response to changes in the interacting factors varies. This opportunity to reduce sensitivity is the basis for robust engineering.
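The change-in-sensitivity idea can be written down directly. In the sketch below (the coefficients are invented for illustration only), the local slope of Y with respect to x1 in a model containing a β17x1x7 term is β1 + β17x7, so the chosen level of x7 determines how strongly x1 variation is transmitted to Y; driving that slope toward zero is the robust-engineering opportunity.

```python
def sensitivity_to_x1(beta1, beta17, x7):
    """dY/dx1 for Y = b0 + b1*x1 + b7*x7 + b17*x1*x7: the slope in x1
    depends on where x7 is set, which is what 'interaction' means here."""
    return beta1 + beta17 * x7

if __name__ == "__main__":
    beta1, beta17 = 2.0, -1.5          # illustrative values only
    for x7 in (0.0, 0.5, 1.0, 4.0 / 3.0):
        s = sensitivity_to_x1(beta1, beta17, x7)
        print(f"x7 = {x7:4.2f} -> dY/dx1 = {s:+.2f}")
    # At x7 = -beta1 / beta17 (about 1.33 here) the response is locally
    # insensitive to x1 variation: one robust-engineering style choice.
```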
The third bullet recognizes the importance of the
square root of the sum of the squares law for combining independent sources of variation. As
described earlier, the relative effect of the Red X on
the change in Y is magnified when the effects are
squared. Finding and controlling an input that is
not the Red X or, in the case of an interaction, part
of the Red X will reduce variation but the improvement will be minor. Irrespective of the techniques
used, any project that results in a substantial
improvement in variation must have found and controlled the Red X.
If finding and controlling the Red X is the key to
system or process improvement, then a mathematical
model in the form of Y = f(x) is not necessary. In fact, it is unproductive. Too much time is spent reconfirming relationships that are already known and
controlled.
Statistical engineers, following Dorian's discipline, approach problem solving as detective work. They generate clues by studying patterns in the change of Y to eliminate everything that cannot be the Red X. This is a contrast-based approach to problem solving that determines what is different rather than what is wrong.
The following conversation between Holmes and
Watson from A Study in Scarlet, the first Sherlock
Holmes novel, expresses the thought process well:
"I have already explained to you that what is out of the common is usually a guide rather than a hindrance. In solving a problem of this sort, the grand thing is to be able to reason backwards. That is a very useful accomplishment, and a very easy one, but people do not practice it much. In the every-day affairs of life it is more useful to reason forwards, and so the other comes to be neglected. There are fifty who can reason synthetically for one who can reason analytically."
"I confess," said I, "that I do not quite follow you."
"I hardly expected that you would. Let me see if I can make it clearer. Most people, if you describe a train of events to them, will tell you what the result would be. They can put those events together in their minds, and argue from them that something will come to pass. There are few people, however, who, if you told them a result, would be able to evolve from their own inner consciousness what the steps were which led up to that result. This power is what I mean when I talk of reasoning backwards, or analytically." (p. 355)

Following Shainin's principles, an effective statistical engineering investigation uses a series of iterative steps to converge on the identity of the Red X. The following steps can be found in every successful project (a schematic sketch of the convergent loop follows the list):
1. Narrow the focus of the project to a single
problem.
2. Develop a clue generation strategy depending
on the nature of the problem and the nature of
the system or process. Initial strategies never
include a list of variables.
3. Execute the strategy to eliminate broad categories of potential variation sources. Clue generators are statistical tools that leverage
stratification and disaggregation to discover the
largest sources of variation. They are more
hands-on than observational studies but less
detailed than designed experiments.
R. D. Shainin

4. Update the strategy based on the remaining potential sources of variation.
5. Execute the revised strategies, further narrowing
the search. Depending upon the nature of the
problem, steps 4 and 5 may be repeated a number of times.
6. Confirm the identity of the Red X with a
designed experiment. Steps 1 through 5 are
the equivalent of a police investigation resulting
in the arrest of one or more suspects. Step 6 is
the equivalent of prosecuting the case in court.
The statistical engineer expects to turn the
problem on and off like a light switch by controlling the input levels of the Red X. Depending on the number of suspects still left at step 6,
the test might be a factorial or a single factor
experiment.
7. Understand the relationship between the Red X
and Y. If the Red X is an independent main
effect, this step involves taking an appropriate
sample to infer the strength of this relationship
with respect to all other sources of variation. If
the Red X is an interaction, this step will involve
a response surface map to identify opportunities
for optimization.
8. Consider potential solutions. Now that the Red X
relationship is well understood, this is an appropriate step for brainstorming potential corrective
actions including seeking input from the operators who must implement the solution. If the
Red X is a main effect, the options for controlling
its contribution to overall variation will be limited; it simply must be controlled more tightly.
On the other hand, when the Red X is an interaction, there are often multiple alternatives for
optimization.
9. Implement the corrective action and track the
results. This is the appropriate step for PDCA (plan-do-check-act)
to ensure that the corrective action produces
the expected results and does not cause unintended problems.
10. Leverage lessons learned. In most cases the Red
X is a surprise that leads to a deeper understanding of either the product or the process. This is
the kind of profound knowledge that Dr.
Deming supported. The insights gained should
be communicated to engineering to improve
the math models for the product or process performance and improve future designs.
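Steps 2 through 6 form a convergent loop. The schematic below is only a paraphrase of that logic; the helper names (generate_clues, eliminate, confirm) are hypothetical placeholders for problem-specific work, not Shainin tools.

```python
def progressive_search(suspects, generate_clues, eliminate, confirm, max_rounds=10):
    """Schematic of the convergent search in steps 2-6: generate clues,
    strike out whatever cannot be the Red X, repeat, and then confirm what
    remains with a designed experiment. All callables are problem-specific."""
    remaining = set(suspects)
    for _ in range(max_rounds):
        if len(remaining) <= 2:            # few enough suspects to 'prosecute'
            break
        clues = generate_clues(remaining)  # strategically collected contrasts
        remaining = eliminate(remaining, clues)
    return {s for s in remaining if confirm(s)}  # confirmed Red X candidate(s)
```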


WIPER SWEEP ANGLE, A STATISTICAL ENGINEERING PROJECT
A manufacturer had won a contract to design and
manufacture the wiper system for a new flagship
minivan. The contract was important. Though this
manufacturer had been building wiper systems for
many years, this was the first time it had responsibility
for design. Engineering developed a sophisticated
computer simulation model for the performance of
the wiper system. It was used for the design. The
manufacturer had bid the contract with no allowance
for scrap, rework, or containment. They needed a
capable process.
During the prototype building process a critical
performance characteristic, sweep angle, was found
to be out of specification. Sweep angle determines
how much of the windshield surface will be cleared
with each sweep of the blade. If the sweep angle is
too low, the wiper system will not meet government
safety standards. If the sweep angle is too high, the
wiper blade will hit the support column at the edge
of the windshield, creating an objectionable noise.
Across multiple prototype builds, 10% of the wiper
systems had low sweep angle. The production system was stable but incapable. The variation was from
common causes.
Someone suggested that the problem could be
caused by the prototype tooling and that it would
be fine once the regular production tooling was
available. However, when the production tooling
arrived, the problem was still there.
After the usual manufacturing vs. engineering dispute, in which engineering accused manufacturing
of not controlling their processes and manufacturing
claimed that engineering had provided a poor
design, someone suggested they look at the computer simulation model. Engineering analyzed the
model and identified 23 features that contributed to
sweep angle. Parts were then compared to the print
and two key features were found to be out of spec.
Engineering felt vindicated and supplier quality engineers were dispatched to the suppliers to help fix
their processes. Several weeks were spent fixing
the supplier processes. However, when new wiper
systems were built with in-spec parts, 10% had a
low sweep angle.
At this point, production built 30 golden wiper
systems. For each golden system, parts were sorted

to find samples close to nominal for each of the 23 key features. Manufacturing engineers walked the
wiper systems through the process to ensure that
all procedures were properly followed. When these
30 systems were tested, 4 had a low sweep angle.
Thus far, all the actions taken in this case have
been X to Y. Subject-matter experts have studied
the symptoms and thought of Xs that could explain
the results. An effective statistical engineering project
goes Y to X.
The Y to X investigation for wiper sweep angle
started with an assessment of measurement system
discrimination: the ability to see differences among
the units. The evaluation demonstrated the effectiveness of the measurement procedure and eliminated
measurement as the Red X source.
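One generic way to check discrimination, sketched below with invented readings (an illustration, not necessarily the procedure the team used), is to measure each unit twice and compare the unit-to-unit spread with the scatter between repeat readings; a large ratio means the gauge can see real differences among the units.

```python
from statistics import pstdev

def discrimination_ratio(first_reads, second_reads):
    """Crude test-retest check: product spread (stdev of per-unit averages)
    versus measurement spread (stdev derived from repeat differences)."""
    diffs = [a - b for a, b in zip(first_reads, second_reads)]
    means = [(a + b) / 2 for a, b in zip(first_reads, second_reads)]
    sigma_meas = pstdev(diffs) / 2 ** 0.5   # each reading carries sigma_meas of error
    sigma_prod = pstdev(means)
    return sigma_prod / sigma_meas if sigma_meas else float("inf")

if __name__ == "__main__":
    # Made-up sweep-angle readings (degrees) for ten units, each measured twice.
    first  = [92.1, 90.4, 93.0, 88.7, 91.5, 89.9, 92.6, 90.1, 87.8, 93.4]
    second = [92.0, 90.6, 92.8, 88.9, 91.4, 90.1, 92.5, 90.0, 88.0, 93.3]
    print(f"discrimination ratio ~ {discrimination_ratio(first, second):.1f}")
```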
Now that the measurement system has been eliminated, we know that the Red X lives somewhere in
the wiper systems. We are now ready for the next
convergent step.
The statistical engineering team, following the
discipline developed by Shainin, selected two wiper
systems with a large difference in sweep angle performance. The Red X must be at different levels
between these two systems. If systems were selected
with a small difference, that could be caused by
other Xs. Only the Red X can cause a large change
in performance (Pareto and square root of the sum
of the squares law).
Each unit was carefully disassembled and then
reassembled with the same set of components and
retested. This step separates the performance variation between the combined effects of measurement
and assembly and the effect of the components
themselves. It answers the question: Is the Red X in
the components or in the assembly process? The
investigator does not have a preference for one result
or the other, but the direction of the investigation
will depend on the answer. This step is similar to
the disassembly and reassembly procedure described
by Ellis Ott (1975).
In this case, the two systems continued to perform
consistently with each tear-down and rebuild, indicating that the Red X is not in the assembly process.
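A rough way to read such a tear-down study is sketched below with invented numbers: if the gap between the high and low systems is many times larger than the rebuild-to-rebuild scatter, measurement and assembly are effectively ruled out and the difference must travel with the components. The five-to-one threshold is only a rule of thumb for the illustration.

```python
from statistics import mean, pstdev

def assembly_ruled_out(high_rebuilds, low_rebuilds, ratio_threshold=5.0):
    """Compare the between-unit difference with the rebuild-to-rebuild scatter.
    A large ratio says the difference travels with the components rather than
    the assembly operation. The threshold is an arbitrary rule of thumb."""
    gap = abs(mean(high_rebuilds) - mean(low_rebuilds))
    scatter = max(pstdev(high_rebuilds), pstdev(low_rebuilds), 1e-9)
    return gap / scatter >= ratio_threshold, gap / scatter

if __name__ == "__main__":
    high = [93.1, 93.0, 93.2]   # sweep angle after each rebuild (made up)
    low = [88.4, 88.6, 88.5]
    ruled_out, ratio = assembly_ruled_out(high, low)
    print(f"gap/scatter = {ratio:.1f}; assembly and measurement ruled out: {ruled_out}")
```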
Subassemblies were swapped between the two
systems. This step eliminated all components except
the motor subassembly and the intermediate linkage.
It also revealed that though the intermediate linkage
effect was statistically significant, it was minor with
regard to the difference between the systems. The Red X lived in the motor subassembly.
In order to discover the feature or property that was
driving the difference in motor performance, more
samples were needed. Additional pairs of high- and
low-sweep-angle wiper systems were identified and
the motor subassemblies were swapped to confirm
that the differences were being caused by the motors.
To maintain an appropriate balance between alpha
and beta risks (Type I and Type II errors), the number
of pairs selected will be a function of the number of
features and properties to be checked. As the number
of features increases, the number of pairs selected
must increase to keep the alpha risk reasonable. Beta
risk is controlled by creating logical pairs with a
demonstrated high- and low-performing motor in
each pair, and using discriminating measurement systems for the response variable and the tested inputs.
The initial paired comparison failed to reveal the
source of the difference. At this point the investigation had hit a roadblock. It was clear that the
Red X had been captured. The motors could be
installed into new wiper systems with random components for the rest of the system and random
assembly within the normal assembly process and
the resulting sweep angles could be reliably predicted based on the motor selected. The problem
was not that the Red X had not been captured. It
was simply hiding; that is, missing from the features
measured in the paired comparisons.
A team member noted that all of the motor measurements for the paired comparisons had been
taken statically, but the wiper system performance
was a dynamic measure. A new response variable
was developed to measure the motor performance
dynamically. That response clearly separated the
high-sweep-angle motors from the low-sweep-angle
motors. In the interest of protecting client proprietary
information, we will call this response variable J.
There was no engineering specification for response
J. In fact, prior to this investigation, it had never
occurred to anyone to think of this response.
Response J varied randomly across the incoming
motors. Its source was inherent in the system.
A simple experiment confirmed that variable J was
the system-level Red X, with no more than a 5%
alpha risk. Further, a tolerance study revealed the
range of variable J values that would keep wiper
sweep angle within its tolerance while allowing the
remaining system inputs to vary within their tolerances. At this point the problem could be contained
at the motor supplier and the wiper manufacturer
could reliably produce good systems.
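The arithmetic of such a tolerance study can be illustrated with a simplified sketch: assuming an approximately linear relationship between J and sweep angle, and reserving part of the sweep-angle tolerance for all other inputs, the allowable window on J can be backed out. The slope, intercept, and limits below are invented for illustration only.

```python
def allowed_input_window(slope, intercept, spec_lo, spec_hi, other_variation):
    """Back out the range of the input (here, response J) that keeps Y in spec
    after reserving 'other_variation' (in Y units) for the remaining inputs.
    Assumes an approximately linear Y = intercept + slope * J relationship."""
    lo = (spec_lo + other_variation - intercept) / slope
    hi = (spec_hi - other_variation - intercept) / slope
    return (lo, hi) if slope > 0 else (hi, lo)

if __name__ == "__main__":
    # Entirely illustrative numbers: sweep-angle spec of 90-96 degrees,
    # with 1.5 degrees reserved for everything that is not J.
    print(allowed_input_window(slope=0.8, intercept=50.0,
                               spec_lo=90.0, spec_hi=96.0, other_variation=1.5))
```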
Containment is not an effective solution. It should
be treated as a tourniquet and used temporarily until
the Red X is found in the supply chain. A component
search at the motor supplier eliminated the motor
assembly process and all components except for
component Q. A paired comparison of component
Q revealed that feature K was the Red X. An examination of the process that created feature K revealed
the critical process parameter that would need to
be more tightly controlled in the future. Feature K
had a manufacturing tolerance but not an engineering tolerance. Its impact on wiper sweep angle was
missed completely. It had not been identified in the
detailed model of wiper system performance. New
standardized work was developed for the production
of component Q and the problem was killed.
Though a sophisticated math model of wiper system performance with 23 contributing factors was
used to design the system, it was ineffective as a
problem-solving tool. When those 23 factors were
held as close to nominal as possible, the wiper build
still produced the full range of sweep angles. Clearly,
there was something missing. The answer was not
going to be found by studying the model. It was
found by talking to the parts. It is valuable to note
that feature K was meeting the initial manufacturing
tolerance and all motors were meeting the original
engineering specifications. The sweep angle problem could only be observed at the wiper system
level.
The statistical engineering approach used a progressive search based on a process of elimination.
Each subsequent step could only be determined
based on the results of the previous steps. Step by
step every factor that could not be the Red X was
eliminated until finally only one variable still fit the
clues. Once feature K on component Q was found
to be the Red X, the model for wiper system performance was updated.
Figure 2 illustrates the relative strength of the high-level contributors to wiper sweep angle variation. Controlling the motor feature was the only way to achieve a substantial reduction.

FIGURE 2 Wiper sweep angle analysis of variance (degrees). (Color figure available online.)

Statistical engineering, as developed by Shainin, is a contrast-based approach to problem solving.



Statistical engineers ask what is different rather than what is wrong. They look for the largest source of
variation, not all possible causes. They avoid identifying lists of potential variables until the convergent
process has substantially narrowed the possibilities.

COMPARISON TO THE HOERL AND SNEE STANDARDS
Hoerl and Snee (2010) have suggested that statistical engineering fills a gap between statistical thinking
at the strategic level and statistical tools at the operational level. They wrote: "Statistical thinking is the strategic aspect of our discipline that provides conceptual understanding and the proper context. It answers the question 'Why should we use statistics?' When this context is properly understood, a well-developed discipline of statistical engineering will provide statistically based improvement approaches based on theory and rigorous research" (p. 53).
R. D. Shainin (1993) made a similar observation
about statistical engineering and compared statistical
engineering to the strategies employed by General
Schwarzkopf in the Gulf War, noting that it fit
between the strategic direction established by
President Bush and the weapons employed by the
commanders on the ground.
Snee and Hoerl (2010) went on to list standards for problems requiring a statistical engineering solution. At Shainin Problem Solving and Prevention, there are more than 7,500 approved statistical engineering projects on file.4 These projects have been submitted by individuals seeking Shainin certification. Each one has been reviewed by a Shainin consultant to ensure that they meet our standards. Among these standards are the following: a convergent investigation to a confirmed Red X with sound strategy development and execution; rigorous application of statistical methods; and demonstration of a substantial reduction in variation with before and after data.

4. These are a small subset of successfully completed projects. Individuals who are already certified do not submit subsequent projects. In addition, many students have conducted successful investigations without seeking certification.


How do these projects compare with the goals proposed by Snee and Hoerl (2010)?
• The solution will satisfy a high-level need of the organization.
The impact of each project is a function of project selection, not the problem-solving methods employed. Most of the approved projects have been selected by client leadership to address a high-level need.
• There is no known solution to the problem.
There would be no reason to begin a statistical engineering investigation if the solution were already known. In fact, one of the major challenges is convincing problem solvers and their leadership that their preconceived notion of the root cause may not be correct.
• The problem has a high degree of complexity involving technical and nontechnical challenges.
Statistical engineering, as practiced by the thousands of engineers trained by Shainin, can be applied for either special or common cause problems. As Brian Joiner (1994, p. 136) noted, "The sources of common cause variation lie hidden deep within a system." Finding the Red X requires understanding the structure of complex systems and progressively eliminating areas that do not contain the Red X. Virtually every investigation involves political disputes. These are often between engineering and manufacturing, customers and suppliers, or different manufacturing groups within a company. As potential root cause sources are eliminated by talking to the parts, most of the disputes are quickly resolved.
• More than one statistical technique is required for the solution. Typically, nonstatistical techniques are also required.
Statistical engineering projects always require multiple statistical tools. It is simply not possible to follow a convergent strategy otherwise. They also require nonstatistical tools. Some examples include strategy diagrams, function models, event-to-energy transforms, and energy accounting. Statistical engineers often work with experts in materials science, chemistry, software, and business processes. However, they never ask the experts to list possible sources of variation. Experts are guided in the development of strategy to determine which parts to test and how to test them. The key to finding the Red X is the engineering insight to correctly interpret clues generated from studying patterns of variation from strategically collected samples of system outputs.
• Long-term successes require embedding solutions into work processes, typically through using custom software and integrating with other sciences and other disciplines.
No process improvement is possible without changing something. At one extreme this could be as simple as tighter control on a process input with new standardized work established to maintain the gains. In other cases, a design change produces the optimum solution. Unless the Red X lives in software, it is unusual that custom software is required for control.
• The whole is greater than the sum of the parts. The impact is greater than what could be achieved with individual tools.
Every successful statistical engineering project results in new profound knowledge. Relationships that were inadequately understood are discovered and controlled. It is not possible to follow a convergent strategy without the use of multiple statistical and nonstatistical tools.
• Theoretical foundation is needed to guide solution development. We understand why it works.
An essential step in every statistical engineering investigation is the understand phase. Once the Red X is discovered and confirmed, the relationship between the Red X and the Green Y5 is studied in detail to determine the optimum alternatives for improvement. This involves a combination of statistical and nonstatistical techniques.
• The solution can be leveraged to similar problems elsewhere. It is not just a one-off.
Every statistical engineering project ends with a leverage phase. In some cases this is as simple as recognizing that the Red X is active in other similar processes. More value is delivered when the strategies employed for one problem are applied to new problems and new Red Xs are found and controlled. Finally, the most valuable form of leverage occurs when leadership understands the concepts and language of statistical engineering and it becomes ingrained in the organization's culture.

5. A Green Y® is a system output that has been developed or selected to provide engineering insight. Its patterns of change guide the investigator to the Red X.

Following are a few examples of statistical engineering projects where the Red X was a complete
surprise:
1. Engine block burn-in: An automotive foundry had
been tolerating high scrap and rework costs associated with burn-in on cast iron engine blocks.
Burn-in is a condition where sand from the mold
becomes embedded in the skin of the casting.
The problem had persisted for more than 40 years
and the plant metallurgists believed that it was
inherent in the process (common cause). A
contrast-based convergent investigation started
with the development of a new response to create
variable data and a measurement system assessment to confirm the new measurement system's
ability to discriminate. These steps were followed
by a multivari (R. D. Shainin 2008b) study to
identify the largest family of variation; a group
comparison to isolate variables that were at different levels during low and high burn-in times; and
a full-factorial experiment that identified a
three-factor chemical interaction as the Red X. A
response surface map identified the optimum
factor settings to eliminate the burn-in. Even after
the answer had been found and demonstrated,
the lead metallurgist resisted changing procedures
because the answer did not conform to his vision
of how the process should be working (Deland
and Meyer 1990).
2. Risk reducing the development of a new
ammonia sensor: Bovenzi et al. (2010) described
the use of Isoplot®, Function Models™, Variable Search™, and Tolerance Parallelograms™ to discover critical cause–effect relationships impacting high-risk functions in a new ammonia sensor to
be used in diesel engine applications. The resulting profound knowledge exposed and resolved
potential problems early in the development
process.
3. Braze voids: An automotive supplier was experiencing high scrap on air-conditioning condensers.
The scrap levels were impacting product shipments and threatening the launch of a new flagship vehicle at the automotive original
equipment manufacturer (OEM). The problem
had been going on for months, but the supplier
had been keeping up with shipments because
required volumes were low during the initial
vehicle launch period. Now that the production
rates had ramped up to planned levels, shipments
were being missed. Subject-matter experts were
convinced that the problem lived in the braze
oven. A statistical engineer used a defect strategy
diagram to plan sample stratification; a measurement system analysis of the leak test and leak
location identification process to confirm
adequate discrimination; a concentration diagram
for each stratified process flow to identify patterns
of variation; and a statistical confirmation test to
confirm the results. The Red X was an interaction
in a stacking operation upstream from the braze
oven (common cause). The investigation took
about 24 hours including 12 hours for the confirmation test due to process cycle times.
The existing discipline holds up well to the goals
and standards recommended by Snee and Hoerl
(2010). This brings us back to our earlier question:
With its broad application and long history of success, why has statistical engineering been largely
ignored or dismissed by the statistical community?

RESEARCH AND DEVELOPMENT IN THE FIELD OF STATISTICAL ENGINEERING
Hoerl and Snee (2010) wrote, "When this context is properly understood, a well-developed discipline of statistical engineering will provide statistically based improvement approaches based on theory and rigorous research" (p. 53).

Shainin Problem Solving and Prevention has been engaged in just such research for decades. Statistical
engineers have developed or enhanced statistical,
engineering, and management methods to improve
the effectiveness and efficiency of problem solving
and problem prevention. Since the early 1990s,
Shainin Problem Solving and Prevention has been
supported by two statisticians, Bennett and Lucas,
who have confirmed the statistical rigor of the methods developed and enhanced by Dorian Shainin and
helped the organization ensure that new statistical
methods are rigorous.
Statistical engineers, following the discipline
developed by Shainin, have developed focused strategies for solving variation problems in product
reliability, system performance, manufacturing variation in dimensions, material properties and defects,
part damage in both manufacturing and field service,
and business processes.
Strategies and tactics for product reliability
address infant mortality and accidental and
wear-out failure modes. They include techniques
for capturing rare field failures and discovering
the source for problems characterized as "no trouble found" by the manufacturer. R. D. Shainin
(2008a) contrasted the Shainin Red X approach to
solving a challenging reliability problem to engineering judgment and define-measure-analyze-improve-control (DMAIC).
Assuring reliability in new products includes testing for conceptualized failure modes; discovering
the weakest link remaining in any design and assessing the risk that customers will experience that failure mode in service; and monitoring in-service
products to ensure that the models of customer
usage used to evaluate the design are useful
(Abrahamian et al. 2010). Dorian Shainin (1969)
described the program to ensure the reliability of
the lunar module for the Apollo program.
Solving problems in complex system performance
often requires an understanding of macro conditions
for operating and environmental factors (intermittent
problems). Strategies have been developed and
refined to identify these critical conditions and then
measure performance in terms of energy usage.
The ability to track energy consumption often reveals
surprising relationships and sometimes uncovers the
Red X with a single troubled unit. Maxson (2008)
provided examples of using energy to solve both
system performance and destructive events. Dao and Maxson (2007) described a complicated problem
with leaky drum brakes. They took the defect rate
from 7% to zero.
Solving manufacturing variation problems for
either dimensions or material properties requires
the development of response variables that reveal
detailed patterns of variation related to the physical
structure of the manufacturing process. These patterns allow the investigators to eliminate those parts
of the process that cannot contain the Red X and progressively converge to the largest source of variation.
Deland and Meyer (1990) and R. D. Shainin (2008b)
described examples of complex processes where
Red X interactions were discovered.
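In the spirit of the multi-vari studies cited above, a crude range-based split of the observed variation into within-piece, piece-to-piece, and time-to-time families shows which part of the process structure dominates; the sketch and numbers below are illustrative only and are not taken from the cited studies.

```python
from statistics import mean

def family_ranges(samples):
    """samples: {time: {piece: [measurements within one piece]}}.
    Returns crude range-based sizes for three classic families of variation."""
    within = max(max(vals) - min(vals)
                 for pieces in samples.values() for vals in pieces.values())
    piece_means = {t: [mean(v) for v in pieces.values()] for t, pieces in samples.items()}
    piece_to_piece = max(max(ms) - min(ms) for ms in piece_means.values())
    time_means = [mean(ms) for ms in piece_means.values()]
    time_to_time = max(time_means) - min(time_means)
    return {"within piece": within, "piece to piece": piece_to_piece,
            "time to time": time_to_time}

if __name__ == "__main__":
    # Invented diameters: two pieces per hour, three readings per piece, three hours.
    data = {
        "08:00": {"p1": [10.02, 10.03, 10.01], "p2": [10.05, 10.06, 10.04]},
        "09:00": {"p1": [10.01, 10.02, 10.02], "p2": [10.07, 10.06, 10.08]},
        "10:00": {"p1": [10.00, 10.01, 10.02], "p2": [10.05, 10.04, 10.06]},
    }
    for family, size in family_ranges(data).items():
        print(f"{family:14s}: {size:.3f}")
```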
Solving manufacturing problems involving defects
(localized material flaws) requires stratifying samples
based on the physical layout of the process and a
detailed analysis of the flaw. Convergence to the
Red X is based on understanding why the flaw
appears at particular locations rather than others.
Kalellis (2005) provides a case study involving
defects in the production of insulated wire.
Damage problems, whether they occur within a
manufacturing process or in the field, require an
understanding of the nature of the energy that
caused the damage and the ability to measure the
product's strength to resist damage. An event-to-energy transform is a Shainin-developed engineering technique that converts a damaging event into
a measure of the energy required to create the
damage. This response provides insights into patterns of variation that produce convergence to the
Red X (Maxson 2008). Dao and Maxson (2009) provided a case study for the solution of a destructive
event problem where wire bonds were failing in
the field.
Hoying (2011) described the unique approach she
developed to talk to business process occurrences
rather than manufactured parts. Hoying's system
applies a functional understanding of the business
process requiring improvement (either problem solving or optimization), evaluates relative risk, and
assesses system hierarchies to select precise high-value actions to improve system performance.
Hoying reported savings of tens of millions of dollars
in diverse applications ranging from finance to information technology to logistics, scheduling, and other
transactional functions.

A key to any successful performance improvement program is the active engagement of leadership that
goes well beyond cheerleading. A key to the
increased impact of Shainin statistical engineering
has been the development of leadership skills in
selecting and sponsoring projects. P. D. Shainin et al.
(1997) provided a roadmap for managing an effective
statistical engineering program. R. D. Shainin (2011)
illustrated that either methods or leadership without
the other leads to mediocre results and postulated that
strong leadership combined with effective methods
provides the synergistic effect of an interaction.
Statistical engineering, as developed by Shainin
and practiced over the past six decades, has been
strongly influenced by the work of Sir Ronald Fisher
and John Tukey and has benefited from direct contributions from Frank Satterthwaite and Carl Bennett,
but there are few statisticians practicing this discipline. Most statistical engineers come from an engineering background. We use statistics in the same way
we use calculus or algebra. It is a tool that must be
used with rigor, but it is not the primary focus of
the discipline. The challenges faced by statistical
engineers following the Shainin approach are mostly
engineering challenges. When trying to find the
cause of field failures affecting one part in 10,000,
the challenge is to find test conditions and product
performance responses that reveal the source of
the strongest differences. The key is not the tools
themselves but the ways in which they are combined
to solve tough quality and reliability problems and
to prevent problems in new products and in new
and existing processes (both manufacturing and
business).
Statistical engineering does fill the gap identified by
Hoerl and Snee (2010) between strategic thinking and
the application of tools. Virtually every statistical
engineering projects employs a series of statistical
tools to converge on the largest cause of variation
and then to test the proposed solutions. Statistical
engineering projects following the Shainin approach
have high impact and skilled statistical engineers are
valued resources within their organizations. In statistical engineering the most critical skill is the ability to
develop effective convergent strategies that will
quickly eliminate all sources of variation except the
largest. Those strategies must be executed with discipline and the suspected root cause must be confirmed
with an appropriate statistical test.

CONCLUSIONS


The path that Hoerl and Snee (2010) are proposing to formalize has already been blazed. Statistical
engineering as developed by Shainin fills an important position between strategic and statistical thinking and the application of both statistical and
nonstatistical tools. Sound statistical engineers are
rigorous in their use of statistics. They understand
when to take random samples and when to take
stratified samples. They understand the balance
between alpha risk and beta risk and the dangers
of making unsupported inferences from samples.
They use engineering insight to select stratified samples or disaggregate process or system responses in
order to see the patterns of variation that lead to convergence. Their approaches often mirror the procedures suggested by Ellis Ott (1975).

ABOUT THE AUTHOR


Richard D. Shainin has been a practicing statistical
engineer since 1991. Prior to joining Shainin Problem
Solving and Prevention, he worked for AT&T where
he led teams in engineering, operations, marketing
and sales. He earned a bachelor of engineering
degree from Stevens Institute of Technology in
Hoboken, New Jersey, and an MBA from American
University in Washington, D.C. During his graduate
studies, he had the opportunity of studying nonparametric statistics with Dr. Hubert Lilliefors at George
Washington University. Since joining Shainin,
Richard has had the opportunity to work with several
distinguished statisticians including Carl Bennett and
Jim Lucas. Richard Shainin is the author of the
Multi-vari Charts chapter in the Encyclopedia of
Statistics in Quality and Reliability. Richard Shainin
has trained thousands of engineers in statistical
engineering concepts, strategies, and techniques.

REFERENCES
Abrahamian, J., Hell, R., Hysong, C. (2010). The Red X® System for product reliability. Available at: https://shainin.com/library/reliability_white_paper (accessed 28 December 2011).
Balestracci, D. (2008). Shainin system discussion. Quality Engineering, 20:31–35.
Bovenzi, P., Bender, D., Bloink, R., Conklin, M., Abrahamian, J. (2010). Risk reducing product and process design during new product development. Available at: https://shainin.com/ammonia_sensor_sae_paper.pdf (accessed 28 December 2011).
Britz, G. C., Emerling, D. W., Hare, L. B., Hoerl, R. W., Janis, S. J., Shade, J. E. (1996). Statistical Thinking. Milwaukee, WI: ASQ Quality Press.
Britz, G. C., Emerling, D. W., Hare, L. B., Hoerl, R. W., Janis, S. J., Shade, J. E. (2000). Improving Performance through Statistical Thinking. Milwaukee, WI: ASQ Quality Press.
Dao, H., Maxson, W. (2007). Y to X problem solving using Shainin strategies. Automotive Excellence, Spring:14–16. Available at: https://shainin.com/YToXProblemSolvingWithShaininStrategies.pdf (accessed 28 December 2011).
Dao, H., Maxson, W. (2009). A convergent approach to problem solving. Paper read at ASQ's World Conference on Quality and Improvement, May 18–20, Minneapolis, MN.
Deland, T., Meyer, M. (1990). A revolutionary system for solving quality problems. Available at: http://asq.org/qic/display-item/index.html?item=9564 (accessed 28 December 2011).
Deming, W. E. (1986). Out of the Crisis. Cambridge, MA: MIT Center for Advanced Engineering Study.
Doyle, A. C. (1887). A Study in Scarlet. London: Ward Lock & Co.
Hoerl, R. W., Snee, R. D. (2010). Closing the gap: Statistical engineering can bridge statistical thinking with methods and tools. Quality Progress, 43(5):52–53.
Hoying, J. (2011). TransaXional problem solving: A new paradigm. Available at: http://asq-auto.org/filedownload/downloadfile/fileid/52/src/@random4d98b342269c3/ (accessed 29 December 2011).
Joiner, B. L. (1994). Fourth Generation Management. New York: McGraw-Hill.
Juran, J. M., Gryna, F. M. (1988). Juran's Quality Control Handbook, 4th ed. New York: McGraw-Hill.
Kalellis, B. (2005). The case of wayward wires. Available at: http://www.sae.org/automag/electronics/09-2005/1-113-9-52.pdf (accessed 29 December 2011).
Maxson, W. (2008). Using energy to solve technical problems. Automotive Excellence, Spring. Available at: http://www.taproot.com/download/AE_Spring_08.pdf (accessed 29 December 2011).
Montgomery, D. C. (2008). Shainin system discussion. Quality Engineering, 20:36–37.
Nelson, L. S. (1989). World class quality book review. Journal of Quality Technology, 21(1):76–79.
NIST. http://www.nist.gov/itl/sed/index.cfm (accessed 9 October 2011).
Ott, E. (1975). Process Quality Control. New York: McGraw-Hill.
Shainin, D. (1969). Reliability: Managing a reliability program. Apollo lunar module engine exhaust products. Science, 166:733–738.
Shainin, P. D., Shainin, R. D., Nelson, M. T. (1997). Managing statistical engineering. Available at: http://asq.org/members/news/aqc/51_1997/10628.pdf (accessed 28 December 2011).
Shainin, R. D. (1993). Strategies for technical problem solving. Quality Engineering, 5(3):433–448.
Shainin, R. D. (2008a). How lean is your Six Sigma program? Six Sigma Forum Magazine, 7(4):42–45.
Shainin, R. D. (2008b). Multi-vari charts. In F. Ruggeri, R. S. Kenett, F. W. Faltin, Eds. Encyclopedia of Statistics in Quality and Reliability. London: Wiley.
Shainin, R. D. (2011). Methods vs. leadership: What matters most? Six Sigma Forum Magazine, 11(1):29–31.
Shewhart, W. A. [1931] (1980). Economic Control of Quality of Manufactured Product. New York: American Society for Quality Control.
Snee, R. D. (2009). Raise your batting average. Quality Progress, 42(12):64–68.
Snee, R. D., Hoerl, R. W. (2010). Further explanation: Clarifying points about statistical engineering. Quality Progress, 43(12):68–72.
Steiner, S. H., MacKay, R. J., Ramberg, J. S. (2008). An overview of the Shainin System™ for quality improvement. Quality Engineering, 20:6–19.