
Kocaeli University / Faculty of Engineering

Department of Mechanical Engineering

Design of Experiments
Construction and Manufacturing Laboratory

submitted to: Prof. Dr. Emel TABAN

Emre BAŞER
160217019

16th Group
Contents

ABSTRACT
INTRODUCTION TO STRATEGY OF EXPERIMENTATION
SOME TYPICAL APPLICATIONS OF EXPERIMENTAL DESIGN
BASIC PRINCIPLES
GUIDELINES FOR DESIGNING EXPERIMENTS
SUMMARY: USING STATISTICAL TECHNIQUES IN EXPERIMENTATION
INTRODUCTION TO TAGUCHI AND ANOVA
  TAGUCHI METHOD
    Philosophy of the Taguchi Method
    8-Steps in Taguchi Methodology
    The Term 'Taguchi Methods'
    Taguchi Loss Function
    Determining Parameter Design Orthogonal Array
    Static Problem (Batch Process Optimization)
      Smaller-The-Better
      Larger-The-Better
      Nominal-The-Best
    Dynamic Problem (Technology Development)
      Sensitivity (Slope)
      Linearity (Larger-The-Better)
    A Typical Orthogonal Array
    Properties Of An Orthogonal Array
    Minimum Number Of Experiments To Be Conducted
    Assumptions Of The Taguchi Method
    Selection Of The Independent Variables
    Deciding The Number Of Levels
    Selection Of An Orthogonal Array
    Assigning The Independent Variables To Columns
    Conducting The Experiment
    Analysis of the Data
    Inference
    Robust Design
    Advantages and Disadvantages
  ANALYSIS OF VARIANCE (ANOVA)
    What Does the Analysis of Variance Reveal?
    ANOVA
    Why ANOVA Instead Of Multiple T-Tests?
    Algorithm
    What Is a One-Way ANOVA?
      What Are the Hypotheses Of a One-Way ANOVA?
      What Are the Assumptions Of a One-Way ANOVA?
    What Is a Two-Way ANOVA?
      What Are the Assumptions Of a Two-Way ANOVA?
      What Are the Hypotheses Of a Two-Way ANOVA?
    Summary: Differences Between One-Way and Two-Way ANOVA
    Uses of ANOVA
    Advantages and Disadvantages
      Advantages
      Disadvantages
    Applications
SOURCES
  Books
  Articles
  Links
  Software
  Course

ABSTRACT

Design of experiments (DOE) is a method used on a very large scale to study industrial processes through experimentation. It is a statistical approach in which we develop mathematical models through experimental trial runs to predict the possible output on the basis of the given input data or parameters. The aim of this chapter is to stimulate the engineering community to apply the Taguchi technique to experimentation and the design of experiments, and to tackle quality problems in the industrial chemical processes they deal with. Based on years of research and applications, Dr. G. Taguchi standardized the methods for each of these DOE application steps. Thus, DOE using the Taguchi approach has become a much more attractive tool for practicing engineers and scientists. Over the last four decades, conventional experimental design techniques showed limitations when applied to industrial experimentation, and the Taguchi method, also known as orthogonal array design, adds a new dimension to conventional experimental design. The Taguchi method is a broadly accepted method of DOE which has proven effective in producing high-quality products at comparatively low cost.

INTRODUCTION TO STRATEGY OF EXPERIMENTATION

Observing a system or process while it is in operation is an important part of the learning process
and is an integral part of understanding and learning about how systems and processes work. Yogi Berra
said that “. . . you can observe a lot just by watching.”
However, to understand what happens to a process when you change certain input factors, you must do
more than just watch; you actually have to change the factors. This means that to really understand
cause-and-effect relationships in a system you must deliberately change the input variables to the system
and observe the changes in the system output that these changes to the inputs produce. In other words, you
need to conduct experiments on the system.
Observations on a system or process can lead to theories or hypotheses about what makes the system
work, but experiments of the type described above are required to demonstrate that these theories are
correct. Investigators perform experiments in virtually all fields of inquiry, usually to discover something
about a particular process or system. Each experimental run is a test.
More formally, we can define an experiment as a test or series of runs in which purposeful changes are
made to the input variables of a process or system so that we may observe and identify the reasons for
changes that may be observed in the output response. We may want to determine which input variables are
responsible for the observed changes in the response, develop a model relating the response to the
important input variables, and use this model for process or system improvement or other
decision-making.

In general, experiments are used to study the performance of processes and systems. The process
or system can be represented by the model shown in Figure 1.1. We can usually visualize the process as a
combination of operations, machines, methods, people, and other resources that transforms some input
(often a material) into an output that has one or more observable response variables. Some of the process
variables and material properties x1, x2, . . . , xp are controllable, whereas other variables z1, z2, . . . , zq are
uncontrollable (although they may be controllable for purposes of a test). The objectives of the experiment
may include the following:
1. Determining which variables are most influential on the response y
2. Determining where to set the influential x’s so that y is almost always near the desired nominal
value
3. Determining where to set the influential x’s so that variability in y is small
4. Determining where to set the influential x’s so that the effects of the uncontrollable variables z1,
z2, . . . , zq are minimized.

As you can see from the foregoing discussion, experiments often involve several factors. Usually, an objective of the experimenter is to determine the influence that these factors have on the output response of the system. The general approach to planning and conducting the experiment is called the strategy of experimentation. An experimenter can use several strategies.

Some factors may turn out to be unimportant because their effects are so small that they have no practical value; such factors can then be ignored. Engineers, scientists, and business analysts often must make these types of decisions about some of the factors they are considering in real experiments.
One approach would be to select an arbitrary combination of these factors, test them, and see what
happens. This approach could be continued almost indefinitely, switching the levels of one or two
(or perhaps several) factors for the next test, based on the outcome of the current test. This
strategy of experimentation, which we call the best-guess approach, is frequently used in practice
by engineers and scientists. It often works reasonably well, too, because the experimenters often
have a great deal of technical or theoretical knowledge of the system they are studying, as well as
considerable practical experience.
The best-guess approach has at least two disadvantages. First, suppose the initial best-guess does
not produce the desired results. Now the experimenter has to take another guess at the correct combination of factor levels. This could continue for a long time, without any guarantee of
success. Second, suppose the initial best-guess produces an acceptable result. Now the
experimenter is tempted to stop testing, although there is no guarantee that the best solution has
been found.

Another strategy of experimentation that is used extensively in practice is the one-factor-at-a-time (OFAT) approach. The OFAT method consists of selecting a starting point, or baseline set of levels, for each factor, and then successively varying each factor over its range with the other factors held constant at the baseline level. After all tests are performed, a series of graphs is usually constructed showing how the response variable is affected by varying each factor with all other factors held constant.
The major disadvantage of the OFAT strategy is that it fails to consider any possible interaction
between the factors. An interaction is the failure of one factor to produce the same effect on the
response at different levels of another factor.

The correct approach to dealing with several factors is to conduct a factorial experiment. This is an experimental strategy in which factors are varied together, instead of one at a time. Suppose our factorial experiment has two factors, each at two levels, and that all possible combinations of the two factors across their levels are used in the design. Geometrically, the four runs form the corners of a square. This particular type of factorial experiment is called a 2^2 factorial design (two factors, each at two levels). This experimental design would enable the experimenter to investigate the individual effects of each factor (or the main effects) and to determine whether the factors interact.

Statistical testing could be used to determine whether any of these effects differ from zero. One very important feature of the factorial experiment is evident; namely, factorials make the most efficient use of the experimental data. No other strategy of experimentation makes such an efficient use of the data. This is an important and useful feature of factorials.

We can extend the factorial experiment concept to three factors. Assume that all three factors have two levels. There are then eight test combinations of the three factors across the two levels of each, and these trials can be represented geometrically as the corners of a cube. This is an example of a 2^3 factorial design.

Four factors could similarly be investigated in a 2^4 factorial design. As in any factorial design, all possible combinations of the levels of the factors are used. Because all four factors are at two levels, this experimental design can still be represented geometrically as a cube (actually a hypercube).

Generally, if there are k factors, each at two levels, the factorial design would require 2^k runs.
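To make the factorial idea concrete, here is a minimal Python sketch (not from the original text; the factor names and response values are hypothetical) that enumerates the runs of a 2^k design and estimates the main effects and the two-factor interaction for a 2^2 example:

```python
from itertools import product

import numpy as np

def full_factorial(k):
    """Return the 2^k design matrix in coded units (-1 = low, +1 = high)."""
    return np.array(list(product([-1, 1], repeat=k)))

# Hypothetical 2^2 example: factors A and B, one response per run.
X = full_factorial(2)                   # rows: (-1,-1), (-1,+1), (+1,-1), (+1,+1)
y = np.array([28.0, 36.0, 18.0, 31.0])  # hypothetical measured responses

# Main effect = mean response at the high level minus mean at the low level.
for j, name in enumerate(["A", "B"]):
    effect = y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()
    print(f"main effect of {name}: {effect:+.2f}")

# AB interaction: half the difference between the effect of A at high B
# and the effect of A at low B; a nonzero value means the factors interact.
ab = ((y[3] - y[1]) - (y[2] - y[0])) / 2
print(f"AB interaction: {ab:+.2f}")
```

Note the economy: the same four runs estimate both main effects and their interaction, which is the efficiency property described above.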

SOME TYPICAL APPLICATIONS OF EXPERIMENTAL DESIGN

Experimental design methods have found broad application in many disciplines. As noted
previously, we may view experimentation as part of the scientific process and as one of the ways by which
we learn about how systems or processes work. Generally, we learn through a series of activities in which
we make conjectures about a process, perform experiments to generate data from the process, and then use
the information from the experiment to establish new conjectures, which lead to new experiments, and so
on.
Experimental design is a critically important tool in the scientific and engineering world for improving the product realization process. Critical areas of application include new manufacturing process design and development, and process management. The application of experimental design techniques
early in process development can result in
1. Improved process yields
2. Reduced variability and closer conformance to nominal or target requirements
3. Reduced development time
4. Reduced overall costs.
Experimental design methods are also of fundamental importance in engineering design activities, where
new products are developed and existing ones improved. Some applications of experimental design in
engineering design include
1. Evaluation and comparison of basic design configurations
2. Evaluation of material alternatives
3. Selection of design parameters so that the product will work well under a wide variety of field
conditions, that is, so that the product is robust
4. Determination of key product design parameters that impact product performance
5. Formulation of new products.

BASIC PRINCIPLES

Statistical design of experiments refers to the process of planning the experiment so that
appropriate data will be collected and analyzed by statistical methods, resulting in valid and objective
conclusions.
The statistical approach to experimental design is necessary if we wish to draw meaningful conclusions
from the data. When the problem involves data that are subject to experimental errors, statistical methods
are the only objective approach to analysis. Thus, there are two aspects to any experimental problem: the
design of the experiment and the statistical analysis of the data. These two subjects are closely related
because the method of analysis depends directly on the design employed.

The three basic principles of experimental design are randomization, replication, and
blocking. Sometimes we add the factorial principle to these three.

Randomization is the cornerstone underlying the use of statistical methods in experimental design. By randomization we mean that both the allocation of the experimental material and the order in which the individual runs of the experiment are to be performed are randomly determined. Statistical
methods require that the observations (or errors) be independently distributed random variables.
Randomization usually makes this assumption valid. By properly randomizing the experiment, we also
assist in “averaging out” the effects of extraneous factors that may be present.
For example, suppose that the specimens in the hardness experiment are of slightly different
thicknesses and that the effectiveness of the quenching medium may be affected by specimen
thickness. If all the specimens subjected to the oil quench are thicker than those subjected to the
saltwater quench, we may be introducing systematic bias into the experimental results. This bias
handicaps one of the quenching media and consequently invalidates our results. Randomly
assigning the specimens to the quenching media alleviates this problem.
Sometimes experimenters encounter situations where randomization of some aspect of the
experiment is difficult. For example, in a chemical process, temperature may be a very
hard-to-change variable as we may want to change it less often than we change the levels of other
factors. In an experiment of this type, complete randomization would be difficult because it would
add time and cost. There are statistical design methods for dealing with restrictions on
randomization.

By replication we mean an independent repeat run of each factor combination. Replication has
two important properties. First, it allows the experimenter to obtain an estimate of the experimental error.
This estimate of error becomes a basic unit of measurement for determining whether observed differences
in the data are really statistically different. Second, if the sample mean (ȳ) is used to estimate the true
mean response for one of the factor levels in the experiment, replication permits the experimenter to
obtain a more precise estimate of this parameter. The point is that without replication we have no way of knowing why the two observations are different. There is an important distinction between replication and
repeated measurements.
For example, suppose that a silicon wafer is etched in a single-wafer plasma etching process, and
a critical dimension (CD) on this wafer is measured three times. These measurements are not
replicates; they are a form of repeated measurements, and in this case the observed variability in
the three repeated measurements is a direct reflection of the inherent variability in the
measurement system or gauge and possibly the variability in this CD at different locations on the
wafer where the measurements were taken. As another illustration, suppose that as part of an
experiment in semiconductor manufacturing four wafers are processed simultaneously in an
oxidation furnace at a particular gas flow rate and time and then a measurement is taken on the
oxide thickness of each wafer. Once again, the measurements on the four wafers are not replicates
but repeated measurements. In this case, they reflect differences among the wafers and other
sources of variability within that particular furnace run. Replication reflects sources of variability
both between runs and (potentially) within runs.
Blocking is a design technique used to improve the precision with which comparisons among the
factors of interest are made. Often blocking is used to reduce or eliminate the variability transmitted from
nuisance factors—that is, factors that may influence the experimental response but in which we are not
directly interested.
For example, an experiment in a chemical process may require two batches of raw material to
make all the required runs. However, there could be differences between the batches due to
supplier-to-supplier variability, and if we are not specifically interested in this effect, we would
think of the batches of raw material as a nuisance factor. Generally, a block is a set of relatively
homogeneous experimental conditions. In the chemical process example, each batch of raw
material would form a block, because the variability within a batch would be expected to be
smaller than the variability between batches. Typically, as in this example, each level of the
nuisance factor becomes a block. Then the experimenter divides the observations from the
statistical design into groups that are run in each block.

GUIDELINES FOR DESIGNING EXPERIMENTS

To use the statistical approach in designing and analyzing an experiment, it is necessary for
everyone involved in the experiment to have a clear idea in advance of exactly what is to be studied, how
the data are to be collected, and at least a qualitative understanding of how these data are to be analyzed.
1. Recognition of and statement of the problem.
a. Factor screening or characterization.
b. Optimization.
c. Confirmation.
d. Discovery.
e. Robustness.

2. Selection of the response variable.
3. Choice of factors, levels, and range. (Steps 1 through 3 constitute pre-experimental planning.)
4. Choice of experimental design.
5. Performing the experiment.
6. Statistical analysis of the data.
7. Conclusions and recommendations.

SUMMARY: USING STATISTICAL TECHNIQUES IN EXPERIMENTATION

Much of the research in engineering, science, and industry is empirical and makes extensive use
of experimentation. Statistical methods can greatly increase the efficiency of these experiments and often
strengthen the conclusions so obtained. The proper use of statistical techniques in experimentation
requires that the experimenter keep the following points in mind:
- Use your nonstatistical knowledge of the problem.
Experimenters are usually highly knowledgeable in their fields. In some fields, there is a
large body of physical theory on which to draw in explaining relationships between
factors and responses. This type of nonstatistical knowledge is invaluable in choosing
factors, determining factor levels, deciding how many replicates to run, interpreting the
results of the analysis, and so forth.
- Keep the design and analysis as simple as possible.
Don’t be overzealous in the use of complex, sophisticated statistical techniques.
Relatively simple design and analysis methods are almost always best. If you do the
pre-experiment planning carefully and select a reasonable design, the analysis will almost
always be relatively straightforward.
- Recognize the difference between practical and statistical significance.
Just because two experimental conditions produce mean responses that are statistically
different, there is no assurance that this difference is large enough to have any practical
value.
- Experiments are usually iterative.
Generally, we are not well equipped to answer all the relevant questions at the beginning of the
experiment, but we learn the answers as we go along. This argues in favor of the iterative,
or sequential, approach discussed previously. Of course, there are situations where
comprehensive experiments are entirely appropriate, but as a general rule most
experiments should be iterative. Consequently, we usually should not invest more than
about 25 percent of the resources of experimentation (runs, budget, time, etc.) in the
initial experiment. Often these first efforts are just learning experiences, and some
resources must be available to accomplish the final objectives of the experiment.

INTRODUCTION TO TAGUCHI AND ANOVA

If you think back to the late 1970s, that was an era where a lot of companies in the US began to be
negatively impacted by foreign competition, particularly competition from Japan. And this was in the
automobile industry and in the electronics industry. And companies were really wondering how the
Japanese had become so effective. Because it was only in the mid-1940s that they had lost a world war,
and yet here, in something like 25 years, they have rebuilt their industrial base. And they are producing
products that are, in many cases, as good or better than anything we can make, and they are doing it at a
lower price. Well, how are they doing this?
One of the things that companies discovered is that the Japanese had been paying a lot of attention
to Dr. Deming. And Deming had advocated the use of statistical methods to help rebuild their industry,
and they began using designed experiments quite extensively. A Japanese engineer named Genichi
Taguchi had played a primary role in getting companies like Toyota started down this path. And so as
soon as US companies found out that statistical methods and experimental design were prominent tools in
Japanese success, they began to get very interested in these methods here. And suddenly, we found
automobile companies, aerospace companies, electronics companies, people that made everything from
semiconductors to radios to television sets getting interested in using designed experiments. And from,
roughly the late 70s until the early 90s, there was an enormous boom in the use of these methods.

Taguchi’s DOEs are denoted by ‘La(b^c)’, where ‘a’ is the number of rows of the orthogonal array (the design matrix), ‘b’ is the number of levels of the variables, and ‘c’ is the number of variables. The Taguchi method is a broadly accepted method of DOE which has proven effective in producing high-quality products at comparatively low cost.
This method is regularly used in automobile, electronics, and other processing industries. The
objective of the Taguchi method is to determine the optimum settings of input parameters, neglecting the
variation caused by uncontrollable factors or noise factors. Factor here refers to an input variable whereby
the state can be controlled during the experiment. Taguchi method, a systematic application in design and
analysis of experiments, is used for designing and improving product quality. It has become a powerful
tool for improving productivity during research and development so that high quality products can be
produced at reduced costs. However, the original Taguchi method was designed to optimize a single
performance characteristic. Furthermore, optimization of multiple performance characteristics is much
more complicated than optimization of a single performance characteristic. Although similar to DOE, the
Taguchi design only conducts the balanced (orthogonal) experimental combinations, which makes the
Taguchi design even more effective than a fractional factorial design. By using the Taguchi technique,
industries are able to greatly reduce product development cycle time for both design and production thus
reducing costs and increasing profit.
Taguchi proposed that engineering optimization of a process or product should be carried out in a
three-step approach: system design, parameter design, and tolerance design. In system design, the engineer applies scientific and engineering knowledge to produce a basic functional prototype design. The
objective of the parameter design is to optimize the settings of the process parameter values for improving
performance characteristics and to identify the product parameter values under the optimal process
parameter values. The steps included in the Taguchi parameter design are: selecting the proper Orthogonal
Array (OA) according to the number of controllable factors (parameters); running experiments based on
the OA; analyzing data; identifying the optimum condition; and conducting confirmation runs with the
optimal levels of all the parameters. The main effects indicate the general trend of influence of each
parameter.
Knowledge of the contribution of individual parameters is the key to decide the nature of the
control to be established on a production process. ANOVA is the statistical treatment most commonly
applied to the results of the experiments to determine the percentage contribution of each parameter
against a stated level of confidence. Taguchi suggests two different routes for carrying out the complete
analysis. In the standard approach, the results of a single run or the average of repetitive runs are
processed through the main effect and ANOVA (raw data analysis). The second approach, which Taguchi
strongly recommends for multiple runs, is to use the Signal-to-Noise (S/N) ratio for the same steps in the
analysis.
The Grey system theory proposed by Deng (1989) has been proven to be useful for dealing with
poor, incomplete, and uncertain information. The Grey relational analysis can be used to solve
complicated interrelationships among multiple performance characteristics effectively. By this analysis, a
Grey relational grade is obtained to evaluate the multiple performance characteristics. As a result,
optimization of the complicated multiple performance characteristics can be converted into optimization
of a single Grey relational grade. An advantage of the Taguchi method is that it emphasizes a mean
performance characteristic value close to the target value rather than a value within certain specific limits,
thus improving the product quality. Additionally, Taguchi's method for experimental design is
straightforward and easy to apply to many engineering situations, making it a powerful yet simple tool. It
can be used to quickly narrow the scope of a research project or to identify problems in a manufacturing
process from data already in existence.
It is probably unfortunate that the important concepts advocated by Taguchi have been
overshadowed by controversy associated with his approach to modeling and data analysis. There have
been many papers and several books explaining, reviewing, or criticizing Taguchi’s ideas. Most of these,
however, have not adequately captured the diverse views on the topic and their underlying rationale. In
particular, the Taguchi methodology has not been well represented in statistical journals. These
considerations led to a published “panel discussion” by a group of practitioners and researchers. The
topics covered include the importance of variation reduction, the use of noise factors, the role of
interactions and selection of quality characteristics, S/N ratios, experimental strategy, dynamic systems,
and applications. Panelists provided comments on topics on which they have worked or with which they
had practical experience. Their comments were organized into sections to give readers a balanced picture
of the different views on each topic. In this panel discussion, Shin Taguchi proclaimed “Pure science
strives to discover the causal relationship and to understand the mechanics of how things happen.
Engineering, however, strives to achieve the result needed to satisfy the customer”. In the same paper,
Box among others, declared his profound disagreement with this claim. Jeff Wu, one of the panelists,
pointed out that the S/N ratio was an objective of the analysis.
Raghu Kacker suggested that it should be kept in mind that the Taguchi method is not a universal approach. The role of interactions has been debated vigorously since Taguchi’s approach to experimental design for robust products and processes became known in the United States. A perception persists in the
statistical literature that Taguchi’s approach assumes that interactions are absent and hence the method is
unscientific. The presence of interactions implies that a much larger number of experiments would be
needed to study the same number of control factors. Understanding contributions to noise factors is very
difficult. Knowledge of the noise factors and their behavior is an important prerequisite to an efficient
experiment. If the noise factor exhibits variation and further experimental detail is required using an outer
array configuration, then understanding of the range of variation of the noise factor will be required to
select the factor levels. Often noise factors cannot be controlled during the experimentation. De Mast
(2004) compared three methodologies (Shainin system, Taguchi methods, Six Sigma) for quality
improvement and concluded that Taguchi methodology falls short in the exploration phase, for which it provides only limited guidance, and that its focus on picking optimal settings (as opposed to gaining insight into the system) is debatable.

The main disadvantage of the Taguchi method is that the results obtained are only relative and do
not exactly indicate what parameter has the highest effect on the performance characteristic value. Also,
since orthogonal arrays do not test all variable combinations, this method should not be used with all
relationships between all variables. Taguchi method has been criticized in the literature for its difficulty in
accounting for interactions between parameters. Another limitation is that the Taguchi methods are
offline, and therefore inappropriate for a dynamically changing process such as a simulation study.
Furthermore, since the Taguchi methods deal with designing quality rather than correcting for poor
quality, they are applied most effectively at early stages of process development.

Every experimenter has to plan and conduct experiments to obtain enough relevant data so that he can infer the science behind the observed phenomenon. He can do so by:
- Trial-and-error approach
- Design of experiments
- Taguchi Methods

TAGUCHI METHOD

The Taguchi method involves reducing the variation in a process through robust design of
experiments. The overall objective of the method is to produce high quality product at low cost to the
manufacturer. The Taguchi method was developed by Dr. Genichi Taguchi of Japan, whose central concern was the reduction of variation. Taguchi developed a method for designing experiments to investigate how different parameters affect the mean and variance of a process performance characteristic that defines how well the process is functioning. The experimental design proposed by Taguchi involves using orthogonal arrays to organize the parameters affecting the process and the levels at which they should be varied. Instead of having to test
all possible combinations like the factorial design, the Taguchi method tests pairs of combinations. This
allows for the collection of the necessary data to determine which factors most affect product quality with
a minimum amount of experimentation, thus saving time and resources. The Taguchi method is best used
when there is an intermediate number of variables (3 to 50), few interactions between variables, and when
only a few variables contribute significantly.

The Taguchi arrays can be derived or looked up. Small arrays can be drawn out manually; large
arrays can be derived from deterministic algorithms. Generally, arrays can be found online. The arrays are
selected by the number of parameters (variables) and the number of levels (states). This is further
explained later in this article. Analysis of variance on the collected data from the Taguchi design of
experiments can be used to select new parameter values to optimize the performance characteristic. The
data from the arrays can be analyzed by plotting the data and performing a visual analysis, ANOVA, bin
yield and Fisher's exact test, or Chi-squared test to test significance.
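As a brief, hedged sketch of that analysis step (the response data are hypothetical, grouped by the level at which one control factor was set; this uses scipy's standard one-way ANOVA routine):

```python
import numpy as np
from scipy import stats

# Hypothetical responses from a Taguchi experiment, grouped by the level
# (1, 2, or 3) of a single control factor across the runs.
level_1 = np.array([12.1, 11.8, 12.4])
level_2 = np.array([14.0, 13.6, 14.3])
level_3 = np.array([12.9, 13.1, 12.7])

# One-way ANOVA: does this factor's level significantly shift the response?
f_stat, p_value = stats.f_oneway(level_1, level_2, level_3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# The usual visual companion is a main-effects plot of the level means.
for i, grp in enumerate([level_1, level_2, level_3], start=1):
    print(f"mean response at level {i}: {grp.mean():.2f}")
```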

Philosophy of the Taguchi Method


Quality should be designed into a product, not inspected into it. Quality is designed into a process
through system design, parameter design, and tolerance design. Parameter design is performed by
determining what process parameters most affect the product and then designing them to give a specified
target quality of product. Quality "inspected into" a product means that the product is produced at random
quality levels and those too far from the mean are simply thrown out.
Quality is best achieved by minimizing the deviation from a target. The product should be designed so that
it is immune to uncontrollable environmental factors. In other words, the signal (product quality) to noise
(uncontrollable factors) ratio should be high.
The cost of quality should be measured as a function of deviation from the standard and the losses should
be measured system wide. This is the concept of the loss function, or the overall loss incurred upon the
customer and society from a product of poor quality. Because the producer is also a member of society
and because customer dissatisfaction will discourage future patronage, this cost to customer and society
will come back to the producer.

8-Steps in Taguchi Methodology


Step-1: Identify the main function, side effects, and failure mode
Step-2: Identify the noise factors, testing conditions, and quality characteristics
Step-3: Identify the objective function to be optimized
Step-4: Identify the control factors and their levels
Step-5: Select the orthogonal array matrix experiment
Step-6: Conduct the matrix experiment
Step-7: Analyze the data, predict the optimum levels and performance
Step-8: Perform the verification experiment and plan the future action

13
Define the process objective, or more specifically, a target value for a performance measure of the
process. This may be a flow rate, temperature, etc. The target of a process may also be a minimum or
maximum; for example, the goal may be to maximize the output flow rate. The deviation in the
performance characteristic from the target value is used to define the loss function for the process.
Determine the design parameters affecting the process. Parameters are variables within the
process that affect the performance measure such as temperatures, pressures, etc. that can be easily
controlled. The number of levels that the parameters should be varied at must be specified. For example, a
temperature might be varied to a low and high value of 40 °C and 80 °C. Increasing the number of levels to
vary a parameter at increases the number of experiments to be conducted.
Create orthogonal arrays for the parameter design indicating the number of and conditions for each
experiment. The selection of orthogonal arrays is based on the number of parameters and the levels of
variation for each parameter, and will be expounded below.
Conduct the experiments indicated in the completed array to collect data on the effect on the performance
measure.
Complete data analysis to determine the effect of the different parameters on the performance measure.

The Term 'Taguchi Methods'


The term 'Taguchi methods' is normally used to cover two related ideas. The first is that, by the
use of statistical methods concerned with the analysis of variance, experiments may be constructed which
enable identification of the important design factors responsible for degrading product performance. The
second (related) concept is that when judging the effectiveness of designs, the degree of degradation or
loss is a function of the deviation of any design parameter from its target value.
These ideas arise from development work undertaken by Dr Genichi Taguchi whilst working at the
Japanese telecommunications company NTT in the 1950s and 1960s. He attempted to use experimental
techniques to achieve both high quality and low-cost design solutions. He suggested that the design
process should be seen as three stages:
- systems design;
- parameter design; and
- tolerance design.

Systems design identifies the basic elements of the design, which will produce the desired output, such as
the best combination of processes and materials.

Parameter design determines the most appropriate, optimising set of parameters covering these design
elements by identifying the settings of each parameter which will minimise variation from the target
performance of the product.
Tolerance design finally identifies the components of the design which are sensitive in affecting the
quality of the product and establishes tolerance limits which will give the required level of variation in the
design.

Taguchi methodology emphasises the importance of the middle (parameter design) stage in the total
design process; a stage which is often neglected in industrial design practice. The methodology involves
the identification of those parameters which are under the control of the designer, and then the
establishment of a series of experiments to establish that subset of those parameters which has the greatest
influence on the performance and variation of the design. The designer thus is able to identify the
components of a design which most influence the desired outcome of the design process.

Taguchi Loss Function


The second related aspect of the Taguchi methodology - the Taguchi loss function or quality loss
function maintains that there is an increasing loss both for producers and for society at large, which is a
function of the deviation or variability from the ideal or target value of any design parameter. The greater
the deviation from target, the greater is the loss. The concept of loss being dependent on variation is well
established in design theory, and at a systems level is related to the benefits and costs associated with
dependability.

Variability inevitably means waste of some kind but operations managers also realise that it is impossible
to have zero variability. The common response has been to set not only a target level for performance but
also a range of tolerance about that target which represents acceptable performance. Thus if performance
falls anywhere within the range, it is regarded as acceptable, while if it falls outside that range it is not
acceptable.

The Taguchi methodology suggests that instead of this implied step function of acceptability, a more
realistic function is used based on the square of the deviation from the ideal target, i.e. that
customers/users get significantly more dissatisfied as performance varies from ideal.

This function, the quality loss function, is given by the expression:

L(y) = k (y − τ)^2

where
L(y) = the loss to society of a unit of output at value y,
τ = the ideal or target value, at which the loss is zero,
k = a constant.

The constant k in the loss function can be determined by considering the specification limits, i.e. the acceptable interval Δ around the target. If C is the loss incurred when the characteristic sits at the boundary of that interval, then

k = C / Δ^2

The difficulty in determining k is that τ and C are sometimes difficult to define.

If the goal is for the performance characteristic value to be minimized, the loss function is obtained by setting τ = 0:

L(y) = k y^2

If the goal is for the performance characteristic value to be maximized, the loss function is defined on the reciprocal of the characteristic:

L(y) = k / y^2

The loss functions described here are the loss to a customer from one product. By computing these loss functions, the overall loss to society can also be calculated.
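A minimal numeric sketch of these loss functions follows (the target, tolerance, and cost figures are hypothetical, chosen only to show the arithmetic):

```python
def loss_constant(cost_at_limit, delta):
    """k = C / Delta^2: C is the loss at the edge of the tolerance Delta."""
    return cost_at_limit / delta ** 2

def nominal_the_best_loss(y, target, k):
    """L(y) = k * (y - target)^2."""
    return k * (y - target) ** 2

def smaller_the_better_loss(y, k):
    """Target is zero: L(y) = k * y^2."""
    return k * y ** 2

def larger_the_better_loss(y, k):
    """Loss falls as y grows: L(y) = k / y^2."""
    return k / y ** 2

# Hypothetical part: target 10.0 mm, tolerance +/- 0.5 mm, and a $4 cost
# when the dimension sits exactly at a tolerance limit.
k = loss_constant(4.0, 0.5)                   # k = 16 $/mm^2
print(nominal_the_best_loss(10.2, 10.0, k))   # $0.64: inside spec, yet not loss-free
```

The last line illustrates the point of the quadratic loss: a part can be within specification and still impose a measurable loss.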

Determining Parameter Design Orthogonal Array


The effect of many different parameters on the performance characteristic in a condensed set of
experiments can be examined by using the orthogonal array experimental design proposed by Taguchi.
Once the parameters affecting a process that can be controlled have been determined, the levels at which
these parameters should be varied must be determined. Determining what levels of a variable to test
requires an in-depth understanding of the process, including the minimum, maximum, and current value of
the parameter. If the difference between the minimum and maximum value of a parameter is large, the
values being tested can be further apart or more values can be tested. If the range of a parameter is small, then fewer values can be tested or the values tested can be closer together. For example, if the temperature
of a reactor jacket can be varied between 20 and 80 degrees C and it is known that the current operating
jacket temperature is 50 degrees C, three levels might be chosen at 20, 50, and 80 degrees C. Also, the
cost of conducting experiments must be considered when determining the number of levels of a parameter
to include in the experimental design. In the previous example of jacket temperature, it would be cost
prohibitive to do 60 levels at 1 degree intervals. Typically, the number of levels for all parameters in the
experimental design is chosen to be the same to aid in the selection of the proper orthogonal array.

Knowing the number of parameters and the number of levels, the proper orthogonal array can be
selected. Using the array selector table shown below, the name of the appropriate array can be found by
looking at the column and row corresponding to the number of parameters and number of levels. Once the
name has been determined (the subscript represents the number of experiments that must be completed),
the predefined array can be looked up. Links are provided to many of the predefined arrays given in the
array selector table. These arrays were created using an algorithm Taguchi developed, which allows each variable and setting to be tested equally. Common standard arrays include:
L4: Three two-level factors
L8: Seven two-level factors
L9: Four three-level factors
L12: Eleven two-level factors
L16: Fifteen two-level factors
L16b: Five four-level factors
L18: One two-level and seven three-level factors
L25: Six five-level factors
L27: Thirteen three-level factors
L32: Thirty-one two-level factors
L32b: One two-level factor and nine four-level factors
L36: Eleven two-level factors and twelve three-level factors
L50: One two-level factor and eleven five-level factors
L54: One two-level factor and twenty-five three-level factors
L64: Sixty-three two-level factors
L64b: Twenty-one four-level factors
L81: Forty three-level factors
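As a small illustration of reading the selector table in code (a partial, hand-coded table covering only some of the all-same-level designs listed above, not an official library):

```python
# Partial selector: (levels per factor, columns available) -> array name.
ARRAY_SELECTOR = {
    (2, 3): "L4",  (2, 7): "L8",   (2, 11): "L12",  (2, 15): "L16",
    (3, 4): "L9",  (3, 13): "L27", (4, 5): "L16b",  (5, 6): "L25",
}

def select_array(levels, factors):
    """Return the smallest listed array with at least `factors` columns."""
    candidates = [(cols, name) for (lv, cols), name in ARRAY_SELECTOR.items()
                  if lv == levels and cols >= factors]
    if not candidates:
        raise ValueError("no listed array fits; consult a full selector table")
    return min(candidates)[1]

print(select_array(3, 4))   # L9: four three-level factors, nine runs
print(select_array(2, 5))   # L8: five factors fit in its seven two-level columns
```
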
Every experimenter develops a nominal process/product that has the desired functionality as
demanded by users. Beginning with these nominal processes, he wishes to optimize the processes/products
by varying the control factors at his disposal, such that the results are reliable and repeatable (i.e. show
less variation).
In Taguchi Method, the word "optimization" implies "determination of BEST levels of control factors". In
turn, the BEST levels of control factors are those that maximize the Signal-to-Noise ratios. The
Signal-to-Noise ratios are log functions of the desired output characteristics. The experiments conducted to determine the BEST levels are based on "Orthogonal Arrays"; they are balanced with respect to all control factors and yet are minimum in number. This in turn implies that the resources (materials and
time) required for the experiments are also minimum.
Taguchi method divides all problems into 2 categories - STATIC or DYNAMIC. While the
Dynamic problems have a SIGNAL factor, the Static problems do not have any signal factor. In Static
problems, the optimization is achieved by using 3 Signal-to-Noise ratios - smaller-the-better,
larger-the-better, and nominal-the-best. In Dynamic problems, the optimization is achieved by
using 2 Signal-to-Noise ratios - Slope and Linearity.
Taguchi Method is a process/product optimization method that is based on 8-steps of planning, conducting
and evaluating results of matrix experiments to determine the best levels of control factors. The primary
goal is to keep the variance in the output very low even in the presence of noise inputs. Thus, the
processes/products are made ROBUST against all variations.

Taguchi Method treats optimization problems in two categories,


Static Problems;
Generally, a process to be optimized has several control factors which directly decide the target or desired value of the output. The optimization then involves determining the best control factor levels so that the output is at the target value. Such a problem is called a "STATIC PROBLEM".
This is best explained using a P-Diagram which is shown below ("P" stands for Process or
Product). Noise is shown to be present in the process but should have no effect on the output! This
is the primary aim of the Taguchi experiments - to minimize variations in output even though
noise is present in the process. The process is then said to have become ROBUST.

Dynamic Problems;
If the product to be optimized has a signal input that directly decides the output, the
optimization involves determining the best control factor levels so that the "input signal / output"
ratio is closest to the desired relationship. Such a problem is called a "DYNAMIC PROBLEM".
This is best explained by a P-Diagram. Again, the primary aim of the Taguchi experiments - to
minimize variations in output even though noise is present in the process- is achieved by getting
improved linearity in the input/output relationship.

Static Problem (Batch Process Optimization)

Smaller-The-Better
n = -10 Log10 [ mean of sum of squares of measured data ]
This is usually the chosen S/N ratio for all undesirable characteristics like "defects" etc.
for which the ideal value is zero. Also, when an ideal value is finite and its maximum or minimum
value is defined (like maximum purity is 100% or maximum Tc is 92K or minimum time for
making a telephone connection is 1 sec) then the difference between measured data and ideal
value is expected to be as small as possible. The generic form of S/N ratio then becomes,
n = -10 Log10 [ mean of sum of squares of {measured - ideal} ]

Larger-The-Better
n = -10 Log10 [ mean of sum of squares of reciprocal of measured data ]
This case has been converted to SMALLER-THE-BETTER by taking the reciprocals of
measured data and then taking the S/N ratio as in the smaller-the-better case.

Nominal-The-Best

n = 10 Log10 [ (square of mean) / variance ]

This case arises when a specified value is MOST desired, meaning that neither a smaller
nor a larger value is desirable.

Examples are:
(i) Most parts in mechanical fittings have dimensions which are of the nominal-the-best type.
(ii) Ratios of chemicals or mixtures are of the nominal-the-best type,
e.g. aqua regia, 1:3 of HNO3:HCl;
the ratio of sulphur, KNO3, and carbon in gun powder.
(iii) Thickness should be uniform in deposition/growth/plating/etching.
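A compact sketch of the three static S/N ratios defined above (the measured data are hypothetical):

```python
import numpy as np

def sn_smaller_the_better(y):
    """n = -10 log10(mean of squares of measured data); ideal value is zero."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y ** 2))

def sn_larger_the_better(y):
    """Reciprocals convert the problem to smaller-the-better."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean((1.0 / y) ** 2))

def sn_nominal_the_best(y):
    """n = 10 log10(mean^2 / variance); penalizes spread about the mean."""
    y = np.asarray(y, dtype=float)
    return 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))

runs = [9.8, 10.1, 10.0, 9.9]                 # hypothetical repeated measurements
print(f"{sn_nominal_the_best(runs):.1f} dB")  # larger S/N = less relative spread
```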

Dynamic Problem (Technology Development)


In dynamic problems, we come across many applications where the output is supposed to follow
input signal in a predetermined manner. Generally, a linear relationship between input and output is desirable.

For example: accelerator pedal in cars, volume control in audio amplifiers, document copiers (with magnification or reduction), various types of moldings, etc.
There are 2 characteristics of common interest in "follow-the-leader" or "Transformations" type of
applications,
(i) Slope of the I/O characteristics, and
(ii) Linearity of the I/O characteristics
(minimum deviation from the best-fit straight line)

The Signal-to-Noise ratios for these 2 characteristics have been defined as follows:

Sensitivity (Slope)
The slope of I/O characteristics should be at the specified value (usually 1).
It is often treated as Larger-The-Better when the output is a desirable characteristic (as in the case of sensors, where the slope indicates the sensitivity).
n = 10 Log10 [ square of slope (beta) of the I/O characteristics ]
On the other hand, when the output is an undesired characteristic, it can be treated as Smaller-The-Better.
n = -10 Log10 [ square of slope (beta) of the I/O characteristics ]

Linearity (Larger-The-Better)
Most dynamic characteristics are required to have direct proportionality between the input and output. These applications are therefore called "TRANSFORMATIONS". The straight-line relationship between I/O must be truly linear, i.e. with as little deviation from the straight line as possible.
n = 10 Log10 [ (square of slope or beta) / variance ]
Variance in this case is the mean of the sum of squares of deviations of the measured data points from the best-fit straight line (linear regression).
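
As an illustration of the two dynamic ratios, the following Python sketch is ours: the input/output data are invented, and an ordinary least-squares line is used for the best fit (Taguchi practice sometimes constrains the fit, e.g. through a reference point, so treat this as a sketch under that assumption):

import numpy as np

# Hypothetical input signal M and measured output y for one run.
M = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.9])

# Best-fit straight line by least squares; beta is the slope.
beta, intercept = np.polyfit(M, y, 1)

# Variance = mean of squared deviations from the best-fit straight line.
residuals = y - (beta * M + intercept)
variance = np.mean(residuals ** 2)

sn_sensitivity = 10 * np.log10(beta ** 2)            # slope as larger-the-better
sn_linearity = 10 * np.log10(beta ** 2 / variance)   # (square of slope) / variance
print(beta, sn_sensitivity, sn_linearity)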

A Typical Orthogonal Array


While there are many standard orthogonal arrays available, each array is meant for a specific number of independent design variables and levels. For example, if one wants to conduct an experiment to understand the influence of 4 different independent variables, with each variable having 3 set values (level values), then an L9 orthogonal array might be the right choice. The L9 orthogonal array is meant for understanding the effect of 4 independent factors, each having 3 factor level values. This array assumes that there is no interaction between any two factors. While in many cases the no-interaction assumption is valid, there are some cases where there is clear evidence of interaction. A typical case of interaction would be the interaction between material properties and temperature.

The table below shows an L9 orthogonal array. There are nine experiments in total, and each experiment is based on the combination of level values shown in the table. For example, the third experiment is conducted by keeping independent design variable 1 at level 1, variable 2 at level 3, variable 3 at level 3, and variable 4 at level 3.
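
For reference, the standard L9(3^4) array is reproduced below (each entry is a level setting):

Experiment   Variable 1   Variable 2   Variable 3   Variable 4
1            1            1            1            1
2            1            2            2            2
3            1            3            3            3
4            2            1            2            3
5            2            2            3            1
6            2            3            1            2
7            3            1            3            2
8            3            2            1            3
9            3            3            2            1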

Properties Of An Orthogonal Array
Orthogonal arrays have the following special properties that reduce the number of experiments to be conducted.
1. The vertical column under each independent variable of the above table has a special combination of level settings. All the level settings appear an equal number of times. For the L9 array, under variable 4, level 1, level 2 and level 3 each appear three times. This is called the balancing property of orthogonal arrays.
2. All the level values of the independent variables are used for conducting the experiments.
3. The sequence of level values for conducting the experiments shall not be changed. This means one cannot conduct experiment 1 with the variable 1, level 2 setup and experiment 4 with the variable 1, level 1 setup. The reason for this is that the column of level settings for each factor is mutually orthogonal to every other column: the inner product of the vectors of corresponding weights is zero. If the above 3 levels are normalized between -1 and 1, then the weighting factors for level 1, level 2 and level 3 are -1, 0 and 1 respectively. Hence the inner product of the weighting factors of independent variable 1 and independent variable 3 would be zero.
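
This orthogonality is easy to verify numerically. A small Python sketch of ours, using the standard L9(3^4) array shown above:

import numpy as np

# Standard L9(3^4) array, with levels coded 1, 2, 3.
L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])

W = L9 - 2   # normalize levels 1, 2, 3 to weights -1, 0, +1
for i in range(4):
    for j in range(i + 1, 4):
        # the inner product of every pair of columns is zero
        assert np.dot(W[:, i], W[:, j]) == 0
print("every pair of columns is mutually orthogonal")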

Minimum Number Of Experiments To Be Conducted


The design of experiments using the orthogonal array is, in most cases, efficient when compared to many other statistical designs. The minimum number of experiments required for the Taguchi method can be calculated based on the degrees of freedom approach.

For example, in the case of a study of 8 independent variables, with 1 independent variable having 2 levels and the remaining 7 independent variables having 3 levels (L18 orthogonal array), the minimum number of experiments required equals the total degrees of freedom, 1 + (2 - 1) + 7 x (3 - 1) = 16. Because of the balancing property of the orthogonal arrays, the total number of experiments shall be a multiple of 2 and 3. Hence the number of experiments for the above case is 18.

Assumptions Of The Taguchi Method


The additive assumption implies that the individual or main effects of the independent variables on the performance parameter are separable. Under this assumption, the effect of each factor can be linear, quadratic or of higher order, but the model assumes that there exist no cross-product effects (interactions) among the individual factors. That means the effect of independent variable 1 on the performance parameter does not depend on the level settings of any other independent variable, and vice versa. If, at any time, this assumption is violated, then the additivity of the main effects does not hold, and the variables interact.

Selection Of The Independent Variables
Before conducting the experiment, the knowledge of the product/process under investigation is of
prime importance for identifying the factors likely to influence the outcome. In order to compile a
comprehensive list of factors, the input to the experiment is generally obtained from all the people
involved in the project.

Deciding The Number Of Levels


Once the independent variables are decided, the number of levels for each variable is decided. The selection of the number of levels depends on how the performance parameter is affected by different level settings. If the performance parameter is a linear function of the independent variable, then the number of level settings shall be 2. However, if the independent variable is not linearly related, then one could go for 3, 4 or higher levels, depending on whether the relationship is quadratic, cubic or of higher order.
In the absence of knowledge of the exact nature of the relationship between the independent variable and the performance parameter, one could choose 2 level settings. After analyzing the experimental data, one can decide whether the assumed number of levels is right or not, based on the percent contribution and the error calculations.

Selection Of An Orthogonal Array


Before selecting the orthogonal array, the minimum number of experiments to be conducted shall be fixed based on the total number of degrees of freedom [5] present in the study. The minimum number of experiments that must be run to study the factors shall be more than the total degrees of freedom available. In counting the total degrees of freedom, the investigator commits 1 degree of freedom to the overall mean of the response under study. The number of degrees of freedom associated with each factor under study equals one less than the number of levels available for that factor. Hence the total degrees of freedom without interaction effects is 1 plus the sum, over all factors, of (number of levels - 1). For example, in the case of 11 independent variables, each having 2 levels, the total degrees of freedom is 1 + 11 x (2 - 1) = 12. Hence the selected orthogonal array shall have at least 12 experiments. An L12 orthogonal array satisfies this requirement.
Once the minimum number of experiments is decided, the further selection of the orthogonal array is based on the number of independent variables and the number of factor levels for each independent variable.

Assigning The Independent Variables To Columns


The order in which the independent variables are assigned to the vertical columns is essential. In the case of mixed-level variables and interactions between variables, the variables are to be assigned to the right columns as stipulated by the orthogonal array.
Finally, before conducting the experiment, the actual level values of each design variable shall be decided. It shall be noted that the significance and the percent contribution of the independent variables change depending on the level values assigned. It is the designer's responsibility to set proper level values.

Conducting The Experiment
Once the orthogonal array is selected, the experiments are conducted as per the level
combinations. It is necessary that all the experiments be conducted. The interaction columns and dummy
variable columns shall not be considered for conducting the experiment, but are needed while analyzing
the data to understand the interaction effect. The performance parameter under study is noted down for
each experiment to conduct the sensitivity analysis.

Analysis of the data


Since each experiment is a combination of different factor levels, it is essential to segregate the individual effects of the independent variables. This can be done by summing up the performance parameter values for the corresponding level settings. For example, in order to find out the main effect of the level 1 setting of independent variable 2, sum the performance parameter values of experiments 1, 4 and 7. Similarly, for level 2, sum the experimental results of 2, 5 and 8, and so on.
Once the mean value of each level of a particular independent variable is calculated, the sum of squares of the deviations of each mean value from the grand mean value is calculated. This sum of square deviations for a particular variable indicates whether the performance parameter is sensitive to the change in level setting. If the sum of square deviations is close to zero or insignificant, one may conclude that the design variable is not influencing the performance of the process. In other words, by conducting the sensitivity analysis and performing analysis of variance (ANOVA), one can decide which independent factor dominates over the others and the percentage contribution of that particular independent variable.
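
As an illustration, the following Python sketch of ours performs exactly this bookkeeping for a three-factor L9 design, reusing as the response the shrinkage data from the case study reported later in this document (Table 4):

import numpy as np

# First three columns of the L9 array and the shrinkage responses (Table 4).
design = np.array([[1, 1, 1], [1, 2, 2], [1, 3, 3],
                   [2, 1, 2], [2, 2, 3], [2, 3, 1],
                   [3, 1, 3], [3, 2, 1], [3, 3, 2]])
y = np.array([4.26, 4.74, 4.62, 5.10, 4.98, 5.16, 4.86, 3.84, 4.26])

grand_mean = y.mean()
for var in range(design.shape[1]):
    level_means = [y[design[:, var] == lv].mean() for lv in (1, 2, 3)]
    ss = 3 * sum((m - grand_mean) ** 2 for m in level_means)  # 3 runs per level
    print(f"variable {var + 1}: level means {np.round(level_means, 3)}, SS = {ss:.4f}")

The level means printed for the first variable (4.540, 5.080, 4.320) reproduce Table 7 of the case study.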

Inference
From the above experimental analysis, it is clear that the higher the value of the sum of squares of an independent variable, the more influence it has on the performance parameter. One can also calculate the ratio of the individual sum of squares of a particular independent variable to the total sum of squares of all the variables. This ratio gives the percent contribution of the independent variable to the performance parameter.
In addition to the above, one can find a near-optimal solution to the problem. This near-optimum value may not be the global optimal solution. However, the solution can be used as an initial/starting value for a standard optimization technique.

Robust Design
A main cause of poor yield in manufacturing processes is manufacturing variation. These manufacturing variations include variation in temperature or humidity, variation in raw materials, and drift of process parameters. These sources of noise/variation are the variables that are impossible or expensive to control.
The objective of the robust design is to find the controllable process parameter settings for which noise or
variation has a minimal effect on the product's or process's functional characteristics. It is to be noted that
the aim is not to find the parameter settings for the uncontrollable noise variables, but the controllable
design variables. To attain this objective, the control parameters, also known as inner array variables, are
systematically varied as stipulated by the inner orthogonal array. For each experiment of the inner array, a
series of new experiments are conducted by varying the level settings of the uncontrollable noise
variables. The level combinations of noise variables are done using the outer orthogonal array.
The influence of noise on the performance characteristics can be found using the S/N ratio, where S is the standard deviation of the performance parameters for each inner-array experiment and N is the total number of experiments in the outer orthogonal array. This ratio indicates the functional variation due to noise. Using this result, it is possible to predict which control parameter settings will make the process insensitive to noise.
However, when the functional characteristics are not affected by external noises, there is no need to conduct the experiments using the outer orthogonal arrays. This is true in the case of experiments conducted using computer simulation, as the repeatability of computer-simulated experiments is very high.
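
The mechanics can be sketched in a few lines of Python (ours; the 9 x 4 response matrix below is random placeholder data standing in for real inner/outer array measurements):

import numpy as np

# Hypothetical results: 9 inner-array runs (control factors) evaluated under
# 4 outer-array noise conditions; results[i, j] is the response of inner run i
# under noise condition j.
rng = np.random.default_rng(seed=1)
results = 5.0 + rng.normal(scale=0.3, size=(9, 4))

# Smaller-the-better S/N per inner-array run, aggregated over noise replicates.
sn = -10 * np.log10(np.mean(results ** 2, axis=1))

# The run with the highest S/N is the least sensitive to noise.
best = int(np.argmax(sn)) + 1
print(np.round(sn, 2), "most robust inner-array run:", best)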

Advantages and Disadvantages


An advantage of the Taguchi method is that it emphasizes a mean performance characteristic
value close to the target value rather than a value within certain specification limits, thus improving the
product quality. Additionally, Taguchi's method for experimental design is straightforward and easy to
apply to many engineering situations, making it a powerful yet simple tool. It can be used to quickly
narrow down the scope of a research project or to identify problems in a manufacturing process from data
already in existence. Also, the Taguchi method allows for the analysis of many different parameters
without a prohibitively high amount of experimentation. For example, a process with 8 variables, each
with 3 states, would require 6561 (3^8) experiments to test all variable combinations. However, using Taguchi's orthogonal arrays, only 18 experiments are necessary, or less than 0.3% of the original number of
experiments. In this way, it allows for the identification of key parameters that have the most effect on the
performance characteristic value so that further experimentation on these parameters can be performed
and the parameters that have little effect can be ignored.
The main disadvantage of the Taguchi method is that the results obtained are only relative and do not exactly indicate which parameter has the highest effect on the performance characteristic value. Also, since orthogonal arrays do not test all variable combinations, this method should not be used when the relationships between all variables are needed. The Taguchi method has been criticized in the literature for
difficulty in accounting for interactions between parameters. Another limitation is that the Taguchi
methods are offline, and therefore inappropriate for a dynamically changing process such as a simulation
study. Furthermore, since Taguchi methods deal with designing quality in rather than correcting for poor
quality, they are applied most effectively at early stages of process development. After design variables
are specified, use of experimental design may be less cost effective.

ANALYSIS OF VARIANCE (ANOVA)

Analysis of variance (ANOVA) is an analysis tool used in statistics that splits an observed
aggregate variability found inside a data set into two parts: systematic factors and random factors. The
systematic factors have a statistical influence on the given data set, while the random factors do not.
Analysts use the ANOVA test to determine the influence that independent variables have on the dependent
variable in a regression study.
The t- and z-test methods developed in the 20th century were used for statistical analysis until 1918, when
Ronald Fisher created the analysis of variance method.
ANOVA is also called the Fisher analysis of variance, and it is the extension of the t- and z-tests.
The term became well-known in 1925, after appearing in Fisher's book, "Statistical Methods for Research
Workers." It was employed in experimental psychology and later expanded to subjects that were more
complex.
The formula for ANOVA is:
F = MST / MSE
where:
F = ANOVA coefficient
MST = mean sum of squares due to treatment
MSE = mean sum of squares due to error

What Does the Analysis of Variance Reveal?


The ANOVA test is the initial step in analyzing factors that affect a given data set. Once the test is
finished, an analyst performs additional testing on the methodical factors that measurably contribute to the
data set's inconsistency. The analyst utilizes the ANOVA test results in an f-test to generate additional data
that aligns with the proposed regression models.
The ANOVA test allows a comparison of more than two groups at the same time to determine whether a
relationship exists between them. The result of the ANOVA formula, the F statistic (also called the
F-ratio), allows for the analysis of multiple groups of data to determine the variability between samples
and within samples.
If no real difference exists between the tested groups, which is called the null hypothesis, the result of the
ANOVA's F-ratio statistic will be close to 1. Fluctuations in its sampling will likely follow the Fisher F distribution. This is actually a group of distribution functions, with two characteristic numbers, called the
numerator degrees of freedom and the denominator degrees of freedom.

- Analysis of variance, or ANOVA, is a statistical method that separates observed variance data
into different components to use for additional tests.
- A one-way ANOVA is used for three or more groups of data, to gain information about the
relationship between the dependent and independent variables.
- If no true variance exists between the groups, the ANOVA's F-ratio should equal close to 1.

ANOVA
An ANOVA test is a way to find out if survey or experiment results are significant. In other words, it helps you figure out whether you need to reject the null hypothesis or accept the alternate hypothesis.
The type of ANOVA test used depends on a number of factors. It is applied when data are experimental. Analysis of variance can be employed even when there is no access to statistical software, in which case ANOVA is computed by hand. It is simple to use and best suited for small samples. With many experimental designs, the sample sizes have to be the same for the various factor level combinations.
ANOVA is helpful for testing three or more variables. It is similar to multiple two-sample t-tests. However, it results in fewer type I errors and is appropriate for a range of issues. ANOVA groups differences by comparing the means of each group and includes spreading out the variance into diverse sources. It is employed with subjects, test groups, between groups and within groups.

There are two types of ANOVA: one-way (or unidirectional) and two-way.
One-way or two-way refers to the number of independent variables in your analysis of variance
test. A one-way ANOVA evaluates the impact of a sole factor on a sole response variable. It determines
whether all the samples are the same. The one-way ANOVA is used to determine whether there are any
statistically significant differences between the means of three or more independent (unrelated) groups.
A two-way ANOVA is an extension of the one-way ANOVA. With a one-way, you have one independent variable affecting a dependent variable. With a two-way ANOVA, there are two independent variables. For example, a two-way ANOVA allows a company to compare worker productivity based on two independent variables, such as salary and skill set. It is utilized to observe the interaction between the two factors and tests the effect of the two factors at the same time.
ANOVA is a generalization of the t-test to more than 2 groups, but it is more conservative (resulting in fewer type I errors) and hence suited to a wide range of practical applications.
“Classical” ANOVA for balanced data does three things at once:
1. As exploratory data analysis, an ANOVA employs an additive data decomposition, and its sums
of squares indicate the variance of each component of the decomposition (or, equivalently, each
set of terms of a linear model).
2. Comparisons of mean squares, along with an F-test… allow testing of a nested sequence of
models.
3. Closely related to the ANOVA is a linear model fit with coefficient estimates and standard errors.
In short, ANOVA is a statistical tool used in several ways to develop and confirm an explanation for the
observed data.
Additionally:
1. It is computationally elegant and relatively robust against violations of its assumptions.
2. ANOVA provides strong (multiple sample comparison) statistical analysis.
3. It has been adapted to the analysis of a variety of experimental designs.

Why ANOVA Instead Of Multiple T-Tests?


Before ANOVA, multiple t-tests were the only option available for comparing the population means of two or more groups.
As the number of groups increases, the number of two-sample t-tests also increases.
With an increase in the number of t-tests, the probability of making a type I error also increases.

Algorithm
1. State the null and alternative hypotheses
2. State alpha
3. Calculate the degrees of freedom
4. State the decision rule
5. Calculate the test statistic:
calculate the variance between samples,
calculate the variance within samples,
calculate the F-statistic; if the calculated F value > F table value, reject H0.
If F is significant, perform a post hoc test.

The test statistic of analysis of variance is:
F = MST / MSE = [SST / (r - 1)] / [SSE / (n - r)]
where SST is the sum of squares due to treatment, SSE is the sum of squares due to error, and r - 1 and n - r are the degrees of freedom in the numerator and denominator respectively (r groups, n total observations).
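
A minimal Python sketch of this computation (our illustration; the three groups of observations are hypothetical), cross-checked against SciPy's built-in one-way ANOVA:

import numpy as np
from scipy import stats

# Three hypothetical groups of observations (r = 3 groups, n = 9 in total).
groups = [np.array([4.2, 4.5, 4.3]),
          np.array([5.0, 5.2, 4.9]),
          np.array([4.4, 4.6, 4.5])]

all_y = np.concatenate(groups)
r, n = len(groups), all_y.size
grand = all_y.mean()

ss_treat = sum(g.size * (g.mean() - grand) ** 2 for g in groups)  # SST
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)       # SSE
F = (ss_treat / (r - 1)) / (ss_error / (n - r))                   # df = (r-1, n-r)

# Both printed values should agree.
print(F, stats.f_oneway(*groups).statistic)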

What is a One-Way ANOVA?


A one-way ANOVA is a type of statistical test that compares the variance in the group means
within a sample whilst considering only one independent variable or factor. It is a hypothesis-based test,
meaning that it aims to evaluate multiple mutually exclusive theories about our data. Before we can
generate a hypothesis, we need to have a question about our data that we want an answer to. For example,
adventurous researchers studying a population of walruses might ask “Do our walruses weigh more in
early or late mating season?” Here, the independent variable or factor (the two terms mean the same thing)
is “month of mating season”. In an ANOVA, our independent variables are organised in categorical
groups. For example, if the researchers looked at walrus weight in December, January, February and
March, there would be four months analyzed, and therefore four groups to the analysis.
A one-way ANOVA compares three or more than three categorical groups to establish whether there is a
difference between them. Within each group there should be three or more observations (here, this means
walruses), and the means of the samples are compared.

What Are The Hypotheses Of a One-Way ANOVA?


In a one-way ANOVA there are two possible hypotheses.
The null hypothesis (H0) is that there is no difference between the groups and equality between
means. (Walruses weigh the same in different months)
The alternative hypothesis (H1) is that there is a difference between the means and groups.
(Walruses have different weights in different months)

What Are The Assumptions Of a One-Way ANOVA?


Normality – That each sample is taken from a normally distributed population
Sample independence – that each sample has been drawn independently of the other samples
Variance Equality – That the variance of data in the different groups should be the same

Your dependent variable – here, “weight”, should be continuous – that is, measured on a scale
which can be subdivided using increments (i.e. grams, milligrams)

What is a Two-Way ANOVA?


A two-way ANOVA is, like a one-way ANOVA, a hypothesis-based test. However, in the two-way ANOVA each sample is defined in two ways, and is accordingly assigned to two categorical groups. Thinking again of our walruses, researchers might use a two-way ANOVA if their question is: "Are walruses heavier in early or late mating season, and does that depend on the gender of the walrus?" In this example, both "month in mating season" and "gender of walrus" are factors, meaning that in total there are two factors. Once again, each factor's number of groups must be considered: for "gender" there will be only two groups, "male" and "female". The two-way ANOVA therefore examines the effect of two factors (month and gender) on a dependent variable (in this case weight), and also examines whether the two factors affect each other to influence the continuous variable.

What Are The Assumptions Of A Two-Way ANOVA?


Your dependent variable – here, “weight”, should be continuous – that is, measured on a
scale which can be subdivided using increments (i.e. grams, milligrams)
Your two independent variables – here, “month” and “gender”, should be in categorical,
independent groups.
Sample independence – that each sample has been drawn independently of the other samples
Variance Equality – That the variance of data in the different groups should be the same
Normality – That each sample is taken from a normally distributed population

What Are The hypotheses Of A Two-Way ANOVA?


Because the two-way ANOVA considers the effect of two categorical factors, and the effect of the categorical factors on each other, there are three pairs of null and alternative hypotheses for the two-way ANOVA. Here, we present them for our walrus experiment, where month of mating season and gender are the two independent variables.
H0: The means of all month groups are equal
H1: The mean of at least one month group is different

H0: The means of the gender groups are equal


H1: The means of the gender groups are different

H0: There is no interaction between the month and gender
H1: There is interaction between the month and gender
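
A sketch of how these three hypothesis pairs could be tested in Python with statsmodels (our own illustration; the walrus weights are invented for the example):

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical walrus weights (kg) classified by month and gender.
df = pd.DataFrame({
    "month":  ["Dec", "Dec", "Jan", "Jan", "Dec", "Dec", "Jan", "Jan"],
    "gender": ["M", "M", "M", "M", "F", "F", "F", "F"],
    "weight": [1250.0, 1270.0, 1300.0, 1310.0, 900.0, 915.0, 940.0, 950.0],
})

# C() marks a categorical factor; '*' adds both main effects and their
# interaction, giving one F-test per hypothesis pair above.
model = ols("weight ~ C(month) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))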

Summary: Differences Between One-Way and Two-Way ANOVA


The key differences between one-way and two-way ANOVA are summarized clearly below.
A one-way ANOVA is primarily designed to enable equality testing between three or more means. A two-way ANOVA is designed to assess the interrelationship of two independent variables on a dependent variable.
A one-way ANOVA involves only one factor or independent variable, whereas there are two independent variables in a two-way ANOVA.
In a one-way ANOVA, the one factor or independent variable analyzed has three or more categorical groups. A two-way ANOVA instead compares multiple groups of two factors.
A one-way ANOVA needs to satisfy only two principles of design of experiments, i.e. replication and randomization, whereas a two-way ANOVA meets all three principles of design of experiments: replication, randomization and local control.

Uses of ANOVA
To test the significance between the variance of two samples.
To test correlation and regression.
To study the homogeneity in case of two-way classification.
To test the significance of the multiple correlation coefficient.
To test the linearity of regression.
Interpretation of the significance of means and their interactions.

Advantages and Disadvantages

Advantages:
It is an improved technique over the t-test and z-test.
It is suitable for multidimensional variables.
It allows analysis of various factors at a time.
It is an economical method of parametric testing.
It can be used with 3 or more groups.

Disadvantages:
It is difficult to analyze ANOVA under strict assumptions regarding the nature of the data.
Unlike the t-test, it offers no special interpretation of the significance of the difference between two means.
It requires a post-ANOVA t-test (post hoc test) for further testing.

Applications
Recommendation of a fertilizer against others for the improvement of crop yield.
ANOVA has immensely useful practical applications in business, particularly Lean-Six
Sigma/operational efficiency.
Comparing the gas mileage of different vehicles, or the same vehicle under different fuel types, or
road types.
Understanding the impact of temperature, pressure or chemical concentration on some chemical
reaction (power reactors, chemical plants, etc).
Understanding the impact of different catalysts on chemical reaction rates.
Studying whether advertisements of different kinds solicit different numbers of customer
responses.
Understanding the performance, quality or speed of manufacturing processes based on the number
of cells or steps they’re divided into.

SOURCES

Books

Design and Analysis of Experiments, Douglas C. Montgomery, Arizona State University, Eighth Edition.

Taguchi, G., El Sayed, M. & Hsaing, C. (1989). Quality engineering and production systems. New York: McGraw-Hill.

Articles

Design of experiments to optimize casting process of aluminum alloy 7075 in addition of TiO2 using Taguchi method; J. Jensin Joshua, A. Abraham Eben Andrews, Department of Aeronautical Engineering, Hindustan Institute of Technology and Science, India (Elsevier).
Application of Taguchi based Design of Experiments to Fusion Arc Weld Processes: A Review; Siva Prasad Kondapalli, Ch. Srinivasa Rao (ScienceDirect).
Using risk analysis and Taguchi's method to find optimal conditions of design parameters: A case study; The International Journal of Advanced Manufacturing Technology 27(5):445-454, January 2006; M. Nataraj, V. P. Arunachalam (ResearchGate).
Hypothesis Testing - Analysis of Variance (ANOVA); Lisa Sullivan, PhD, Professor of Biostatistics, Boston University School of Public Health.

Links

https://www.ee.iitb.ac.in/~apte/CV_PRA_TAGUCHI_INTRO.htm

https://eng.libretexts.org/Bookshelves/Industrial_and_Systems_Engineering/Book%3A_Chemical_Process_Dynamics_and_Controls_(Woolf)/14%3A_Design_of_Experiments/14.01%3A_Design_of_Experiments_via_Taguchi_Methods_-_Orthogonal_Arrays

https://www.ims-productivity.com/page.cfm/content/Taguchi-Methods/

https://www.itl.nist.gov/div898/handbook/pri/section5/pri56.htm

https://www.york.ac.uk/depts/maths/tables/orthogonal.htm

http://www.ecs.umass.edu/mie/labs/mda/fea/sankar/chap2.html

https://www.investopedia.com/terms/a/anova.asp

https://www.technologynetworks.com/informatics/articles/one-way-vs-two-way-anova-definition-differences-assumptions-and-hypotheses-306553

https://medium.com/@StepUpAnalytics/anova-one-way-vs-two-way-6b3ff87d3a94

https://www.statisticshowto.com/probability-and-statistics/hypothesis-testing/anova/

https://sphweb.bumc.bu.edu/otlt/mph-modules/bs/bs704_hypothesistesting-anova/bs704_hypothesistesting-anova_print.html

Software

Minitab

Course

COURSERA (coursera.org),

Experimental Design Basics by Arizona State University, Douglas C. Montgomery

Kocaeli University
Faculty of Engineering
Department of Mechanical Engineering

submitted to: Prof. Dr. Emel TABAN


Construction and Manufacturing Laboratory

Design of experiments to optimize casting process of aluminum alloy 7075 in addition of TiO2 using Taguchi method

J. Jensin Joshua, A. Abraham Eben Andrews


Department of Aeronautical Engineering, Hindustan Institute of Technology and Science, India

34

(16th Group)
Contents
ABSTRACT .............................................................................................................................................. 38
INTRODUCTION ..................................................................................................................................... 38
EXPERIMENTAL PROCEDURE ................................................................................................................. 39
MINITAB ................................................................................................................................................ 41
Design Summary ................................................................................................................................ 41
Response Table for Signal to Noise Ratios ........................................................................................ 42
Response Table for Means ................................................................................................................ 42
Regression Equation .......................................................................................................................... 44
Coefficients ........................................................................................................................................ 44
Model Summary ................................................................................................................................ 44
Analysis of Variance........................................................................................................................... 44
Fits and Diagnostics for Unusual Observations ................................................................................. 44
SIMULATED ALUMINIUM CASTING ....................................................................................................... 46
RESULT AND DISCUSSIONS .................................................................................................................... 46
CONCLUSION ......................................................................................................................................... 47

Figure 1 Dimensions of casting pattern. _______________________________________________________ 39
Figure 2 main effect plot for means. __________________________________________________________ 43
Figure 3 main effect plot for SN ratios. ________________________________________________________ 43
Figure 4 pareto chart of the standardized effects. _______________________________________________ 45
Figure 5 residual plots for shrinkage.__________________________________________________________ 45
Figure 6 simulated aluminium casting. ________________________________________________________ 46
Figure 7 optimal values ____________________________________________________________________ 46

Table 1. Chemical composition of AA7075. _______________________________________________________ 38
Table 2 selected parameters for optimization. ____________________________________________________ 39
Table 3 Taguchi orthogonal L9 array design. ______________________________________________ 40
Table 4 values obtained during experiments. _____________________________________________________ 41
Table 5 Design of Taguchi.____________________________________________________________________ 42
Table 6 response table for signal to noise ratios. __________________________________________________ 42
Table 7 response table for means. _____________________________________________ 42
Table 8 coefficients. _________________________________________________________________________ 44
Table 9 model summary. _____________________________________________________________________ 44
Table 10 analysis of variance. _________________________________________________________________ 44
Table 11 fits and diagnostic for unusual observations. _____________________________________________ 44

ABSTRACT

Design of Experiments is the design of any task that aims to describe the variation of data under conditions that are hypothesized to reflect that variation. In this research, we analyzed various significant process parameters of the casting of aluminium alloy 7075 with TiO2 addition. An experiment was conducted to obtain an optimal setting of the casting parameters to yield the optimum casting of aluminium alloy 7075. The process parameters considered are the melting temperature, the filling time of the mould and the solidification time. The effect of the selected casting parameters on the optimum cast of aluminium alloy 7075 is evaluated using the Taguchi method. The purpose of adding titanium dioxide is to further enhance physical properties of aluminium alloy 7075 such as hardness and wear resistance. The Taguchi method is used here to bring out the best possible casting settings given the above parameters. Minitab software is used to apply the Taguchi method.

INTRODUCTION
When products need to be designed to improve their quality, it is imperative to find a design solution that is robust to variation. Although it would be possible to achieve a robust design by using expensive components, good grade materials, or even by tightly controlling the process parameters, these solutions are rarely economically justifiable, so the Taguchi method is the most cost-effective method to be found in Design of Experiments. Dr. Genichi Taguchi developed several new statistical tools, techniques and concepts to improve product quality, which depend heavily on the statistical theory of design of experiments. Taguchi introduced his approach of using design of experiments for:
(i) designing a process to be robust to conditions and variation,
(ii) minimizing variation around the target value.

Component   Al          Cr          Cu      Fe       Mg       Mn       Si       Ti       Zn
Wt %        87.1–91.4   0.18–0.28   1.2–2   Max 0.5  2.1–2.9  Max 0.3  Max 0.4  Max 0.2  5.1–6.1

Table 1. Chemical composition of AA7075.

Central composite design (CCD) is a Design of Experiments optimization process, useful in response surface methodology, for creating a second-order quadratic mathematical model for the response parameter without needing to run a complete three-level factorial experiment. Although CCD is a powerful optimization process, we chose the Taguchi method because it is more economically justifiable.
Optimization is the action of making the best or most effective use of a situation or resource.
Aluminium is the world's most abundant metal. The versatility of aluminium makes it the most widely used metal after steel. Aluminium alloy 7075 has zinc as the primary alloying element. Its micro-segregation makes this alloy more susceptible to embrittlement than other aluminium alloys (Table 1). This alloy is used in aerospace applications like space shuttle SRB nozzles and the external tank SRB beam in the inter-tank section because of its excellent mechanical properties: excellent ductility, high strength, toughness and good resistance to fatigue.
EXPERIMENTAL PROCEDURE
In a sand casting process, an optimization technique is used to eliminate errors; in open-mould aluminium casting, shrinkage is the biggest issue we face. We have to select the important process parameters.
The selected casting parameters are:
• Pouring Temperature
• Pouring Time
• Solidification Time
These parameters help in obtaining a cast with a proper finish and minimal shrinkage. Fig. 1 presents the dimensions of the casting pattern used in the experimental procedure. The test pattern was a rectangular plate with dimensions of 30 x 25 x 10 cm. Castings for each trial condition were made using a randomization technique.

Figure 1 Dimensions of casting pattern.

A box furnace is used to melt aluminium alloy 7075, which is then poured into the sand mould at the selected parameter levels (Table 2). For optimization we chose the L9 Taguchi orthogonal array design, which accommodates the 3 factors in 9 runs (Table 3).

Designation   Process Parameter           Range     Level 1   Level 2   Level 3
A             Pouring Temperature (°C)    670–750   690       710       730
B             Pouring Time (sec)          1.2–1.6   1.2       1.4       1.6
C             Solidification Time (sec)   6–9       6.2       7.5       8.1

Table 2 selected parameters for optimization.

S.NO   A   B   C
1      1   1   1
2      1   2   2
3      1   3   3
4      2   1   2
5      2   2   3
6      2   3   1
7      3   1   3
8      3   2   1
9      3   3   2

Table 3 Taguchi orthogonal L9 array design.

The range of each process parameter is fixed as follows: the aluminium melting point ranges from 670 to 750 °C, which is taken as the pouring temperature; the pouring time of the molten material into the mould is 1.2 to 1.6 sec; and the solidification time of AA7075 is 6 to 9 sec. From these ranges we fix three levels for the optimization. For the optimization, the process parameters pouring temperature, pouring time and solidification time are taken. A response parameter is needed for finding the influence of the process parameters on it (Table 4). The response parameter is taken as the shrinkage percentage. This process is done using the MINITAB software, where the design of experiments is carried out using the Taguchi method.

TRIAL   Pouring Temperature   Pouring Time   Solidification Time   Shrinkage
1       690                   1.2            6.2                   4.26
2       690                   1.4            7.5                   4.74
3       690                   1.6            8.1                   4.62
4       710                   1.2            7.5                   5.10
5       710                   1.4            8.1                   4.98
6       710                   1.6            6.2                   5.16
7       730                   1.2            8.1                   4.86
8       730                   1.4            6.2                   3.84
9       730                   1.6            7.5                   4.26

Table 4 values obtained during experiments.

Here, orthogonality is meant in the combinatorial sense: for any pair of columns, all combinations of factor levels occur (Fig. 2, Fig. 3), and they occur an equal number of times. This is called the balancing property, and it implies that every level of every factor is weighted equally across the whole experiment.

MINITAB

In this case, 3 controllable factors are used, and the measured response, shrinkage, captures the effect of noise. With the Taguchi method, a 3x3 orthogonal array is designed in the Minitab software: the first 3 is the number of levels per factor, and the second 3 is the number of factors. A full factorial would actually require 3^3 = 27 experiments for a precise result, which may cause high cost, waste of time, and non-standard processes or products.

Design Summary
Taguchi L9(3^3)
Array
Factors: 3
Runs: 9

Columns of L9(3^4) array: 1 2 3

Pouring Temp. (°C) Pouring Time (sec) Solidific. Time (sec) Shrinkage SNRA2 MEAN2
690 1,2 6,2 4,26 -12,5882 4,26
690 1,4 7,5 4,74 -13,5156 4,74
690 1,6 8,1 4,62 -13,2928 4,62
710 1,2 7,5 5,10 -14,1514 5,10
710 1,4 8,1 4,98 -13,9446 4,98
710 1,6 6,2 5,16 -14,2530 5,16
730 1,2 8,1 4,86 -13,7327 4,86
730 1,4 6,2 3,84 -11,6866 3,84
730 1,6 7,5 4,26 -12,5882 4,26
Table 5 Design of Taguchi.
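
The SNRA2 column can be checked against the smaller-the-better formula from the Taguchi section above: with a single observation per run, the ratio reduces to -10 log10(y^2). A short Python sketch of ours:

import numpy as np

shrinkage = np.array([4.26, 4.74, 4.62, 5.10, 4.98, 5.16, 4.86, 3.84, 4.26])
sn = -10 * np.log10(shrinkage ** 2)
print(np.round(sn, 4))   # e.g. -12.5882 for y = 4.26, matching SNRA2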

Response Table for Signal to Noise Ratios

Smaller is better
Level   Pouring Temperature (°C)   Pouring Time (sec)   Solidification Time (sec)
1       -13,13                     -13,49               -12,84
2       -14,12                     -13,05               -13,42
3       -12,67                     -13,38               -13,66
Delta   1,45                       0,44                 0,81
Rank    1                          3                    2

Table 6 response table for signal to noise ratios.

Response Table for Means


Level   Pouring Temperature (°C)   Pouring Time (sec)   Solidification Time (sec)
1       4,540                      4,740                4,420
2       5,080                      4,520                4,700
3       4,320                      4,680                4,820
Delta   0,760                      0,220                0,400
Rank    1                          3                    2
Table 7 response table for means.

Figure 2 main effect plot for means.

Figure 3 main effect plot for SN ratios.

Regression Equation
Shrinkage = 7,23 - 0,0055 Pouring Temperature (°C)
- 0,15 Pouring Time (sec)
+ 0,211 Solidification Time (sec)

Coefficients
Term                        Coef      SE Coef   T-Value   P-Value   VIF
Constant                    7,23      7,56      0,96      0,383
Pouring Temperature (°C)    -0,0055   0,0102    -0,54     0,614     1,00
Pouring Time (sec)          -0,15     1,02      -0,15     0,889     1,00
Solidification Time (sec)   0,211     0,211     1,00      0,362     1,00
Table 8 coefficients.

Model Summary
S R-sq R-sq(adj) R-sq(pred)
0,500815 20,87% 0,00% 0,00%
Table 9 model summary.

Analysis of Variance
Source DF Adj SS Adj MS F-Value P-Value
Regression 3 0,33072 0,110241 0,44 0,735
Pouring Temperature (°C) 1 0,07260 0,072600 0,29 0,614
Pouring Time (sec) 1 0,00540 0,005400 0,02 0,889
Solidification Time (sec) 1 0,25272 0,252724 1,01 0,362
Error 5 1,25408 0,250815
Total 8 1,58480
Table 10 analysis of variance.

Fits and Diagnostics for Unusual Observations


Obs Shrinkage Fit Resid Std Resid
6 5,160 4,391 0,769 2,13 R

R Large residual
Table 11 fits and diagnostics for unusual observations.
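
For readers without Minitab, the regression in Tables 8-10 can be reproduced with an ordinary least-squares fit in Python (a sketch of ours; statsmodels is assumed to be available, and the column names are our own shorthand):

import pandas as pd
import statsmodels.formula.api as smf

# The nine runs from Table 4.
df = pd.DataFrame({
    "temp":   [690, 690, 690, 710, 710, 710, 730, 730, 730],
    "time":   [1.2, 1.4, 1.6, 1.2, 1.4, 1.6, 1.2, 1.4, 1.6],
    "solid":  [6.2, 7.5, 8.1, 7.5, 8.1, 6.2, 8.1, 6.2, 7.5],
    "shrink": [4.26, 4.74, 4.62, 5.10, 4.98, 5.16, 4.86, 3.84, 4.26],
})

model = smf.ols("shrink ~ temp + time + solid", data=df).fit()
print(model.summary())   # coefficients and p-values should track Tables 8-10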

Figure 4 pareto chart of the standardized effects.

Figure 5 residual plots for shrinkage.

SIMULATED ALUMINIUM CASTING

Figure 6 simulated aluminium casting.

Figure 7 optimal values

RESULT AND DISCUSSIONS

This reflects that, by using the Taguchi method, the optimized factor levels will result in a reduction of casting defects and an increase in the yield percentage of accepted castings without any additional investment. The use of quality tools like the Pareto chart is useful for finding the major defects in the daily operations of a foundry. The quality of castings can be improved through aesthetic look, dimensional accuracy, better understanding of noise factors and interactions between variables, a quality cost system based on the individual product, scrap reduction, reworking of castings and process control.

CONCLUSION

The result of this project is the optimized process parameters of the aluminium casting process, which result in minimum defects. The optimum process parameter levels are a pouring temperature of 690 °C, a pouring time of 1.2 sec and a solidification time of 8.1 sec. Also, the experiments give a comprehensible picture of the contribution of all the factors considered to the variation in the amount of shrinkage present in the casting; thus the quality can be improved without further investment. The obtained optimal values of the casting parameters can be used in future for getting good-quality open-mould aluminium castings.

SOURCES

Design of experiments to optimize casting process of aluminum alloy 7075 in addition of TiO2 using Taguchi method

J. Jensin Joshua, A. Abraham Eben Andrews


Department of Aeronautical Engineering, Hindustan Institute of Technology and Science, India

MINITAB software was also used.
