Lecture 1

This document provides an overview of the Design of Experiments (DOE) course TPQ 3323 / TMS 3433. The course deals with concepts and techniques for designing and analyzing experiments. It will cover topics like completely randomized designs, blocking designs, confounding, and fractional factorial designs. Students will learn CLO1-CLO4, including describing experimental design concepts and applying them to create a scientific experiment. The course will be assessed through weekly quizzes and a final exam. The course outline presents chapters covering simple comparative experiments, single-factor ANOVA, randomized blocks, factorial designs, confounding, and fractional factorial designs.


TPQ 3323 / TMS 3433

Design of Experiments
DR IQBAL SHAMSUDHEEN
JULY – SEPTEMBER 2023

Housekeeping Stuff

Course Synopsis
• This course deals with the concepts and techniques used in the design and analysis of experiments.
• The concepts and different models of an experimental design will be studied, leading to hands-on experience by applying various techniques in scientific research.
• Topics covered will include an introduction to experiments, completely randomised designs, blocking designs, confounding, and fractional designs with two levels.

Housekeeping Stuff

Course Learning Outcome
• CLO1: Describe the concepts of experimental design.
• CLO2: Determine the key factor in the process.
• CLO3: Apply the concepts in creating a designed experiment including randomisation, blocking, and replication.
• CLO4: Design and complete their own scientific experiment, interpret statistical results from an experiment, and report them in non-technical language.

Housekeeping Stuff

Method of Evaluation
• Since this is a short semester with 7 weeks in total, I will be conducting one quiz per week reflecting the topic we learned in that particular week. Therefore, you will have 5 different quizzes for a total coursework mark of 60%.

Assessment    Percentage (%)
Quizzes       60
Final Exam    40
Total         100

• This is to ensure that you keep up with the fast-paced lectures and test your understanding every week.

Course Outline
• Chapter 1: Simple Comparative Experiments
• Chapter 2: Experiment with a Single Factor: ANOVA
• Chapter 3: Randomized Blocks, Latin Squares and Related Designs
• Chapter 4: Factorial Designs and the 2^k Factorial Design
• Chapter 5: Confounding
• Chapter 6: Two-Level Fractional Factorial Designs

References
Main Reference
• Montgomery, D. C. (2020). Design and Analysis of Experiments, 10th Edition, John Wiley & Sons.
If you would like an electronic copy of this book, please request it in the Telegram group.

Introduction to Design of Experiments (DOE)
• THE SCIENTIFIC METHOD
• A QUICK HISTORY OF DOE
What is the Scientific Method?

• Do you remember learning about this back in high school or junior high even? What were those steps again?
• Decide what phenomenon you wish to investigate. Specify how you can manipulate the factor and hold all other conditions fixed, to ensure that these extraneous conditions aren't influencing the response you plan to measure.
• Then measure your chosen response variable at several (at least two) settings of the factor under study. If changing the factor causes the phenomenon to change, then you conclude that there is indeed a cause-and-effect relationship at work.
• How many factors are involved when you do an experiment? Some say two - perhaps this is a comparative experiment? Perhaps there is a treatment group and a control group? If you have a treatment group and a control group then, in this case, you probably only have one factor with two levels.
What is the Scientific Method?

• How many of you have baked a cake? What are the factors involved to ensure a successful cake? Factors might include preheating the oven, baking time, ingredients, amount of moisture, baking temperature, etc. - what else? You probably follow a recipe, so there are many additional factors that control the ingredients - i.e., a mixture. In other words, someone did the experiment in advance! What parts of the recipe did they vary to make the recipe a success? Probably many factors: temperature and moisture, various ratios of ingredients, and presence or absence of many additives. Now, should one keep all the factors involved in the experiment at a constant level and just vary one to see what would happen? This is a strategy that works but is not very efficient. This is one of the concepts that we will address in this course.
A Quick History of DOE

"All experiments are designed experiments; it is just that some are poorly designed and some are well-designed."
• If we had infinite time and resource budgets there probably wouldn't be a big fuss made over designing experiments. In production and quality control we want to control the error and learn as much as we can about the process or the underlying theory with the resources at hand. From an engineering perspective we're trying to use experimentation for the following purposes:
  • reduce time to design/develop new products & processes
  • improve performance of existing processes
  • improve reliability and performance of products
  • achieve product & process robustness
  • perform evaluation of materials, design alternatives, setting component & system tolerances, etc.
A Quick History of DOE

• We always want to fine-tune or improve the process. In today's global world this drive for competitiveness affects all of us, both as consumers and producers.
• Every experiment design has inputs. Back to the cake baking example: we have our ingredients such as flour, sugar, milk, eggs, etc. Regardless of the quality of these ingredients we still want our cake to come out successfully. In every experiment there are inputs and, in addition, there are factors (such as time of baking, temperature, geometry of the cake pan, etc.), some of which you can control and others that you can't control. The experimenter must think about factors that affect the outcome. We also talk about the output and the yield or the response to your experiment. For the cake, the output might be measured as texture, flavour, height, or size.
A Quick History of DOE

Here's a quick timeline:
• The agricultural origins, 1918 – 1940s
  • R. A. Fisher & his co-workers
  • Profound impact on agricultural science
  • Factorial designs, ANOVA
• The first industrial era, 1951 – late 1970s
  • Box & Wilson, response surfaces
  • Applications in the chemical & process industries
• The second industrial era, late 1970s – 1990
  • Quality improvement initiatives in many companies
  • CQI and TQM were important ideas and became management goals
  • Taguchi and robust parameter design, process robustness
• The modern era, beginning circa 1990, when economic competitiveness and globalization are driving all sectors of the economy to be more competitive.
A Quick History of DOE

• A lot of what we are going to learn in this course goes back to what Sir Ronald Fisher developed in the UK in the first half of the 20th century. He really laid the foundation for statistics and for design of experiments. He and his colleague Frank Yates developed many of the concepts and procedures that we use today. Basic concepts such as orthogonal designs and Latin squares began there in the '20s through the '40s. World War II also had an impact on statistics, inspiring sequential analysis, which arose as a method to improve the accuracy of long-range artillery guns.
• Immediately following World War II, the first industrial era marked another resurgence in the use of DOE. It was at this time that Box and Wilson (1951) wrote the key paper on response surface designs, thinking of the output as a response function and trying to find the optimum conditions for this function. George Box died early in 2013. An interesting fact here: he married Fisher's daughter! He worked in the chemical industry in England in his early career and then came to America and worked at the University of Wisconsin for most of his career.
A Quick History of DOE

• The Second Industrial Era - or the Quality Revolution
• The importance of statistical quality control was taken to Japan in the 1950s by W. Edwards Deming. This started what Montgomery calls the second industrial era, sometimes called the quality revolution. After the Second World War, Japanese products were of terrible quality. They were cheaply made and not very good. In the 1960s their quality started improving. The Japanese car industry adopted statistical quality control procedures and conducted experiments, which started this new era. Total Quality Management (TQM) and Continuous Quality Improvement (CQI) are management techniques that have come out of this statistical quality revolution - statistical quality control and design of experiments.
• Taguchi, a Japanese engineer, discovered and published a lot of the techniques that were later brought to the West, using an independent development of what he referred to as orthogonal arrays. In the West, these were referred to as fractional factorial designs. These are very similar, and we will discuss both of them in this course. He came up with the concepts of robust parameter design and process robustness.
A Quick History of DOE

The Modern Era
• Around 1990, Six Sigma, a new way of representing CQI, became popular. It has since been commercialised, and the technique has been adopted by many of the large manufacturing companies. This is a technique that uses statistics to make decisions based on quality and feedback loops. It incorporates a lot of previous statistical and management techniques.
Clinical Trials
• Montgomery omits from this brief history a major part of design of experimentation that evolved: clinical trials. These evolved in the 1960s; previously, medical advances were based on anecdotal data - a doctor would examine six patients, and from this write a paper and publish it. The incredible biases resulting from these kinds of anecdotal studies became known. The outcome was a move toward making the randomized double-blind clinical trial the gold standard for approval of any new product, medical device, or procedure. The scientific application of statistical procedures became very important.

Introduction to Design of Experiments (DOE)
• EXPERIMENTS
• THE BASIC PRINCIPLES OF DOE
• STEPS FOR PLANNING, CONDUCTING AND ANALYSING AN EXPERIMENT
Why Experiment?

• This course deals with comparative experiments to compare different treatments. We compare treatments by using them and analysing the outcome.
• Specifically, we apply the treatments to experimental units and then measure one or more responses.
• An experiment is characterised by the treatments and experimental units to be used, the way treatments are assigned to units, and the responses that are measured.
Advantages of Experiments

• What is so special about experiments? Consider that:
  • Experiments allow us to set up a direct comparison between the treatments of interest.
  • We can design experiments to minimise any bias in the comparison.
  • We can design experiments so that the error in the comparison is small.
  • Most important, we are in control of experiments, and having that control allows us to make stronger inferences about the nature of differences that we see in the experiment. Specifically, we may make inferences about causation.
• This last point distinguishes an experiment from an observational study. An observational study also has treatments, units, and responses. However, in the observational study we merely observe which units are in which treatment groups; we don't get to control that assignment.
Observational Studies

• The drawback of observational studies is that the grouping into "treatments" is not under the control of the experimenter and its mechanism is usually unknown. Thus observed differences in responses between treatment groups could very well be due to these other hidden mechanisms, rather than the treatments themselves.
• It is important to say that while experiments have some advantages, observational studies are also useful and can produce important results. For example, studies of smoking and human health are observational, but the link that they have established is one of the most important public health issues today. Similarly, observational studies established an association between heart valve disease and the diet drug fen-phen that led to the withdrawal of the drugs fenfluramine and dexfenfluramine from the market (Connolly et al. 1997 and US FDA 1997).
Components of an Experiment

• An experiment has treatments, experimental units, responses, and a method to assign treatments to units.
• Treatments, units, and assignment method specify the experimental design.
• Note that there is no mention of a method for analysing the results. Strictly speaking, the analysis is not part of the design, though a wise experimenter will consider the analysis when planning an experiment.
• Whereas the design determines the proper analysis to a great extent, we will see that two experiments with similar designs may be analysed differently, and two experiments with different designs may be analysed similarly. Proper analysis depends on the design and the kinds of statistical model assumptions we believe are correct and are willing to assume.
Components of an Experiment

• Not all experimental designs are created equal. A good experimental design must:
  • Avoid systematic error
  • Be precise
  • Allow estimation of error
  • Have broad validity
Avoid Systematic Error

• Comparative experiments estimate differences in response between treatments. If our experiment has systematic error, then our comparisons will be biased, no matter how precise our measurements are or how many experimental units we use.
• For example, if responses for units receiving treatment one are measured with instrument A, and responses for treatment two are measured with instrument B, then we don't know if any observed differences are due to treatment effects or instrument miscalibrations.
• Randomisation, as will be discussed in Chapter 2, is our main tool to combat systematic error.
Be Precise

• Even without systematic error, there will be random error in the responses, and this will lead to random error in the treatment comparisons. Experiments are precise when this random error in treatment comparisons is small. Precision depends on the size of the random errors in the responses, the number of units used, and the experimental design used.
Allow Estimation of Error

• Experiments must be designed so that we have an estimate of the size of random error. This permits statistical inference: for example, confidence intervals or tests of significance. We cannot do inference without an estimate of error. Sadly, experiments that cannot estimate error continue to be run.
Have Broad Validity

• The conclusions we draw from an experiment are applicable to the experimental units we used in the experiment. If the units are actually a statistical sample from some population of units, then the conclusions are also valid for the population.
• Beyond this, we are extrapolating, and the extrapolation might or might not be successful. For example, suppose we compare two different drugs for treating attention deficit disorder. Our subjects are preadolescent boys from our clinic. We might have a fair case that our results would hold for preadolescent boys elsewhere, but even that might not be true if our clinic's population of subjects is unusual in some way. The results are even less compelling for older boys or for girls.
• Thus if we wish to have wide validity - for example, broad age range and both genders - then our experimental units should reflect the population about which we wish to draw inference.
The Basic Principles of DOE

Randomisation
• This is an essential component of any experiment that is going to have validity. If you are doing a comparative experiment where you have two treatments, a treatment and a control, for instance, you need to include in your experimental process the assignment of those treatments by some random process. An experiment includes experimental units. You need to have a deliberate process to eliminate potential biases from the conclusions, and random assignment is a critical step.
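Random assignment is easy to script. The sketch below is not from the course materials; the unit labels and group names are made up purely for illustration. It shuffles 20 hypothetical experimental units into two equal treatment groups using only Python's standard library:

```python
import random

def randomise(units, treatments):
    """Randomly assign units to treatments in equal-sized groups
    (a completely randomised design)."""
    assert len(units) % len(treatments) == 0
    shuffled = list(units)           # copy so the caller's sequence is untouched
    random.shuffle(shuffled)
    size = len(shuffled) // len(treatments)
    return {t: shuffled[i * size:(i + 1) * size]
            for i, t in enumerate(treatments)}

random.seed(1)                       # fixed seed for a reproducible illustration
plan = randomise(range(1, 21), ["treatment", "control"])
print(plan)                          # 10 units per group, chosen by chance alone
```

Any systematic pattern in how units arrive (time of day, batch, technician) is broken up by the shuffle, which is exactly the protection against hidden bias described above.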
The Basic Principles of DOE

Replication
• Replication is in some sense the heart of all of statistics. To make this point... Remember what the standard error of the mean is? It is the square root of the estimated variance of the sample mean, i.e., √(s²/n) = s/√n. The width of the confidence interval is determined by this statistic. Our estimates of the mean become less variable as the sample size increases.
• Replication is the basic issue behind every method we will use in order to get a handle on how precise our estimates are at the end. We always want to estimate or control the uncertainty in our results. We achieve this estimate through replication. Another way we can achieve short confidence intervals is by reducing the error variance itself. However, when that isn't possible, we can reduce the error in our estimate of the mean by increasing n.
• Another way to reduce the length of the confidence interval is to reduce the error variance - which brings us to blocking.
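As a quick numerical illustration of the s/√n formula (the data below are simulated, not from any real experiment), the sketch shows the estimated standard error shrinking as the sample size grows:

```python
import math
import random
import statistics

def standard_error(sample):
    """Estimated standard error of the sample mean: s / sqrt(n)."""
    return statistics.stdev(sample) / math.sqrt(len(sample))

random.seed(0)                        # fixed seed for a reproducible illustration
for n in (10, 40, 160):
    sample = [random.gauss(50, 5) for _ in range(n)]   # hypothetical responses
    print(n, round(standard_error(sample), 2))
# Quadrupling n roughly halves the standard error - the 1/sqrt(n) effect
# that makes replication pay off.
```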
The Basic Principles of DOE

Blocking
• Blocking is a technique to include other factors in our experiment which contribute to undesirable variation. Much of the focus in this class will be to creatively use various blocking techniques to control sources of variation that will reduce error variance.
• For example, in human studies, the gender of the subjects is often an important factor. Age is another factor affecting the response. Age and gender are often considered nuisance factors which contribute to variability and make it difficult to assess systematic effects of a treatment. By using these as blocking factors, you can avoid biases that might occur due to differences in the allocation of subjects to the treatments, and account for some of the noise in the experiment.
• We want the unknown error variance at the end of the experiment to be as small as possible. Our goal is usually to find out something about a treatment factor (or a factor of primary interest), but in addition to this, we want to include any blocking factors that will explain variation.
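A toy simulation can make the point concrete. In the sketch below all effect sizes are invented for illustration: the response carries a true treatment effect of 2 and a much larger gender (block) effect of 10, yet the gender effect cancels out of every within-block difference, leaving a clean estimate of the treatment effect:

```python
import random
import statistics

random.seed(2)                        # fixed seed for a reproducible illustration

# Hypothetical model: treatment effect 2, nuisance gender effect 10, unit noise.
def response(treated, male):
    return 2 * treated + 10 * male + random.gauss(0, 1)

# Blocked comparison: treatment vs. control within each gender block,
# so the 10-unit gender effect subtracts out of every difference.
diffs = [response(1, male) - response(0, male) for male in [0, 1] * 10]
print(round(statistics.mean(diffs), 2))   # close to the true effect, 2
```

Without the blocking, the gender effect would sit inside the error variance and swamp the treatment signal; inside blocks, only the unit noise remains.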
The Basic Principles of DOE

Multi-factor Designs
• We will spend a big part of this course talking about multi-factor experimental designs: 2^k designs, 3^k designs, response surface designs, etc.
• The point of all these multi-factor designs runs contrary to the textbook scientific method, where everything is held constant except one factor, which is varied. The one-factor-at-a-time method is a very inefficient way of making scientific advances.
• It is much better to design an experiment that simultaneously includes combinations of multiple factors that may affect the outcome. Then you learn not only about the primary factors of interest but also about these other factors. These may be blocking factors which deal with nuisance parameters, or they may just help you understand the interactions or the relationships between the factors that influence the response.
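As an illustration, the snippet below enumerates a 2^3 full factorial design; the factor names and levels are hypothetical, borrowed from the cake-baking example rather than from any real study:

```python
from itertools import product

# Hypothetical cake-baking factors, two levels each.
factors = {
    "temperature": [160, 180],    # degrees C
    "time": [30, 40],             # minutes
    "moisture": ["low", "high"],
}

# A 2^3 full factorial: every combination of levels, 8 runs in all.
# A one-factor-at-a-time plan varying each factor from a baseline would
# use fewer runs but could never reveal how the factors interact.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for run in runs:
    print(run)
```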
The Basic Principles of DOE

Confounding
• Confounding is something that is usually considered bad! Here is an example. Let's say we are doing a medical study with drugs A and B. We put 10 subjects on drug A and 10 on drug B. If we categorize our subjects by gender, how should we allocate our drugs to our subjects? Let's make it easy and say that there are 10 male and 10 female subjects. A balanced way of doing this study would be to put five males on drug A and five males on drug B, five females on drug A and five females on drug B. This is a perfectly balanced experiment such that if there is a difference between male and female, at least it will equally influence the results from drug A and the results from drug B.
The Basic Principles of DOE

• An alternative scenario might occur if patients were randomly assigned treatments as they came in the door. At the end of the study, they might realize that drug A had only been given to the male subjects and drug B only to the female subjects. We would call this design totally confounded. This refers to the fact that if you analyse the difference between the average response of the subjects on A and the average response of the subjects on B, this is exactly the same as the difference between the average response of males and the average response of females. You would not have any reliable conclusion from this study at all. The difference between the two drugs A and B might just as well be due to the gender of the subjects, since the two factors are totally confounded.
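The balanced and totally confounded allocations just described can be checked in code. In the sketch below (the subject labels are invented), a simple cross-tabulation exposes the problem: the confounded plan has empty gender-by-drug cells, so drug and gender effects cannot be separated:

```python
from collections import Counter

males = [f"M{i}" for i in range(10)]      # hypothetical subject labels
females = [f"F{i}" for i in range(10)]

# Balanced allocation: within each gender, five subjects on each drug.
balanced = {s: "A" for s in males[:5] + females[:5]}
balanced.update({s: "B" for s in males[5:] + females[5:]})

# Totally confounded allocation: drug A only to males, B only to females.
confounded = {s: "A" for s in males}
confounded.update({s: "B" for s in females})

def gender_by_drug(assignment):
    """Cross-tabulate (gender, drug); an empty cell signals confounding."""
    return Counter((subject[0], drug) for subject, drug in assignment.items())

print(gender_by_drug(balanced))     # all four cells have 5 subjects
print(gender_by_drug(confounded))   # ('M','B') and ('F','A') never occur
```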
The Basic Principles of DOE

• Confounding is something we typically want to avoid, but when we are building complex experiments we can sometimes use confounding to our advantage. We will confound things we are not interested in, in order to have more efficient experiments for the things we are interested in. This will come up in multiple-factor experiments later on. We may be interested in main effects but not interactions, so we will confound the interactions in order to reduce the sample size, and thus the cost of the experiment, but still have good information on the main effects.
Steps for Planning, Conducting and Analysing an Experiment
• The practical steps needed for planning and conducting an experiment include: recognizing the goal of the experiment, choice of factors, choice of response, choice of the design, analysis, and then drawing conclusions. This pretty much covers the steps involved in the scientific method.
1. Recognition and statement of the problem
2. Selection of the response variable(s)
3. Choice of factors, levels, and ranges
4. Choice of design
5. Conducting the experiment
6. Statistical analysis
7. Drawing conclusions and making recommendations
(In practice, steps 2 and 3 are often done simultaneously or in reverse order.)
• What this course will deal with primarily is the choice of the design. This focus includes all the related issues about how we handle these factors in conducting our experiments.
Recognition of and statement of the problem

• It is necessary to develop a clear and generally accepted statement of the problem, and to develop all ideas about the objectives of the experiment. Usually, it is important to solicit input from all concerned parties: engineering, quality assurance, manufacturing, marketing, management, customers, and operating personnel. For this reason, a team approach to designing experiments is recommended.
• It is usually helpful to prepare a list of specific problems or questions that are to be addressed by the experiment. A clear statement of the problem often contributes substantially to a better understanding of the phenomenon being studied and the final solution of the problem.
• It is also important to keep the overall objectives of the experiment in mind. There are several broad reasons for running experiments, and each type of experiment will generate its own list of specific questions that need to be addressed.
Recognition - Factor screening or
characterisation
Factors
• We usually talk about "treatment" factors, which are the factors of primary interest to you. In addition to treatment factors, there are nuisance factors which are not your primary focus, but you have to deal with them. Sometimes these are called blocking factors, mainly because we will try to block on these factors to prevent them from influencing the results.
• There are other ways that we can categorize factors:
Recognition - Choice of factors, levels,
and ranges
Experimental vs. Classification Factors
• Experimental Factors
  • These are factors that you can specify (and set the levels of) and then assign at random as the treatment to the experimental units. Examples would be temperature, level of an additive, fertilizer amount per acre, etc.
• Classification Factors
  • These can't be changed or assigned; these come as labels on the experimental units. The age and sex of the participants are classification factors which can't be changed or randomly assigned. But you can select individuals from these groups randomly.
Recognition - Choice of factors, levels,
and ranges
Quantitative vs. Qualitative Factors
• Quantitative Factors
  • You can assign any specified level of a quantitative factor. Examples: percent or pH level of a chemical.
• Qualitative Factors
  • These factors have categories of different types. Examples might be species of a plant or animal, a brand in the marketing field, or gender - these are not ordered or continuous but are arranged perhaps in sets.
Recognition - Optimisation

• After the system has been characterised and we are reasonably certain that the important factors have been identified, the next objective is usually optimisation, that is, to find the settings or levels of the important factors that result in desirable values of the response.
• For example, if a screening experiment on a chemical process results in the identification of time and temperature as the two most important factors, the optimisation experiment may have as its objective to find the levels of time and temperature that maximise yield, or perhaps to maximise yield while keeping some product property that is critical to the customer within specifications.
• An optimisation experiment is usually a follow-up to a screening experiment. It would be very unusual for a screening experiment to produce the optimal settings of the important factors.
Recognition - Confirmation

• In a confirmation experiment, the experimenter is usually trying to verify that the system operates or behaves in a manner that is consistent with some theory or past experience.
• For example, if theory or experience indicates that a particular new material is equivalent to the one currently in use and the new material is desirable (perhaps less expensive, or easier to work with in some way), then a confirmation experiment would be conducted to verify that substituting the new material results in no change in product characteristics that impact its use.
• Moving a new manufacturing process to full-scale production based on results found during experimentation at a pilot plant or development site is another situation that often results in confirmation experiments; that is, are the same factors and settings that were determined during development work appropriate for the full-scale process?
Recognition - Discovery

• In discovery experiments, the experimenters are usually trying to determine what happens when we explore new materials, or new factors, or new ranges for factors.
• Discovery experiments often involve screening of several (perhaps many) factors. In the pharmaceutical industry, scientists are constantly conducting discovery experiments to find new materials or combinations of materials that will be effective in treating disease.
Recognition - Robustness

• These experiments often address questions such as: under what conditions do the response variables of interest seriously degrade?
• Or, what conditions would lead to unacceptable variability in the response variables?
• A variation of this is determining how we can set the factors in the system that we can control to minimize the variability transmitted into the response from factors that we cannot control very well.
Selection of the response variable

• In selecting the response variable, the experimenter should be certain that this variable really provides useful information about the process under study.
• Most often, the average or standard deviation (or both) of the measured characteristic will be the response variable. Multiple responses are not unusual. The experimenters must decide how each response will be measured, and address issues such as how any measurement system will be calibrated and how this calibration will be maintained during the experiment.
• The gauge or measurement system capability (or measurement error) is also an important factor. If gauge capability is inadequate, only relatively large factor effects will be detected by the experiment, or perhaps additional replication will be required.
Selection of the response variable

• In some situations where gauge capability is poor, the experimenter may decide to measure each experimental unit several times and use the average of the repeated measurements as the observed response.
• It is usually critically important to identify issues related to defining the responses of interest, and how they are to be measured, before conducting the experiment.
• Sometimes designed experiments are employed to study and improve the performance of measurement systems.
Choice of factors, levels, and range

Factors
• When considering the factors that may influence the performance of a process or system, the experimenter usually discovers that these factors can be classified as either potential design factors or nuisance factors.
• The potential design factors are those factors that the experimenter may wish to vary in the experiment. Often we find that there are a lot of potential design factors, and some further classification of them is helpful.
• Some useful classifications are design factors, held-constant factors, and allowed-to-vary factors. The design factors are the factors actually selected for study in the experiment. Held-constant factors are variables that may exert some effect on the response, but for purposes of the present experiment these factors are not of interest, so they will be held at a specific level.
Choice of factors, levels, and range

u For example, in an etching experiment in the semiconductor industry, there may be
an effect that is unique to the specific plasma etch tool used in the experiment.
However, this factor would be very difficult to vary in an experiment, so the
experimenter may decide to perform all experimental runs on one particular (ideally
“typical”) etcher. Thus, this factor has been held constant.
u As an example of allowed-to-vary factors, the experimental units or the “materials” to
which the design factors are applied are usually nonhomogeneous, yet we often
ignore this unit-to-unit variability and rely on randomization to balance out any
material or experimental unit effect. We often assume that the effects of held-
constant factors and allowed-to-vary factors are relatively small.
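A minimal sketch of relying on randomization: generate the full list of treatment combinations (the factor names and levels below are hypothetical) and shuffle the run order, so that any drift in the experimental material is spread evenly across treatments:

```python
import random

random.seed(42)

# Hypothetical 2x2 factorial with 2 replicates: 8 runs total
runs = [(temp, conc, rep)
        for temp in (160, 180)   # assumed temperature levels (deg C)
        for conc in (20, 40)     # assumed concentration levels (%)
        for rep in (1, 2)]

# Run the trials in random order so slow drifts in material or
# environment average out across the treatment combinations
random.shuffle(runs)
for order, run in enumerate(runs, start=1):
    print(order, run)
```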

u Nuisance factors, on the other hand, may have large effects that must be accounted for,
yet we may not be interested in them in the context of the present experiment. Nuisance
factors are often classified as controllable, uncontrollable, or noise factors.
u A controllable nuisance factor is one whose levels may be set by the experimenter. For
example, the experimenter can select different batches of raw material or different days
of the week when conducting the experiment. The blocking principle, discussed in the
previous section, is often useful in dealing with controllable nuisance factors.
u If a nuisance factor is uncontrollable in the experiment, but it can be measured, an
analysis procedure called the analysis of covariance can often be used to compensate
for its effect. For example, the relative humidity in the process environment may affect
process performance, and if the humidity cannot be controlled, it probably can be
measured and treated as a covariate.
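A minimal sketch of the covariance-adjustment idea: treat the measured covariate as an extra column in a linear model and estimate the treatment effect by least squares. The data are simulated under assumed true effects (treatment +3, humidity slope −0.2), so this is illustrative, not a full ANCOVA with significance tests:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: yield under conditions A/B, humidity uncontrolled but measured
n = 30
treatment = np.repeat([0, 1], n)                  # 0 = A, 1 = B
humidity = rng.uniform(30, 70, size=2 * n)        # measured covariate
yield_ = 50 + 3.0 * treatment - 0.2 * humidity + rng.normal(0, 1, 2 * n)

# Linear model: intercept + treatment + covariate
X = np.column_stack([np.ones(2 * n), treatment, humidity])
beta, *_ = np.linalg.lstsq(X, yield_, rcond=None)
print(beta)  # treatment effect near 3, humidity slope near -0.2
```

Including humidity in the model removes its contribution from the error term, sharpening the comparison of the two treatments.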

u When a factor that varies naturally and uncontrollably in the process can be
controlled for purposes of an experiment, we often call it a noise factor. In such
situations, our objective is usually to find the settings of the controllable design factors
that minimise the variability transmitted from the noise factors.

Levels
u Once the experimenter has selected the design factors, he or she must choose the
ranges over which these factors will be varied and the specific levels at which runs will
be made. Thought must also be given to how these factors are to be controlled at
the desired values and how they are to be measured.
u Process knowledge is required to do this. This process knowledge is usually a
combination of practical experience and theoretical understanding. It is important to
investigate all factors that may be of importance and not to be overly influenced by
past experience, particularly when we are in the early stages of experimentation or
when the process is not very mature.
Choice of experimental design

u If the above pre-experimental planning activities are done correctly, this step is
relatively easy. Choice of design involves consideration of sample size (number of
replicates), selection of a suitable run order for the experimental trials, and
determination of whether or not blocking or other randomisation restrictions are
involved.
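As a rough sketch of the sample-size side of this step, the usual normal-approximation formula for the number of replicates per group needed to detect a difference δ between two conditions can be coded directly. The effect size, standard deviation, and error rates below are illustrative assumptions, and the result should be rounded up:

```python
from statistics import NormalDist

def replicates_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample comparison
    (normal approximation; delta = smallest effect worth detecting)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = z.inv_cdf(power)
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

print(replicates_per_group(delta=2.0, sigma=2.5))  # about 24.5 -> use 25 per group
```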
u Design selection also involves thinking about and selecting a tentative empirical
model to describe the results. The model is just a quantitative relationship (equation)
between the response and the important design factors. In many cases, a low-order
polynomial model will be appropriate. A first-order model in two variables is

𝑦 = 𝛽₀ + 𝛽₁𝑥₁ + 𝛽₂𝑥₂ + 𝜀
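As a sketch of how the unknown 𝛽's are estimated, the first-order model can be fit by least squares. The data here are simulated from assumed true coefficients (10, 2, −1.5), so the code is illustrative rather than an analysis of any real experiment:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated data from an assumed true first-order model:
# y = 10 + 2*x1 - 1.5*x2 + error
x1 = rng.uniform(-1, 1, 40)
x2 = rng.uniform(-1, 1, 40)
y = 10 + 2 * x1 - 1.5 * x2 + rng.normal(0, 0.3, 40)

# Least-squares estimates of beta0, beta1, beta2
X = np.column_stack([np.ones_like(x1), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))  # close to [10, 2, -1.5]
```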

u where y is the response, the x's are the design factors, the 𝛽's are unknown
parameters that will be estimated from the data in the experiment, and 𝜀 is a
random error term that accounts for the experimental error in the system that is being
studied. The first-order model is also sometimes called a main effects model. First-
order models are used extensively in screening or characterization experiments.
u A common extension of the first-order model is to add an interaction term, say

𝑦 = 𝛽₀ + 𝛽₁𝑥₁ + 𝛽₂𝑥₂ + 𝛽₁₂𝑥₁𝑥₂ + 𝜀

u where the cross-product term 𝑥₁𝑥₂ represents the two-factor interaction between the
design factors. Because interactions between factors are relatively common,
the first-order model with interaction is widely used.

u Higher-order interactions can also be included in experiments with more than two
factors if necessary. Another widely used model is the second-order model
𝑦 = 𝛽₀ + 𝛽₁𝑥₁ + 𝛽₂𝑥₂ + 𝛽₁₂𝑥₁𝑥₂ + 𝛽₁₁𝑥₁² + 𝛽₂₂𝑥₂² + 𝜀

u Second-order models are often used in optimization experiments.
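To illustrate why second-order models suit optimization experiments, the sketch below simulates data from an assumed quadratic surface with a maximum near (0.48, −0.19), fits the second-order model by least squares, and solves for the stationary point of the fitted surface:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data from an assumed true second-order surface
x1 = rng.uniform(-1, 1, 60)
x2 = rng.uniform(-1, 1, 60)
y = (80 + 4 * x1 - 2 * x2 + 1 * x1 * x2
     - 4 * x1**2 - 4 * x2**2 + rng.normal(0, 0.5, 60))

# Columns: 1, x1, x2, x1*x2, x1^2, x2^2  (matches the second-order model)
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point: set both partial derivatives of the fitted
# quadratic to zero and solve the resulting 2x2 linear system
B = np.array([[2 * b[4], b[3]],
              [b[3], 2 * b[5]]])
grad = np.array([b[1], b[2]])
x_star = np.linalg.solve(B, -grad)
print(np.round(x_star, 2))  # near the assumed maximum
```

The fitted surface lets the experimenter predict where the response is best, which is the point of an optimization experiment.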


u In selecting the design, it is important to keep the experimental objectives in mind.
u Some factor levels will result in different values for the response. We could be
interested in identifying which factors cause this difference and in estimating the
magnitude of the response change.

u In other situations, we may be more interested in verifying uniformity. For example,
two production conditions A and B may be compared, A being the standard and B
being a more cost-effective alternative.
u The experimenter will then be interested in demonstrating that, say, there is no
difference in yield between the two conditions.
Performing the experiment

u When running the experiment, it is vital to monitor the process carefully to ensure that
everything is being done according to plan. Errors in experimental procedure at this stage
will usually destroy experimental validity.
u One of the most common mistakes is that the people conducting the experiment fail to
set the variables to the proper levels on some runs. Someone should be assigned to check
factor settings before each run. Up-front planning to prevent mistakes like this is crucial to
success. It is easy to underestimate the logistical and planning aspects of running a
designed experiment in a complex manufacturing or research and development
environment.
u Coleman and Montgomery (1993) suggest that prior to conducting the experiment a few
trial runs or pilot runs are often helpful. These runs provide information about consistency of
experimental material, a check on the measurement system, a rough idea of experimental
error, and a chance to practice the overall experimental technique. This also provides an
opportunity to revisit the decisions made in steps 1-4, if necessary.
Statistical analysis of the data

u Statistical methods should be used to analyse the data so that results and conclusions are
objective rather than judgmental in nature. If the experiment has been designed correctly
and performed according to the design, the statistical methods required are not
elaborate. There are many excellent software packages designed to assist in data
analysis.
u Often we find that simple graphical methods play an important role in data analysis and
interpretation. Because many of the questions that the experimenter wants to answer can
be cast into a hypothesis-testing framework, hypothesis testing and confidence interval
estimation procedures are very useful in analysing data from a designed experiment.
u It is also usually very helpful to present the results of many experiments in terms of an
empirical model, that is, an equation derived from the data that expresses the relationship
between the response and the important design factors.
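As a minimal example of casting such a question in a hypothesis-testing framework, the two-sample t statistic for comparing the mean response under two conditions can be computed with the standard library alone; the yield data below are made up for illustration:

```python
import math
import statistics

# Hypothetical yields (%) under two process conditions, 10 runs each
a = [89.7, 81.4, 84.5, 84.8, 87.3, 79.7, 85.1, 81.7, 83.7, 84.5]
b = [84.7, 86.1, 83.2, 91.9, 86.3, 79.3, 82.6, 89.1, 83.7, 88.5]

n1, n2 = len(a), len(b)

# Pooled variance and the two-sample t statistic
s2 = ((n1 - 1) * statistics.variance(a)
      + (n2 - 1) * statistics.variance(b)) / (n1 + n2 - 2)
t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(s2 * (1 / n1 + 1 / n2))

print(round(t, 2))  # compare |t| with t(0.025, 18) = 2.101
```

For these made-up data |t| falls well below the critical value, so there would be no evidence of a difference between the two conditions.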
Conclusions and recommendations

u Once the data have been analysed, the experimenter must draw practical conclusions
about the results and recommend a course of action.
u Graphical methods are often useful in this stage, particularly in presenting the results to
others.
u Follow-up runs and confirmation testing should also be performed to validate the
conclusions from the experiment.
u Throughout this entire process, it is important to keep in mind that experimentation is an
important part of the learning process, where we tentatively formulate hypotheses about
a system, perform experiments to investigate these hypotheses, and on the basis of the
results formulate new hypotheses, and so on. This suggests that experimentation is iterative.
It is usually a major mistake to design a single, large, comprehensive experiment at the
start of a study.

u We usually experiment sequentially, and as a general rule, no more than about 25
percent of the available resources should be invested in the first experiment. This will
ensure that sufficient resources are available to perform confirmation runs and
ultimately accomplish the final objective of the experiment.
u Finally, it is important to recognise that all experiments are designed experiments. The
important issue is whether they are well designed or not. Good pre-experimental
planning will usually lead to a good, successful experiment. Failure to do such
planning usually leads to wasted time, money, and other resources and often poor or
disappointing results.
