HASTS212 Lecture 1

This document provides an overview of design of experiments (DOE) and statistical experimental design. It discusses key topics such as:
- The fundamentals and basic steps of DOE, which involves systematically changing variables to determine their effects with minimal samples.
- Principles of experimentation such as replication, randomization, and blocking to increase precision and control biases.
- Terminology used in DOE, such as factors, responses, treatment combinations, and confounding.
- The importance of planning experiments, including clear objectives, appropriate designs, valid analysis, and drawing conclusions.
- Examples of simple comparative experiments and statistical techniques such as hypothesis testing, ANOVA, and checking model assumptions.


HASTS212: Design and Analysis of Experiments

[email protected]

THE FUNDAMENTALS
DESIGN OF EXPERIMENTS (DOE) IN PROCESS IMPROVEMENT

DOE is a formal mathematical method for systematically planning and conducting scientific studies that change experimental variables together in order to determine their effect on a given response.

DOE makes controlled changes to input variables in order to gain the maximum amount of information on cause-and-effect relationships with a minimum sample size.
ROLE OF DOE IN PROCESS IMPROVEMENT
DOE is more efficient than the standard approach of changing “one variable at a time” in order to observe that variable’s impact on a given response.

DOE generates information on the effect various factors have on a response variable and in some cases may be able to determine optimal settings for those factors.
ROLE OF DOE IN PROCESS IMPROVEMENT

DOE encourages “brainstorming” activities associated with discussing key factors that may affect a given response and allows the experimenter to identify the “key” factors for future studies.

DOE is readily supported by numerous statistical software packages available on the market.
BASIC STEPS IN DOE

Four elements associated with DOE:
1. The design of the experiment,
2. The collection of the data,
3. The statistical analysis of the data, and
4. The conclusions reached and recommendations made as a
result of the experiment.
PRINCIPLES OF EXPERIMENTATION
Replication – repetition of a basic experiment without changing any factor settings. Replication allows the experimenter to estimate the experimental error (noise) in the system, which is used to determine whether observed differences in the data are “real” or “just noise”, and it gives the experimenter more statistical power (the ability to identify small effects).
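As a minimal numerical sketch of this idea (the replicate values below are hypothetical, not course data), the spread among replicate runs at identical settings provides the noise estimate:

import numpy as np

# Hypothetical fill volumes (oz) from four replicate runs at identical factor settings
replicates = np.array([11.93, 12.05, 11.88, 11.98])

# Nothing was deliberately changed between these runs, so their spread
# estimates the experimental error (noise) in the system.
noise_sd = replicates.std(ddof=1)  # sample standard deviation
print(f"Estimated experimental error (std dev): {noise_sd:.3f} oz")

# An observed difference between treatment averages is convincing only
# if it is large relative to this noise estimate.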
Randomization – a statistical tool used to minimize potential uncontrollable biases in the experiment by randomly assigning material, people, the order in which experimental trials are conducted, or any other factor not under the control of the experimenter. Randomization “averages out” the effects of extraneous factors that may be present, minimizing the risk of these factors affecting the experimental results.
Blocking – a technique used to increase the precision of an experiment by breaking the experiment into homogeneous segments (blocks) in order to control any potential block-to-block variability (multiple lots of raw material, several shifts, several machines, several inspectors). Any effect of the blocking factor on the experimental results is identified and minimized.
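To make randomization and blocking concrete, here is a small Python sketch (the factor levels and block labels are invented for illustration) that builds a run sheet in which each treatment combination appears once per block and the run order is randomized within each block:

import random

treatments = ["A-low/B-low", "A-high/B-low", "A-low/B-high", "A-high/B-high"]
blocks = ["Lot 1", "Lot 2", "Lot 3"]  # hypothetical raw-material lots used as blocks

random.seed(42)  # fixed seed so the randomization is reproducible
run_sheet = []
for block in blocks:
    order = treatments[:]          # each treatment appears once per block
    random.shuffle(order)          # randomize run order within the block
    run_sheet.extend((block, t) for t in order)

for i, (block, trt) in enumerate(run_sheet, start=1):
    print(f"Run {i:2d}: {block}  {trt}")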
TERMINOLOGY
Confounding – a concept that means multiple effects are tied together into one parent effect and cannot be separated. For example:
1. Two people flipping two different coins would result in the effect of the person and the effect of the coin being confounded.
2. As experiments get large, higher-order interactions (discussed later) become confounded with lower-order interactions or main effects.
TERMINOLOGY
Factors – experimental factors or independent variables
(continuous or discrete) an investigator manipulates to capture any
changes in the output of the process. Other factors of concern are
those that are uncontrollable and those which are controllable but
held constant during the experimental runs.
TERMINOLOGY
Responses – dependent variable measured to describe the output
of the process.

Treatment Combination (run) – an experimental trial where all factors are set at a specified level.
TERMINOLOGY
Fixed Effects Model - If the treatment levels are specifically
chosen by the experimenter, then conclusions reached will
only apply to those levels.

Random Effects Model – If the treatment levels are randomly chosen from a population of many possible treatment levels, then conclusions reached can be extended to all treatment levels in the population.
PLANNING A DOE
Everyone involved in the experiment should have a clear idea in
advance of exactly what is to be studied, the objectives of the
experiment, the questions one hopes to answer and the results
anticipated
PLANNING A DOE
Select a response/dependent variable (variables) that will provide
information about the problem under study and the proposed
measurement method for this response variable, including an
understanding of the measurement system variability
PLANNING A DOE
Select the independent variables/factors (quantitative or qualitative)
to be investigated in the experiment, the number of levels for each
factor, and the levels of each factor chosen either specifically (fixed
effects model) or randomly (random effects model).
PLANNING A DOE
Choose an appropriate experimental design (relatively simple design and analysis methods are almost always best) that will allow your experimental questions to be answered once the data are collected and analyzed, keeping in mind the tradeoffs between statistical power and economic efficiency. At this point it is generally useful to simulate the study by generating and analyzing artificial data to ensure that the experimental questions can be answered as a result of conducting your experiment, as in the sketch below.
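A rough illustration of such a dry run (the effect size, noise level, and group size below are assumed values, not course data): artificial data are generated under the planned design and analyzed with the intended test to see how often the assumed effect would actually be detected.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Assumed scenario for the dry run: two factor levels, a true difference of 0.10
# in the response, noise standard deviation 0.15, and 9 runs per level.
true_diff, sigma, n_per_level, alpha = 0.10, 0.15, 9, 0.05

detections = 0
n_sim = 2000
for _ in range(n_sim):                    # repeat the simulated experiment
    low = rng.normal(0.0, sigma, n_per_level)
    high = rng.normal(true_diff, sigma, n_per_level)
    _, p = stats.ttest_ind(low, high)     # the planned analysis
    detections += p < alpha

print(f"Estimated chance of detecting the assumed effect: {detections / n_sim:.2f}")
# If this probability is unacceptably low, revise the design (e.g., more runs)
# before spending resources on the real experiment.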
PLANNING A DOE
Perform the experiment (collect the data), paying particular attention to such things as randomization and measurement system accuracy, while maintaining as uniform an experimental environment as possible. How the data are to be collected is a critical stage in DOE.
PLANNING A DOE
Analyze the data using the appropriate statistical model, ensuring that attention is paid to checking model accuracy by validating the underlying assumptions associated with the model. Be liberal in the use of all tools, including graphical techniques, available in the statistical software package to ensure that a maximum amount of information is generated.
PLANNING A DOE
Based on the results of the analysis, draw conclusions/inferences
about the results, interpret the physical meaning of these results,
determine the practical significance of the findings, and make
recommendations for a course of action including further
experiments
SIMPLE COMPARATIVE EXPERIMENTS

Single Mean Hypothesis Test
Difference in Means Hypothesis Test with Equal Variances
Difference in Means Hypothesis Test with Unequal Variances
Difference in Variances Hypothesis Test
Paired Difference in Means Hypothesis Test
One-Way Analysis of Variance
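Each of these tests has a direct counterpart in common statistical software; a minimal sketch using SciPy (the two data arrays are hypothetical, and Levene's test is shown as one common choice for comparing variances):

import numpy as np
from scipy import stats

a = np.array([11.9, 12.1, 12.0, 11.8, 12.2])  # hypothetical sample 1
b = np.array([12.3, 12.2, 12.4, 12.1, 12.3])  # hypothetical sample 2

print(stats.ttest_1samp(a, popmean=12.0))      # single mean
print(stats.ttest_ind(a, b, equal_var=True))   # difference in means, equal variances
print(stats.ttest_ind(a, b, equal_var=False))  # difference in means, unequal variances (Welch)
print(stats.levene(a, b))                      # difference in variances (robust test)
print(stats.ttest_rel(a, b))                   # paired difference in means
print(stats.f_oneway(a, b))                    # one-way analysis of variance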
CRITICAL ISSUES ASSOCIATED WITH
SIMPLE COMPARATIVE EXPERIMENTS

How Large a Sample Should We Take?
Why Does the Sample Size Matter Anyway?
What Kind of Protection Do We Have Associated with Rejecting “Good” Stuff?
What Kind of Protection Do We Have Associated with Accepting “Bad” Stuff?
SINGLE MEAN HYPOTHESIS TEST
After a production run of 12 oz. bottles, concern is expressed about
the possibility that the average fill is too low.
H0: μ = 12
Ha: μ ≠ 12

Level of significance = α = 0.05
Sample size = 9
SPEC FOR THE MEAN: 12 ± 0.1
SINGLE MEAN HYPOTHESIS TEST
Sample mean = 11.9
Sample standard deviation = 0.15
Sample size = 9
Computed t statistic = -2.0
P-Value = 0.0805162
CONCLUSION: Since the P-value > 0.05, you fail to reject the null hypothesis and ship the product.
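The numbers above can be reproduced directly from the summary statistics; a minimal check in Python (assuming SciPy is available):

import math
from scipy import stats

xbar, s, n, mu0 = 11.9, 0.15, 9, 12.0

t = (xbar - mu0) / (s / math.sqrt(n))    # computed t statistic: -2.0
p = 2 * stats.t.cdf(-abs(t), df=n - 1)   # two-sided P-value with n - 1 = 8 df

print(f"t = {t:.1f}, P-value = {p:.7f}") # P-value is approximately 0.0805
# Since the P-value exceeds 0.05, the null hypothesis (mean fill = 12 oz) is not rejected.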
EXAMPLE: CONCLUSIONS

In general, we control the likelihood of reaching these incorrect conclusions through the selection of the level of significance (α) for the test and the amount of data collected (sample size), as the sketch below illustrates.
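As an illustration of how sample size drives that protection, the sketch below computes the power of the two-sided single-mean t-test for the bottle example (a shift of 0.1 oz with a standard deviation of 0.15, values taken from the example above) at several candidate sample sizes; the list of sample sizes is arbitrary.

import math
from scipy import stats

delta, sigma, alpha = 0.10, 0.15, 0.05      # shift worth detecting, noise level, significance level

for n in (5, 9, 15, 25, 40):
    df = n - 1
    t_crit = stats.t.ppf(1 - alpha / 2, df) # two-sided critical value
    nc = math.sqrt(n) * delta / sigma       # noncentrality parameter under the alternative
    # Power = probability the t statistic lands in the rejection region
    power = (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)
    print(f"n = {n:2d}: power = {power:.2f}")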
SUMMARY
Strategy of experimentation
 One-factor-at-a-time (OFAT) approach
 inefficient (requires many test runs)
 fails to consider any possible interaction between factors
 Factorial approach (invented in the 1920s)
 Factors varied together
 Correct, modern, and most efficient approach
 Can determine how factors interact (see the factorial sketch below)
 Used extensively in industrial R&D and for process improvement
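A minimal sketch of the factorial idea (the four response values are invented): with a 2×2 factorial, both main effects and the interaction can be estimated from the same four runs, something OFAT cannot do.

import numpy as np

# Coded factor levels (-1 = low, +1 = high) for a 2x2 factorial, one run per combination
A = np.array([-1, +1, -1, +1])
B = np.array([-1, -1, +1, +1])
y = np.array([20.0, 30.0, 25.0, 45.0])  # hypothetical responses

# An effect is the average response at the high level minus the average at the low level
main_A = y[A == +1].mean() - y[A == -1].mean()
main_B = y[B == +1].mean() - y[B == -1].mean()
interaction = y[A * B == +1].mean() - y[A * B == -1].mean()

print(f"Main effect of A : {main_A:.1f}")
print(f"Main effect of B : {main_B:.1f}")
print(f"A x B interaction: {interaction:.1f}")
# Changing one factor at a time never varies A and B together in a comparable way,
# so the A x B interaction cannot be estimated from an OFAT experiment.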

STATISTICAL DESIGN OF EXPERIMENTS

All experiments should be designed experiments.
Unfortunately, some experiments are poorly designed – valuable resources are used ineffectively and the results are inconclusive.
Statistically designed experiments permit efficiency and economy, and the use of statistical methods in examining the data results in scientific objectivity when drawing conclusions.

THREE BASIC PRINCIPLES OF STATISTICAL DOE
Replication
 allows an estimate of experimental error
 allows for a more precise estimate of the sample mean
value
Randomization
 cornerstone of all statistical methods
 “average out” effects of extraneous factors
 reduce bias and systematic errors
Blocking
 increases precision of experiment
 “factor out” variables not studied

USING STATISTICAL TECHNIQUES IN
EXPERIMENTATION - THINGS TO KEEP IN MIND

Use non-statistical knowledge of the problem
 physical laws, background knowledge
Keep the design and analysis as simple as
possible
 Don’t use complex, sophisticated statistical techniques
 If design is good, analysis is relatively straightforward
 If design is bad - even the most complex and elegant
statistics cannot save the situation
Recognize the difference between practical and statistical significance
 statistical significance ≠ practical significance
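A small illustration of that last point (all numbers invented): with a very large sample, even a shift far inside the specification becomes statistically significant.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical fill data: the true mean is only 0.01 oz below target,
# well inside a +/- 0.1 oz spec, but the sample is very large.
x = rng.normal(loc=11.99, scale=0.15, size=10_000)

t, p = stats.ttest_1samp(x, popmean=12.0)
print(f"t = {t:.2f}, P-value = {p:.4g}")         # very likely far below 0.05
print(f"Observed shift = {x.mean() - 12.0:+.3f} oz")
# Statistically significant, yet the shift is too small to matter in practice.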

DESIGN OF EXPERIMENTS
BASIC STATISTICAL CONCEPTS
Simple comparative experiments
 The hypothesis testing framework
 The two-sample t-test
 Checking assumptions, validity
Comparing more than two factor levels…the
analysis of variance
 ANOVA decomposition of total variability
 Statistical testing & analysis
 Checking assumptions, model validity
 Post-ANOVA testing of means
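A brief sketch of the ANOVA decomposition listed above, for three hypothetical factor levels, with SciPy's one-way ANOVA as a cross-check:

import numpy as np
from scipy import stats

# Hypothetical response data at three factor levels
groups = [np.array([20.1, 21.3, 19.8, 20.7]),
          np.array([22.4, 23.0, 21.9, 22.6]),
          np.array([19.5, 20.2, 19.9, 20.4])]

all_y = np.concatenate(groups)
grand = all_y.mean()

ss_total = ((all_y - grand) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)  # treatment variability
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)        # error variability

df_between = len(groups) - 1
df_within = all_y.size - len(groups)
F = (ss_between / df_between) / (ss_within / df_within)

print(f"SS_total = SS_between + SS_within: {ss_total:.3f} = {ss_between:.3f} + {ss_within:.3f}")
print(f"F = {F:.2f}")
print(stats.f_oneway(*groups))  # same F statistic, with its P-value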

MODEL ADEQUACY CHECKING IN
THE ANOVA
Checking assumptions is important
Normality
Constant variance
Independence
Have we fit the right model?
Later we will talk about what to do if some of these assumptions
are violated
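A minimal sketch (using the same kind of hypothetical one-way layout as above) of how the first two assumptions can be checked numerically; independence is usually judged from how the experiment was run and from plotting residuals in time order.

import numpy as np
from scipy import stats

# Hypothetical one-way layout; residuals are observations minus their group mean
groups = [np.array([20.1, 21.3, 19.8, 20.7]),
          np.array([22.4, 23.0, 21.9, 22.6]),
          np.array([19.5, 20.2, 19.9, 20.4])]
residuals = np.concatenate([g - g.mean() for g in groups])

print(stats.shapiro(residuals))  # normality of residuals (Shapiro-Wilk test)
print(stats.levene(*groups))     # constant variance across groups (Levene's test)
# A normal probability plot of the residuals and a plot of residuals versus
# fitted values are the usual graphical companions to these tests.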


Reference: Montgomery, D. C., Design and Analysis of Experiments, 9th Edition, Chapter 1.
