Catapult Experiment: Design of Experiments (Doe) and Response Surface Methods (RSM)

This report applies design of experiments and response surface methodology to optimize a catapult experiment. A screening experiment identified elastic position, pull back angle, and stop position as having the greatest effect on ball firing distance. A follow-up experiment constructed a prediction model for firing distance within +/- 159 mm accuracy. The model was then used to make predictions and optimize the catapult design within tested parameters.

Design of Experiments (DoE) and Response Surface Methods

(RSM)

Module Coursework

Catapult Experiment

DoE & RSM Project Report pg. 1


July 2018
Executive Summary
This report focuses on the application of advanced statistical tools learned during the BB3
module Design of Experiments (DoE) and Response Surface Methods (RSM), applied to the
Catapult Experiment using Minitab, and on interpreting the findings.

The Catapult Experiment consists of a small wooden toy catapult that can be fired in the
classroom and has a variety of setting factors – which makes it an excellent example for an
academic exercise in applying the DoE and RSM tools.

The methodology outlines the sequential strategy that will be followed – starting with the
screening experiment (planning, execution and analysis), followed by the follow-up experiment
(planning, execution and analysis), to establish which factors have the greatest influence on the
ball firing response and should be used in the prediction model. Once the model is
constructed, it can be used to make predictions within the confidence interval.

As such, Elastic Position (most statistically significant effect), Pull Back Angle and Stop
Position were identified as the factors with the greatest effect on the response. The model
accuracy was +/- 159 mm.

The report concludes with a discussion, conclusions and recommendations for future work,
including personal reflections.

Table of Contents
Executive Summary .................................................................................................................. 2
Table of Contents ...................................................................................................................... 3
1 Introduction and Objectives .............................................................................................. 6
1.1 Background................................................................................................................. 6
1.2 Aims, Objectives and Methodology ............................................................................ 6
1.3 Scope / Limitations and Constraints .......................................................................... 6
1.4 Report Outline ............................................................................................................ 6
2 Methodology .......................................................................................................................7
2.1 Response Surface.........................................................................................................7
2.2 Planning the Screening Experiment ............................................................................7
2.3 Follow-up Experiment Planning ............................................................................... 11
2.4 Prediction using the model........................................................................................ 12
3 Results and Analysis ......................................................................................................... 13
3.1 Screening Experiment Results and Analysis ............................................................. 13
3.1.1 Main Effects Plots ...................................................................................................14
3.1.2 Interaction Plots ..................................................................................................14
3.1.3 Response Surface Plots ....................................................................................... 15
3.1.4 Pareto Charts .......................................................................................................16
3.1.5 Half Normal Plots................................................................................................ 17
3.1.6 Conclusions for Screening Experiment Results and Analysis ........................... 18
3.2 Follow-up Experiment Results and Analysis ............................................................ 18
3.2.1 Deleted Residuals Plots .......................................................................................19
3.2.2 Pareto Plots ........................................................................................................ 20
3.2.3 Regression Analysis............................................................................................ 20
3.2.4 Regression Analysis – Re-run .............................................................................21
3.2.5 Response Surface Models................................................................................... 23
3.2.6 Interaction Plots ................................................................................................. 24
3.2.7 Contour Plots...................................................................................................... 24
3.3 Validation: Predicting & Confirming Results / Prediction and Optimisation ......... 25
4 Discussion ........................................................................................................................ 26
5 Conclusions & Recommendations ................................................................................... 28
6 References ........................................................................................................................ 29
7 Annexes and Appendices ................................................................................................. 30
Appendix A – Main Minitab file ....................................................................................... 30
Appendix B – Interpreting R2, R2 adj, PRESS & PRESS RMSE in DOE – slides by Paula
Palade 30
Appendix C – Other graphs and calculations .......................................................... 32

Table of Figures

Figure 1. The operator and the measurer for the experiments – including the Catapult
Experiment setup and factors: Pull-back angle (A), Hold time in armed position (B), Elastic
Twists (C), Elastic Position - Fixing Arm (D), Stop position (E) ............................................... 9
Figure 2. Available Factorial Designs – Minitab...................................................................... 10
Figure 3. A face-centred central composite design [8] ............................................................12
Figure 4. The Fractional Factorial Design and results (coded) ................................................13
Figure 5. The Fractional Factorial Design Factors and Levels ..................................................14
Figure 6. Main effects plot - screening experiment (coded) .....................................................14
Figure 7. Interaction Plot - screening experiment .................................................................... 15
Figure 8. Surface Plot - screening experiment ..........................................................................16
Figure 9. Pareto Chart of the Effects (α = 0.05) - screening experiment ................................. 17
Figure 10. Pareto Chart of the Effects (α = 0.15) - screening experiment ................................ 17
Figure 11. Half Normal Plot (α = 0.05) - screening experiment .............................................. 18
Figure 12. Face Centred Composite Experiment Model ...........................................................19
Figure 13. Normal Probability Plot – Follow-up Experiment ...................................................19
Figure 14. Deleted Residual vs Fit Plot – Follow-up Experiment ............................................ 20
Figure 15. Pareto Chart – Follow-up Experiment .................................................................... 20
Figure 16. Normal Probability Plot, Deleted Residuals, Pareto Chart and Half Normal –
Follow-up Experiment Re-run ................................................................................................. 23
Figure 17. Surface Plots – Follow-up Experiment ................................................................... 24
Figure 18. Interaction Plots – Follow-up Experiment ............................................................. 24
Figure 19. Contour Plots – Follow-up Experiment .................................................................. 25
Figure 20. The Fractional Factorial Design and results (not coded) ....................................... 32
Figure 21. The Fractional Factorial Design Factors and Levels (not coded) ........................... 32
Figure 22. Main effects plot - screening experiment (un-coded) ............................................ 32
Figure 23. Interaction Plot - screening experiment ................................................................. 33
Figure 24. Surface Plot - screening experiment ....................................................................... 33
Figure 25. Half Normal Plot (α = 0.15) - screening experiment ............................................. 34
Figure 26. Surface Plots – Follow-up Experiment ................................................................... 34
Figure 27. Interaction Plots – Follow-up Experiment ............................................................. 34
Figure 28. Contour Plots – Follow-up Experiment ................................................................. 35
Figure 29. Normal Probability Plot – Follow-up Experiment Re-run ..................................... 35
Figure 30. Deleted Residual vs Fit Plot – Follow-up Experiment Re-run ............................... 35
Figure 31. Pareto Chart – Follow-up Experiment – Re-run .................................................... 36

Table of Tables

Table 1. Selected DoE tools and Justification ............................................................................ 7


Table 2. Selected factors and levels ............................................................................................ 8
Table 3. Noise factors ................................................................................................................. 8
Table 4. Team Roles and Responsibilities ................................................................................. 9

Table 5. Selected 3 factors and 3 levels ..................................................................................... 11
Table 6. Response Surface Regression ......................................................................................21
Table 7. Response Surface Regression – Re-run ..................................................................... 22
Table 8. Prediction vs. Actual................................................................................................... 25

Table of Equations

Equation 1. 𝑦 = 𝑓(𝑥1, 𝑥2, 𝑥3, . . . ) .................................................................................... 7


Equation 2. PRESS RMSE = √(PRESS / Number of Runs) ..........................................12

Glossary of Notations

Acronym Meaning

-1 Low level setting for each factor


0 Mid-level setting for multi-level experiment factors
1 High level setting for each factor
ANOVA Analysis of Variance
BB Black Belt
CC Central Composite
DoE Design of Experiment
EESE Electrical, Electronic and Software Engineering
FCCC Face-centred Central Composite Design
HR Human Resources
Lenth’s PSE Lenth’s Pseudo-Standard Error
PD Product Development
PMA Post Module Assignment
PRESS Predicted Residual Error Sum of Squares
PSD Prediction Standard Deviation
Response Distance travelled by the ball in flight
RSM Response Surface Modelling
SE Standard Error
VIF Variance Inflation Factor

1 Introduction and Objectives

1.1 Background

The catapult has the advantage that it has multiple factors that can be adjusted to control the
firing distance response – which makes it very suitable as a case study for these
learning outcomes.

1.2 Aims, Objectives and Methodology


The specific Aim of this report is to use the data obtained from running the catapult to
define an operational definition, select meaningful factors and use a sequential methodology
to develop a prediction model by deploying Minitab Statistical software, and to analyse,
interpret and draw conclusions from the data.

The Objectives are to conduct analysis of the catapult experiment using advanced DoE
and RSM statistical tools that will serve future BB projects – as this is an academic exercise
for developing the candidate's practical skills in applying the statistical tools.

The Methodology is to select the most appropriate statistical tools and deploy a
sequential methodology to plan, run and analyse experiments, with appropriate
justification to support the engineering problem.

1.3 Scope / Limitations and Constraints


The Scope of the project is to use Minitab as a statistical tool to apply the methodology
highlighted.

Following the scope, in terms of Limitations and Constraints, the major limitation
comes from the small number of experiments carried out in class. Due to time
constraints, and to the importance and difficulty not being fully appreciated at the time, not
enough repeats were carried out.

1.4 Report Outline


The report is structured using the technical report format, where Chapter 2 Methodology
presents the plan of use of statistical tools and methods, with justification, Chapter 3 Results
and Analysis presents the Minitab results and their interpretation, Chapter 4 Discussion
looks at a reflection on the results and their implications and Chapter 5 Conclusions &
Recommendations presents succinct conclusions and recommendation for further work. The
report concludes with Chapter 6 References and Chapter 7 Annexes and Appendices.

2 Methodology
A quick view of the tools that will be used in this report with the justification of their selection
is presented in Table 1.

Table 1. Selected DoE tools and Justification


Method / Tool: Justification
Main Effects Plots: Used to evaluate the individual impact of each factor on the response, observed when the 2 levels of a factor influence the response differently, in relation to the mean [1]
Interaction Plots: Used to evaluate how interactions affect the relationship between the factors and the response [2]
Response Surface Plots: Used as a 3D surface plot view to evaluate the interactions between pairs of variables and their impact on the response distance [3]
Pareto Plots: Used to evaluate the absolute value of the standardised effects on the response [4]
Half Normal Plots: Used to evaluate the absolute values of the standardized effects [4] for a selected α level on the response. Similar to Pareto Plots; includes multifactor interactions. Attention to potential type 1 and 2 errors. [4]
Regression Analysis: Analysis of the main effects within the reduced model, checking error within the model, estimating the Prediction Equation for the model [5]
Response Surface Regression Analysis: Used to test hypotheses and reduce the model, check errors and predict the model equation through parameter estimation
Deleted Residuals Probability Plot: Used to check for a normal distribution and any outliers that could be removed
Deleted Residuals vs Fits: Used to check for a healthy graph by looking at the scatter above and below the line

2.1 Response Surface


'Response Surface' and 'Transfer Function', which will be used interchangeably, denote the
relationship between a system output or response (y) and one or more system inputs (x's).
The Response Surface can be represented graphically or via an equation:

Equation 1. 𝑦 = 𝑓(𝑥1, 𝑥2, 𝑥3, . . . )


Obviously, when we have multiple inputs (x's), we cannot plot the full surface; instead we plot
combinations of 2 factors at a time, with the others held fixed.

There are many benefits of knowing the response surface for a product or process. It is
advisable to follow a sequential strategy for experimentation, which is what we will do in this
report: identify trends with a two-level experiment and then run a multi-level experiment
aiming to model the response surface accurately with a smaller number of factors.
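As a minimal sketch of Equation 1's idea, a multi-input response surface can only be visualised two factors at a time. The transfer function and coefficients below are made up for illustration; they are not the fitted catapult model.

```python
# Sketch: a hypothetical transfer function y = f(x1, x2, x3) in coded units,
# viewed as a 2-factor slice with x3 held at its coded centre (0).
# All coefficients are illustrative only.
def f(x1, x2, x3):
    return 920 + 100 * x1 + 150 * x2 - 40 * x3 + 30 * x1 * x2

# Slice over x1 and x2 at the three coded levels, x3 fixed at 0
grid = [(x1, x2, f(x1, x2, 0)) for x1 in (-1, 0, 1) for x2 in (-1, 0, 1)]

print(len(grid))   # 9 grid points in the 2-factor slice
print(f(0, 0, 0))  # 920: the intercept, i.e. the response at the data centre
```

Each such slice corresponds to one panel of the surface plots produced later in the report.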

2.2 Planning the Screening Experiment


The methodology for experimentation is going to follow the sequential strategy outlined,
which begins with Step 1 (Screening).

Screening can be implemented in 3 distinct ways:

 fit a 1st order response surface (ignoring interactions),
 fit selected interactions,
 fit a complete 1st order with twist response surface.
In this experiment, we choose to fit a complete '1st order + twist' response surface/equation
so that the interactions were accounted for. The response surface model is a two-factor
interaction (2fi) model.

The response in our case is the total horizontal distance travelled by the ball in flight -
the projectile distance of the catapult. Initially, we will include 5 quantitative factors in
the experiment and conduct a 2-level experiment, followed by the selection of the 3 factors
that have the strongest interactions and effects, which we will investigate in detail in a
multi-level experiment.

Varying the factor levels allows the identification of the largest effects on the response. The
factor levels were selected from the possible setting options and by using engineering
knowledge. The tensioning peg was not selected as one of the factors due to its drastic effect
on the catapult function.

The 5 selected factors and the 2 initial levels or settings for the experiment are presented in
Table 2, with the corresponding coding level.

Table 2. Selected factors and levels


The 5 selected factors, with their coded lowest level (-1) and coded highest level (+1), are:

1. Pull-back angle (A) – the angle at which the catapult is pulled back: 160º (-1) / 180º (+1)
2. Hold time in armed position (B) – the time for which the catapult arm is kept extended at the pull-back angle: 5 sec (-1) / 15 sec (+1)
3. Elastic Twists (C) – how many twists are applied to the elastic before it is attached to the arm: 0 (-1) / 10 (+1)
4. Elastic Position - Fixing Arm (D) – the position where the elastic pin is fixed to the arm: 1 (-1) / 5 (+1)
5. Stop position (E) – the position at which the arm stops after the catapult is fired from the particular pull angle: 3 (-1) / 5 (+1)

Several noise factors were identified, which are presented in the table below (Table 3).

Table 3. Noise factors


Noise Factor: Mitigation – proposed action to reduce the noise factor
Operator: The same operator was used in all experiments to fire the catapult (includes the procedure, the way the arm is released)
Measurer: The same measurer was used in all experiments to measure where the ball landed

Elastic stretch: The elastic band was loosened prior to each shot to minimise any initial stretch
Elastic position: The elastic needs to sit on both sides of the tensioning peg (not on one side only)
Initial Position of Catapult: Each experiment caused the catapult to move, therefore the catapult was repositioned on the marked position after each run
Catapult Movement in Action: The catapult was securely held to the floor by one operator during the experiment
Aluminium Foil landing: The ball's landing mark on the aluminium foil was ironed out after each measurement so as not to introduce noise for future landings nearby

Figure 1. The operator and the measurer for the experiments – including the Catapult Experiment
setup and factors: Pull-back angle (A), Hold time in armed position (B), Elastic Twists (C), Elastic
Position - Fixing Arm (D), Stop position (E)

Based on the above described factors and the potential noise factors, the following
operational definition was applied in the experiment, which was carried out by the team
using the roles and responsibilities highlighted in Table 4.

Table 4. Team Roles and Responsibilities


Team Member: Roles and Responsibilities
Coordinator: The coordinator uses the experimental design to give the factor levels to the operator and records the results in Minitab / helps with holding down the catapult at each run
Operator: The operator has the role to fire the catapult as per the instructions of the Coordinator
Measurer: The measurer has the role to measure where the ball landed (the projectile distance) with a tape measure on the aluminium foil landing mark in front of the catapult. The measurement is taken from the base of the catapult to the centre of the dent in the aluminium foil where the ball landed.

Timekeeper: The timekeeper uses a stopwatch to count down the hold-back time in the armed position

The operational definition for the experimentation methodology was defined as below, to
allow accuracy and repeatability by controlling the noise factors:

1. The Coordinator reads the factor levels from the experimental design in Minitab
2. The Operator follows the set-up instructions for the defined levels:
a. Sets the firing parameter levels (Coordinator verifies the set-up)
b. Re-positions the catapult on the marked spot
c. Securely holds it down
d. Arms the catapult arm and loads the ball
e. Waits for fire instructions from the Timekeeper
f. Fires the catapult at the Timekeeper's instruction, in the same way for each run
g. Releases the elastic to allow stretch and resets the catapult to the neutral position
3. The Measurer measures the distance the ball travelled using the tape measure and the
instructions from the roles and responsibilities
4. The Coordinator records the measurement in Minitab
5. Repeat for each of the factor levels

Coding / standardising

Coding or standardising the analysis factors means using -1 to represent the lowest observed
value and +1 to represent the highest, whilst coding the intermediate values by linear
interpolation. This is beneficial in several ways, such as the ability to compare and
interpret the parameter estimates independently of the units and scale of the parameters.
[6] Also, the intercept becomes a meaningful number: the predicted response of the
travelled distance when all x's are at coded 0 – the 'centre' of the data. [7]
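The linear-interpolation coding described above can be sketched as a small helper; the 160º–180º values are the pull-back angle levels from Table 2.

```python
# Sketch: coding a raw factor value onto the [-1, +1] scale by linear
# interpolation between the low and high settings.
def code(raw, low, high):
    """Map a raw value to coded units: low -> -1, high -> +1."""
    return 2 * (raw - low) / (high - low) - 1

print(code(160, 160, 180))  # -1.0 (low level)
print(code(170, 160, 180))  #  0.0 (centre)
print(code(180, 160, 180))  #  1.0 (high level)
```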

Next, we need to select a factorial design that includes the number of levels and factors (5
factors with 2 levels). When selecting a design, we aim to minimise the number of runs that
we have to carry out, while still maintaining the factors list.

Figure 2. Available Factorial Designs – Minitab

For a Full Factorial design, the total number of runs is calculated as (levels)^(factors), i.e. 2^5 =
32 runs. An alternative to the full factorial is to minimise the number of runs by running a
fractional factorial design: (levels)^(factors − 1), i.e. 2^(5−1) = 16 runs, which is a ½ fraction of the full
runs. These carefully selected 16 combinations will allow us to fit the response surface
equation.
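The half-fraction construction can be sketched as follows, assuming the design generator E = ABCD that Minitab reports for this design in Section 3.1.

```python
# Sketch: building the 2^(5-1) half-fraction in coded units. The four base
# factors A-D take all 16 combinations of +/-1, and E is set by the design
# generator E = ABCD.
from itertools import product

runs = [(a, b, c, d, a * b * c * d) for a, b, c, d in product((-1, 1), repeat=4)]

print(len(runs))  # 16 runs: half of the 2^5 = 32 full factorial
```

Because E = ABCD in every run, the product ABCDE is always +1, which is exactly the defining relation I = ABCDE behind the alias structure shown later.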

2.3 Follow-up Experiment Planning


This experiment forms the second stage of a DoE and builds on the screening experiment,
where we have identified the 3 factors that have the largest impact on the response. This
experiment is a multi-level experiment with three levels for each of the three selected factors.

This experiment is designed to fit a 2nd order surface, which was not done in the screening
experiment; this will include the quadratic terms for a more accurate curved response
surface.

The aim of this experiment is to fit the 2nd order response surface that can be used to predict,
and optimise the projectile distance of the ball from the catapult.

Table 5 presents the factors and the levels – with the coding for all 3 levels, including the
middle setting, which is coded as 0.

The other 2 discarded factors are kept constant at their +1 positions: Hold time in armed
position (B) = 15 sec and Elastic Twists (C) = 10.

Table 5. Selected 3 factors and 3 levels


The 3 selected factors, with coded lowest (-1) / mid (0) / highest (+1) levels, are:

1. Pull-back angle (A): 160º / 170º / 180º
2. Elastic Position - Fixing Arm (D): 1 / 3 / 5
3. Stop position (E): 3 / 4 / 5

A Central Composite (CC) design was chosen, which gave us 17 runs in total.

The below extract from Minitab shows the CC design set up. The design contains 3 factors, on
the basis of one replicate, with 3 centre points in cube.

Central Composite Design

Factors: 3 Replicates: 1
Base runs: 17 Total runs: 17
Base blocks: 1 Total blocks: 1

Two-level factorial: Full factorial

Cube points: 8
Center points in cube: 3
Axial points: 6
Center points in axial: 0

α: 1
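The run count above can be reproduced by enumerating the design points directly; a sketch in coded units for the 3 factors:

```python
# Sketch: enumerating the 17 runs of the face-centred central composite
# design (alpha = 1): 8 cube points, 6 axial (face-centre) points and
# 3 centre-point replicates.
from itertools import product

cube = list(product((-1, 1), repeat=3))                # 8 corner points
axial = [tuple(s if i == j else 0 for j in range(3))   # 6 face centres
         for i in range(3) for s in (-1, 1)]
centre = [(0, 0, 0)] * 3                               # 3 centre replicates

design = cube + axial + centre
print(len(design))  # 17 total runs
```

With alpha = 1 the axial points sit on the faces of the cube, so all settings stay within the tested factor ranges.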

The figure below shows the face-centred Central Composite Design – where the cube is a
two-level full factorial and the star adds the face-centre points and the centre point.

Figure 3. A face-centred central composite design [8]

2.4 Prediction using the model


The variable that tells us how well our model can predict is given in Equation 2. PRESS
RMSE (where PRESS is the Predicted Residual Error Sum of Squares) is measured in the
units of the response variable.

Equation 2. PRESS RMSE = √(PRESS / Number of Runs)

Using this prediction interval, we will obtain a range in which our response will fall within a
given confidence level. To obtain the prediction, we will use Minitab.
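A minimal sketch of Equation 2, using an illustrative PRESS value rather than one from this report's regression output:

```python
# Sketch: PRESS RMSE, in the units of the response (mm here).
# The PRESS value below is illustrative only.
import math

def press_rmse(press, n_runs):
    return math.sqrt(press / n_runs)

print(press_rmse(107648, 17))  # ~79.6 mm for an assumed PRESS of 107648
```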

3 Results and Analysis

3.1 Screening Experiment Results and Analysis


After the completion of the screening experiment, the data gathered was analysed in Minitab
using statistical methods, to establish which factors had the largest influence on the firing
distance (y).

For each experiment run, we repeated the measurement 3 times and calculated the average –
as presented in Figure 4 (uncoded results in Figure 20 in the Annexes). The average was used
as the response variable. This was done to reduce measurement noise and tighten the
confidence intervals obtained, whilst still having an efficient design.

Figure 4. The Fractional Factorial Design and results (coded)

Fractional Factorial Design

Factors: 5   Base Design: 5, 16   Resolution: V
Runs: 16     Replicates: 1        Fraction: 1/2
Blocks: 1    Center pts (total): 0

Design Generators: E = ABCD

Alias Structure

I + ABCDE

A + BCDE
B + ACDE
C + ABDE
D + ABCE
E + ABCD
AB + CDE
AC + BDE
AD + BCE
AE + BCD
BC + ADE
BD + ACE

BE + ACD
CD + ABE
CE + ABD
DE + ABC
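The alias structure above follows from the defining relation I = ABCDE: an effect's alias is obtained by multiplying its letters by the defining word, with any squared letter cancelling. A sketch:

```python
# Sketch: deriving the alias of an effect under the defining relation
# I = ABCDE by symbolic multiplication (X * X = I, so repeated letters cancel).
def alias(effect, word="ABCDE"):
    """Return the alias of `effect` under the defining word `word`."""
    letters = set(effect) ^ set(word)  # symmetric difference: squares cancel
    return "".join(sorted(letters))

print(alias("A"))   # BCDE
print(alias("AB"))  # CDE
print(alias("DE"))  # ABC
```

These match the alias pairs listed by Minitab (A + BCDE, AB + CDE, DE + ABC, and so on).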

Figure 5. The Fractional Factorial Design Factors and Levels

3.1.1 Main Effects Plots


The Main Effects Plot for the screening experiment is presented in Figure 6 – where the
steepness of each line (its slope) indicates how strong the factor's effect is on the
response. The horizontal line is the overall mean response, in our case approximately
920 mm.

It is easy to observe from the plot that the factors Hold Time and Twists have very little
effect on the distance travelled by the ball, and it can therefore be concluded that these
factors can be removed from the next phase of the experiment.

The remaining 3 factors – Pull Back Angle, Elastic Position and Stop Position – appear
to have a large effect on the response, as shown by the steepness of their slopes.

Figure 6. Main effects plot - screening experiment (coded)
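The slope read off a main effects plot is simply the mean response at the high level minus the mean at the low level; a sketch with made-up distances (the report's actual effects come from Minitab's fitted model):

```python
# Sketch: computing one main effect from coded settings and responses.
settings = [-1, -1, 1, 1]          # coded levels of one factor over 4 runs
responses = [850, 870, 990, 1010]  # distances (mm), illustrative only

high = [y for x, y in zip(settings, responses) if x == 1]
low = [y for x, y in zip(settings, responses) if x == -1]
effect = sum(high) / len(high) - sum(low) / len(low)
print(effect)  # 140.0 mm: a steeper slope means a stronger main effect
```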

3.1.2 Interaction Plots


Next, the interaction plot will be analysed (Figure 7), which shows the interactions of all
5 factors with each other in 2D. From the difference in the gradients of the lines, one can
observe the interaction between the factors. Little or no interaction is present when the
lines are parallel. This is the case with Twists and Pull Back Angle, Hold Time and Pull Back
Angle, etc. This is not to be confused with the main effect – we could still have a main
effect on the response with no factor interaction, for one or both of the factors.

Figure 7. Interaction Plot - screening experiment
In contrast, when we have diverging lines, there is a strong interaction between those factors.
This is the case with Elastic Position and Stop Position, Pull Back Angle and Stop Position,
etc.
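The parallel-versus-diverging reading of an interaction plot corresponds to the interaction effect being zero or large; a sketch with illustrative 2x2 cell means:

```python
# Sketch: the interaction effect from the four cell means of a 2x2 design.
# Parallel lines in the plot mean equal slopes and a zero interaction.
means = {(-1, -1): 800, (1, -1): 900,   # slope of factor 1 at factor 2 = -1: 100
         (-1, 1): 850, (1, 1): 1150}    # slope of factor 1 at factor 2 = +1: 300

interaction = ((means[(1, 1)] - means[(-1, 1)]) -
               (means[(1, -1)] - means[(-1, -1)])) / 2
print(interaction)  # 100.0: diverging lines, a strong interaction
```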

3.1.3 Response Surface Plots


Figure 8 presents all 5 factors plotted in individual graphs, with 2 factors varying and the
others held constant – the Response Surface Plots, which are a 3D version of the interaction
plots that is easier to interpret. [3] These plots present the '1st order + twist' response surface.
From these we can identify the operating conditions that are most suitable to obtain the
optimal response distance, and they are helpful for prediction and optimisation.

[Figure 8 shows nine surface plots of the average distance against each pair of factors, with the remaining factors held at their coded 0 values (Pull Back Angle 0, Hold Time 0, Twists 0, Elastic Position 0, Stop Position 0).]
Figure 8. Surface Plot - screening experiment


From the plots, we can see that if the other factors are held at 0 (the values indicated in the
legend – equivalent to coded 0), the optimal conditions where we can obtain the greatest
travel distance (if this is the parameter level required) are:

 Pull-back angle (A) = 180º (+1)
 Hold time in armed position (B) = 15 sec (+1)
 Elastic Twists (C) = 10 (+1)
 Elastic Position - Fixing Arm (D) = Position 5 (+1)
 Stop position (E) = Position 3 (-1)

3.1.4 Pareto Charts


The Pareto Charts presented in Figure 9 and Figure 10 show the absolute values of the
standardised effects from the largest to the smallest. [4] The statistical significance level (α)
gives the reference line.

We can observe the magnitude and importance of effects – with D (Elastic Position) having
the largest effect by far, followed by A and E as main factors and interactions (DE, AE, AD,
BC, CD) up to the reference line. For α = 0.05, the reference line is 58.3 and for α = 0.15, the
reference line is 38.6.

Figure 9. Pareto Chart of the Effects (α = 0.05) - Figure 10. Pareto Chart of the Effects (α = 0.15)
screening experiment - screening experiment
Therefore, the bars that cross the line are statistically significant, at the defined α level, for our
catapult model.

The limitation of the Pareto chart is that it shows only the absolute value of each effect,
so it cannot tell us whether an effect increases or decreases the response.

Above we used two values for α – the conventional value of 0.05 (strict) and the proposed
value for the analysis of 0.15 (less strict – industry experience) – in order to help
determine which effects are statistically significant, by using P-values. If a P-value is greater
than α, the effect is not significant and can be removed; if it is less than α, the
effect is significant.

In this analysis, moving forward, α = 0.05 will be used, with the results for α = 0.15 being
presented in the annexes.
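The P-value screening described above amounts to a simple filter. A minimal sketch, using illustrative placeholder P-values rather than the report's actual Minitab output:

```python
# Hypothetical P-values for illustration only (not the report's exact output)
p_values = {"D": 0.001, "A": 0.004, "E": 0.009, "DE": 0.03, "BC": 0.12, "B": 0.45}

def significant_terms(p_values, alpha):
    """Return the terms whose P-value falls below the chosen alpha level."""
    return sorted(term for term, p in p_values.items() if p < alpha)

print(significant_terms(p_values, 0.05))  # strict level
print(significant_terms(p_values, 0.15))  # less strict level keeps BC as well
```

Relaxing α from 0.05 to 0.15 admits borderline terms such as BC in this illustration, which mirrors why the two reference lines in the Pareto charts differ.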

3.1.5 Half Normal Plots


Figure 11 and Figure 25 present the half normal plots for the 2 levels of α. Similar to
the Pareto charts, they show the absolute values of the standardised effects from the largest
to the smallest [4], testing the null hypothesis that each effect is 0. The points are plotted
against a reference line corresponding to all effects being 0, so the further a point lies from
the reference line, the larger its magnitude and statistical significance. In our case, we can see
that D has the largest effect, followed by the rest of the factors, as in the Pareto charts. The
blue dots are statistically insignificant and are not labelled with the effect.

As with the Pareto charts, these plots cannot tell us whether a factor increases or decreases the
response.

Figure 11. Half Normal Plot (α = 0.05) - screening experiment

In the analysis above we used 2 different α levels, 5% and 15%, with Lenth's
pseudo-standard error (PSE); the larger level allows all potentially
statistically significant effects to be captured (Lenth's PSE = 22.6875).
Two different types of error are possible with half normal plots – false positives (type 1 errors)
and false negatives (type 2 errors) – i.e. points that are off the line by chance, and small real
effects that cannot be isolated from actual random effects.
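Lenth's PSE mentioned above can be computed directly from the effect estimates. A minimal sketch with hypothetical effect values (the report's actual effects are not listed here, so this does not reproduce the PSE of 22.6875):

```python
from statistics import median

def lenth_pse(effects):
    """Lenth's pseudo-standard error for unreplicated factorial effects:
    s0 = 1.5 * median(|effect|); PSE = 1.5 * median of the |effects|
    smaller than 2.5 * s0 (trimming out the likely real effects)."""
    abs_effects = [abs(e) for e in effects]
    s0 = 1.5 * median(abs_effects)
    trimmed = [e for e in abs_effects if e < 2.5 * s0]
    return 1.5 * median(trimmed)

# Hypothetical effect estimates: one large real effect among noise-level effects
effects = [1, -2, 3, -4, 5, 100]
print(lenth_pse(effects))  # 4.5
```

The trimming step is what makes the estimator robust: the single large effect (100) is excluded before the noise level is estimated, so real effects do not inflate the PSE.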

3.1.6 Conclusions for Screening Experiment Results and Analysis


The overall aim of this section, analysing the results of the screening experiment, was to
confirm which 2 of the 5 factors have the least effect, and which 3 factors are most
statistically significant and should be kept for the follow-up multilevel experiment.

In conclusion, based on all the plots analysed above, we can confirm that the Number of
Twists and Hold Time have a negligible effect on the result and can be removed from the
analysis.

Elastic Position is confirmed as the factor with the most statistically significant effect, and Pull
Back Angle and Stop Position are of similar magnitude; all 3 will be used in the Follow-up
Experiment.

3.2 Follow-up Experiment Results and Analysis


Figure 12 presents the results of the follow-up experiment. The runs were not randomised,
but the same operational definition and process were followed as for the screening
experiment.

The average of three measurements was used as the response variable. This was done in
order to reduce run-to-run noise and tighten the confidence intervals obtained, whilst still
keeping an efficient design.
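Averaging replicate shots shrinks the standard error of the response by a factor of √n. A small illustration, assuming a hypothetical single-shot standard deviation (not a value measured in the report):

```python
from math import sqrt

shot_sd = 60.0  # assumed standard deviation of a single shot, in mm (hypothetical)
n = 3           # shots averaged per run, as in this experiment

se_mean = shot_sd / sqrt(n)  # standard error of the averaged response
print(round(se_mean, 1))     # roughly 34.6 mm, about 42% less scatter per run
```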

Figure 12. Face Centred Composite Experiment Model

3.2.1 Deleted Residuals Plots


The normal probability plot (Figure 13) does not show a good linear pattern, and it is apparent
that some outliers are present. These must be analysed before we can accept the
prediction model. Removing the outliers can lead to a better model and residuals closer to a
normal (Gaussian) distribution.

Figure 13. Normal Probability Plot – Follow-up Experiment


Figure 14 presents the deleted residuals plot, and it can be observed that the spread
above and below the line is not constant, and therefore not healthy. After removing the outliers,
this should improve.

Figure 14. Deleted Residual vs Fit Plot – Follow-up Experiment

3.2.2 Pareto Plots


Figure 15 presents the Pareto chart of the standardised effects; for α = 0.05 we can
observe that 2 terms – AA and CC – fall below the reference line of 2.36.

Figure 15. Pareto Chart – Follow-up Experiment

3.2.3 Regression Analysis


From the regression analysis in Table 6 below, we can see the same 2 terms as above –
AA (Pull-Back Angle (A)*Pull-Back Angle (A)) and CC (Elastic Position (D)*Elastic Position
(D)) – with P-values well above the 5% level (α = 0.05). In practice this means that for those 2
terms we cannot reject the null hypothesis, so they are not statistically
significant.

There are 2 more terms – AC and BB – that are close to the limit, but a decision was
made to leave those in and retest.

The VIF values for the coded coefficients are all 1.00 (no multicollinearity among the predictors),
except for the 3 quadratic terms, which have a VIF of 1.54.

Table 6. Response Surface Regression


Response Surface Regression: Average Distance versus ... Position (D)

Model Summary

S        R-sq    R-sq(adj)  PRESS   R-sq(pred)
58.6938  98.25%  96.01%     512426  62.86%

Coded Coefficients

Term                                       Coef    SE Coef  95% CI            T-Value  P-Value  VIF
Constant                                   1017.6  25.1     (958.2, 1076.9)   40.52    0.000
Pull-Back Angle (A)                        147.2   18.6     (103.3, 191.1)    7.93     0.000    1.00
Stop Position (E)                          -95.2   18.6     (-139.1, -51.3)   -5.13    0.001    1.00
Elastic Position (D)                       269.8   18.6     (225.9, 313.7)    14.54    0.000    1.00
Pull-Back Angle (A)*Pull-Back Angle (A)    -29.9   35.9     (-114.7, 54.9)    -0.83    0.432    1.54
Stop Position (E)*Stop Position (E)        -128.2  35.9     (-213.0, -43.4)   -3.58    0.009    1.54
Elastic Position (D)*Elastic Position (D)  0.1     35.9     (-84.7, 84.9)     0.00     0.998    1.54
Pull-Back Angle (A)*Stop Position (E)      -107.3  20.8     (-156.4, -58.2)   -5.17    0.001    1.00
Pull-Back Angle (A)*Elastic Position (D)   66.5    20.8     (17.4, 115.5)     3.20     0.015    1.00
Stop Position (E)*Elastic Position (D)     -114.8  20.8     (-163.9, -65.7)   -5.53    0.001    1.00

Regression Equation in Uncoded Units

Average Distance = -18489 + 149 Pull-Back Angle (A) + 2927 Stop Position (E) - 201 Elastic Position (D)
  - 0.299 Pull-Back Angle (A)*Pull-Back Angle (A) - 128.2 Stop Position (E)*Stop Position (E)
  + 0.03 Elastic Position (D)*Elastic Position (D) - 10.73 Pull-Back Angle (A)*Stop Position (E)
  + 3.32 Pull-Back Angle (A)*Elastic Position (D) - 57.4 Stop Position (E)*Elastic Position (D)

Fits and Diagnostics for Unusual Observations

Obs  Average Distance  Fit     SE Fit  95% CI           Resid  Std Resid  Del Resid  HI        Cook's D  DFITS
4    646.7             582.8   52.3    (459.0, 706.5)   63.9   2.40       5.32       0.794718  2.24      10.4736  R
5    951.7             1018.4  52.3    (894.7, 1142.1)  -66.8  -2.51      -7.35      0.794718  2.44      -14.4645 R

R = Large residual

As presented in the methodology, the PRESS RMSE will be calculated using Equation 2. We can
observe that PRESS is quite high and the number of runs quite low – which means we need
to be cautious when using the PRESS RMSE; however, it will be recalculated after the re-run and it
is expected to decrease.

PRESS RMSE = √(PRESS / Number of Runs) = √(512426 / 17) = 173.6165 mm (Model Confidence Interval)
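Equation 2 above is straightforward to compute. A minimal sketch using the PRESS and run count from Table 6:

```python
from math import sqrt

def press_rmse(press, n_runs):
    """Root-mean-square prediction error derived from the PRESS statistic."""
    return sqrt(press / n_runs)

print(round(press_rmse(512426, 17), 4))  # ~173.6166 mm for the initial model
```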

3.2.4 Regression Analysis – Re-run


After the re-run, excluding the 2 terms AA and CC, we can see improvements in Table 7 – all
the P-values are less than 0.05 and the VIFs are all 1.00. The Annexes present the
same analysis for α = 0.15, which shows the same results.

From the model summary, we can see that the R-sq value dropped slightly (from 98.25% to 98.06%),
while PRESS reduced from 512426 to 433896.

Recalculating the PRESS RMSE, we now obtain an error margin of 159 mm, slightly
improved from the previous 173 mm; the prediction interval is therefore tighter.

Table 7. Response Surface Regression – Re-run

Response Surface Regression: Average Distance versus ... Position (D)

Model Summary

S        R-sq    R-sq(adj)  PRESS   R-sq(pred)
54.5841  98.06%  96.54%     433896  68.55%

Coded Coefficients

Term                                      Coef    SE Coef  95% CI            T-Value  P-Value  VIF
Constant                                  1009.0  20.6     (962.4, 1055.7)   48.91    0.000
Pull-Back Angle (A)                       147.2   17.3     (108.1, 186.2)    8.53     0.000    1.00
Stop Position (E)                         -95.2   17.3     (-134.2, -56.1)   -5.51    0.000    1.00
Elastic Position (D)                      269.8   17.3     (230.8, 308.9)    15.63    0.000    1.00
Stop Position (E)*Stop Position (E)       -143.5  26.9     (-204.4, -82.7)   -5.34    0.000    1.00
Pull-Back Angle (A)*Stop Position (E)     -107.3  19.3     (-150.9, -63.6)   -5.56    0.000    1.00
Pull-Back Angle (A)*Elastic Position (D)  66.5    19.3     (22.8, 110.1)     3.44     0.007    1.00
Stop Position (E)*Elastic Position (D)    -114.8  19.3     (-158.4, -71.1)   -5.95    0.000    1.00

Regression Equation in Uncoded Units

Average Distance = -10104 + 47.66 Pull-Back Angle (A) + 3049 Stop Position (E) - 200 Elastic Position (D)
  - 143.5 Stop Position (E)*Stop Position (E) - 10.73 Pull-Back Angle (A)*Stop Position (E)
  + 3.323 Pull-Back Angle (A)*Elastic Position (D) - 57.40 Stop Position (E)*Elastic Position (D)

Fits and Diagnostics for Unusual Observations

Obs  Average Distance  Fit     SE Fit  95% CI           Resid  Std Resid  Del Resid  HI     Cook's D  DFITS
2    711.7             764.0   48.1    (655.3, 872.7)   -52.4  -2.02      -2.58      0.775  1.76      -4.7931  R
3    590.0             641.9   48.1    (533.2, 750.6)   -51.9  -2.00      -2.54      0.775  1.73      -4.7102  R
4    646.7             588.7   48.1    (480.0, 697.4)   58.0   2.24       3.17       0.775  2.16      5.8834   R
5    951.7             1024.4  48.1    (915.7, 1133.1)  -72.7  -2.81      -7.53      0.775  3.40      -13.9661 R
8    976.7             1031.7  48.1    (923.0, 1140.4)  -55.0  -2.13      -2.84      0.775  1.95      -5.2719  R

R = Large residual

Figure 16. Normal Probability Plot, Deleted Residuals, Pareto Chart and Half Normal – Follow-up
Experiment Re-run
Figure 16 presents the graphs after the 2 terms were removed. It can be observed
that the linear pattern has improved and the deleted residuals versus fits plot is healthier – the
spread above and below the line is roughly constant. All the remaining terms are statistically
significant.

3.2.5 Response Surface Models


Figure 26 presents the response surface plots. We can observe that these plots give a
more accurate prediction model, as they include the quadratic terms and the fitted surface
is curved.

[Figure: surface plots of Average Distance for each pair of Pull-Back Angle (A), Stop Position (E) and Elastic Position (D), with the remaining factor held at coded 0.]

Figure 17. Surface Plots – Follow-up Experiment


From this 3D view, we can see the effect on the response of changing two factors
simultaneously. It is visually apparent that a Pull Back Angle of 180º and an Elastic Position
of 5 give the largest projectile distance.

3.2.6 Interaction Plots


Figure 18 presents the interaction plots for the follow-up experiment, and we can observe the
curvature of the interaction traces, which allows for a more accurate prediction model.

Figure 18. Interaction Plots – Follow-up Experiment


We can see little interaction between the Pull-back angle and the Elastic Position and strong
interaction between Elastic Position and Stop Position.

3.2.7 Contour Plots


Figure 28 and Figure 19 present the contour plots, which are a visual aid that can be used to
predict and optimise the response.

We can observe that we need a pull-back angle as high as possible to maximise the
response, as well as a high elastic position setting. As a particular example, if we were to set
A = 180º and D = 5 we would have a firing distance larger than 1400 mm (for E = 4), and
in contrast, the response would be less than 600 mm with all factors at their low (coded -1) levels.
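These contour-plot readings can be cross-checked against the re-run regression equation in uncoded units (Table 7). A minimal sketch evaluating two corner settings of the design space:

```python
def avg_distance(a, e, d):
    """Re-run regression equation in uncoded units (Table 7):
    a = Pull-Back Angle, e = Stop Position, d = Elastic Position."""
    return (-10104 + 47.66 * a + 3049 * e - 200 * d
            - 143.5 * e**2 - 10.73 * a * e
            + 3.323 * a * d - 57.40 * e * d)

print(avg_distance(180, 4, 5) > 1400)  # high corner: True
print(avg_distance(160, 3, 1) < 600)   # low corner: True
```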

[Figure: contour plots of Average Distance for each factor pair in coded units; contour bands range from < 600 mm to > 1400 mm; the remaining factor is held at coded 0.]

Figure 19. Contour Plots – Follow-up Experiment

3.3 Validation: Predicting & Confirming Results – Prediction and Optimisation

In order to test the model, trial runs were conducted using a few combinations of settings
that were not previously used in the multilevel experiment. Minitab's prediction tool was used to
test the model by predicting the firing distance for each set-up from the trial runs; the results
are presented in Table 8. From the PRESS RMSE, we know we can predict with +/- 159 mm accuracy.

Settings

Variable              Trial 1  Trial 2  Trial 3
Pull-Back Angle (A)   175      165      179
Stop Position (E)     4        3        3
Elastic Position (D)  3        5        5

Prediction

Fit      SE Fit   95% CI              95% PI
1082.63  22.3633  (1032.04, 1133.22)  (949.192, 1216.07)
1184.83  39.0763  (1096.44, 1273.23)  (1032.98, 1336.69)
1634.12  45.9448  (1530.18, 1738.05)  (1472.72, 1795.51)

Table 8. Prediction vs. Actual

Predicted Actual

1082.63 1132.12
1184.83 1255.19
1634.12 1596.65

From the table above, we can observe that the predicted and actual distances are close,
within the PRESS RMSE model confidence expectation and the 95% prediction intervals. It can
also be observed that the prediction intervals widen with firing distance, so the
accuracy decreases with distance.
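The comparison above can be checked numerically: each prediction error in Table 8 falls within the ±159 mm PRESS RMSE margin. A minimal sketch using the table values:

```python
press_rmse = 159  # mm, error margin from the re-run model

# (predicted, actual) pairs from Table 8
pairs = [(1082.63, 1132.12), (1184.83, 1255.19), (1634.12, 1596.65)]

for predicted, actual in pairs:
    error = abs(predicted - actual)
    print(round(error, 2), error < press_rmse)  # all errors within the margin
```

The largest error is about 70 mm (trial 2), comfortably inside the 159 mm margin.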

4 Discussion
As identified at the beginning of this report, the scope was to use DoE and RSM statistical
tools and Minitab to study the catapult and the various factors that have an effect on the
projectile firing distance, leading to the development of a prediction model.

As seen in the Results section, it is important to follow the outlined methodology from the
DoE to enable the construction of a proper understanding of the system and the factors that
influence the response.

The methodology was appropriate because it enabled us to use the right statistical tools
appropriate for the problem, which included the selection of the most important factors from
the screening experiment, followed by in-depth model development and optimisation to
enable prediction of the response. This methodology is very powerful in a variety of scenarios
that could be under investigation or optimisation.

None of the steps outlined in the methodology should be skipped: under the false
impression of saving time, jumping over the screening experiment straight into the follow-up
3-factor experiment can be very costly.

The report presents graphs and descriptions that were duplications of other plots conveying
a similar message (such as the Pareto charts) and were not essential. This was done for my
personal benefit – to explore all the angles and options for looking at the data and analysing the
problem – rather than for the benefit of the marker. In turn, this led to a large report and a large
number of graphs in the annexes.

Whilst running the experiments, the team (some members presented in Figure 1) was aligned
on the importance of the operational definition and planning stage, which led to us lagging
behind other teams and having to rush the follow-up experiment. It would have
been ideal to have more time to repeat and replicate the follow-up experiment in randomised
runs and to obtain greater accuracy. Even with our extensive work on the operational
definition, some of the observations still had large residuals.

For me, in particular, the most important learning from this module and PMA is the capacity
to use the Minitab DoE and RSM tools and to plan and run an experiment following a robust
methodology. Although the catapult case was an academic exercise, I feel that I have learned
a lot from this example and the PMA and I would be confident to apply it into a future
project.

I feel that I have gained a lot from attending this module, and even if I will not be using these
DoE and RSM tools in the foreseeable future, there is always the possibility of
returning to engineering to support BB work. Even beyond this, a detailed
understanding of advanced statistical tools will be beneficial in my future work, as I am now
starting to see opportunities to apply regression analysis and RSM to
leadership data and the Pulse survey data.

The report could have been improved had the team done more runs;
however, it is important to be able to use the data available and to try to fit a model with a
relatively small data set. This is particularly true if the runs are very expensive or take a lot
of time.

There is scope for improvement in exploring more designs and model fit selection, as
a theoretical exercise.

On a cautionary note, it is important to always use engineering judgement when making the
initial factor selection – without it, some very important factors could be missed, or
others with a drastic effect, like the tensioning peg in our case, could be selected and
would skew the model.

5 Conclusions & Recommendations
Operational definition is paramount for any DoE to achieve good results and accurate
prediction models – it is therefore recommended that this step is not rushed and that proper
attention is given to the steps and the method, as well as proving that it works – that it is both
repeatable and reproducible through a number of experiments.

Noise factors are equally important to note and observe, and proper mitigation
actions against them should be captured in the operational definition, making it more robust.

We can draw the following main conclusions from the results: the screening experiment is
a very powerful way to identify which of the initial factors studied have the largest and smallest
effects on the response (projectile distance). In this study, it was concluded from that section
that Hold Time and the Twists have very little effect, and they were removed from the follow-up
experiment. The three remaining factors have a large effect – which is easily seen in the main
effects plot.

The conclusion from the follow-up experiment is not to accept a response surface and a
model until detailed analysis has been carried out – looking at the regression analysis – as it
might be the case that insignificant terms need to be removed, outliers examined, and the model re-fitted.

In order to bring the PRESS RMSE down, it is recommended that more
experimental runs are performed – which was not possible here due to time constraints. In the future, it is
recommended that repeats be planned in.

Recommendations for future work include expanding the application of these tools in either
engineering projects or HR projects. I considered using the RSM tools for the
prediction of different leadership behavioural test results from Pulse scores; however, for the
purpose of this report, that would not have enabled me to apply the DoE tools, so the
catapult was used instead.

The use of the tools was relatively easy and the data interpretation very visual and
straightforward.

I particularly liked the prediction tool – which enabled us to predict the firing distance
without needing to actually carry out the experiment. Given a good model, it
is a very efficient way to predict responses and to run tests.

6 References

[1] Minitab 17 Support, "What is a main effects plot?" [Online]. Available: [Link]statistics/anova/basics/what-is-a-main-effects-plot/. [Accessed May 2018].

[2] Minitab Express Support, "Interpret the key results for Interaction Plot" [Online]. Available: [Link]to/modeling-statistics/anova/how-to/interaction-plot/interpret-the-results/. [Accessed May 2018].

[3] Minitab 17 Support, "Interpret the key results for Surface Plot" [Online]. Available: [Link]statistics/using-fitted-models/how-to/surface-plot/interpret-the-results/key-results/. [Accessed May 2018].

[4] Minitab 17 Support, "Effects plots for Analyze Factorial Design" [Online]. Available: [Link]to/modeling-statistics/doe/how-to/factorial/analyze-factorial-design/interpret-the-results/all-statistics-and-graphs/effects-plots/. [Accessed May 2018].

[5] Minitab 17 Support, "Interpret the key results for Analyze Response Surface Design" [Online]. Available: [Link]and-how-to/modeling-statistics/doe/how-to/response-surface/analyze-response-surface-design/interpret-the-results/interpret-the-results/?SID=129050. [Accessed May 2018].

[6] Minitab 18 Support, "What is the difference between coded units and uncoded units?" [Online]. Available: [Link]to/modeling-statistics/doe/supporting-topics/basics/coded-units-and-uncoded-units/. [Accessed May 2018].

[7] University of Bradford, "BB3 Design of Experiments and Response Surface Methods Course Notes," 2015.

[8] RecurDYN, "Central Composite Design (CCD)" [Online]. Available: [Link] [Link]. [Accessed May 2018].

7 Annexes and Appendices

Appendix A – Main Minitab file

catapult data 2 [Link]

Appendix B – Interpreting R2, R2 adj, PRESS & PRESS RMSE in DOE –

Appendix C – Other graphs and calculations
Uncoded

Figure 20. The Fractional Factorial Design and results (not coded)

Figure 21. The Fractional Factorial Design Factors and Levels (not coded)

Figure 22. Main effects plot - screening experiment (un-coded)

Figure 23. Interaction Plot - screening experiment

[Figure: surface plots of Average distance for every factor pair in uncoded units; hold values: Pull Back Angle 170, Hold Time 10, Twists 5, Elastic Position 3, Stop Position 4.]

Figure 24. Surface Plot - screening experiment

Figure 25. Half Normal Plot (α = 0.15) - screening experiment

[Figure: surface plots of Average Distance for each pair of Pull-Back Angle (A), Stop Position (E) and Elastic Position (D) in uncoded units; hold values: Pull-Back Angle (A) 170, Stop Position (E) 4, Elastic Position (D) 3.]

Figure 26. Surface Plots – Follow-up Experiment

Figure 27. Interaction Plots – Follow-up Experiment

[Figure: contour plots of Average Distance for each factor pair in uncoded units; contour bands range from < 600 mm to > 1400 mm; hold values: Pull-Back Angle (A) 170, Stop Position (E) 4, Elastic Position (D) 3.]

Figure 28. Contour Plots – Follow-up Experiment

Figure 29. Normal Probability Plot – Follow-up Experiment Re-run

Figure 30. Deleted Residual vs Fit Plot – Follow-up Experiment Re-run

Figure 31. Pareto Chart – Follow-up Experiment – Re-run

At α = 0.15
