NAVAL POSTGRADUATE SCHOOL
MONTEREY, CALIFORNIA

THESIS

by

Terence A. Caliguire

June 2009
The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.

Approved for public release; distribution unlimited.

ABSTRACT
Tobyhanna Army Depot (TYAD) is required through Department of Defense (DoD) Lean initiatives and
directives to reduce the cycle time of its repair and overhaul lines. The activities involved at DoD repair and overhaul
depot facilities are a multi-billion dollar expenditure within the DoD budget. The DoD, in an attempt to reduce
expenditures, has focused on Lean manufacturing as an operational strategy oriented toward achieving the shortest
possible cycle time by eliminating waste across all depot systems and processes. We establish a discrete event
simulation model to study the AIM–9 Sidewinder Missile repair process line, specifically the repair of the Guidance
and Control Section (GCS) component of the missile. Currently TYAD does not employ a computer simulation
model to support the leaning technique for its repair and overhaul processes. This thesis is the first attempt to model
the Sidewinder Repair Line with a computer-aided discrete event simulation. TYAD will implement results from this
analysis to help reduce cycle time and garner insights into current policies and procedures employed on the
Sidewinder Repair Line. TYAD has identified potential for future use of this analysis by employing the technique of
discrete event simulation to augment its DoD-mandated Leaning procedures.
SUBJECT TERMS: Discrete Event Simulation, Sidewinder Repair Line Model
Approved for public release; distribution unlimited
Terence A. Caliguire
Major, United States Army
B.S., Clemson University, 1994
from the NAVAL POSTGRADUATE SCHOOL
Robert F. Dell
Chairman, Department of Operations Research
TABLE OF CONTENTS
I. INTRODUCTION........................................................................................................1
A. OVERVIEW.....................................................................................................1
B. BACKGROUND ..............................................................................................2
1. Tobyhanna Army Depot......................................................................2
2. Sidewinder Missile Repair Line..........................................................2
3. AIM–9 Sidewinder Missile ..................................................................3
4. Discrete Event Simulation...................................................................3
5. Computer Aided Modeling and Simulation ......................................4
6. Past Approaches at TYAD ..................................................................4
C. HOW WE ARE HELPING.............................................................................6
II. TYAD SIDEWINDER REPAIR LINE......................................................................7
A. INTRODUCTION............................................................................................7
B. SIDEWINDER REPAIR LINE FLOOR LAYOUT AND REPAIR
FLOW ...............................................................................................................9
III. DEVELOPING THE MODEL .................................................................................15
A. ARENA ...........................................................................................................15
B. SIDEWINDER REPAIR LINE MODEL (SRLM) MODULES................15
1. Phase One—GCS Arrival and Diagnostic Testing .........................16
2. Phase Two—Pre-Final Repair and Testing.....................................18
3. Phase Three—Final Testing..............................................................20
C. SIDEWINDER REPAIR LINE MODEL (SRLM) DATA MODULES ...21
D. DATA COLLECTION FOR INPUT INTO THE SRLM ..........................23
E. SRLM ASSUMPTIONS ................................................................34
F. COMPLETE SRLM FLOWCHART ..........................................................34
IV. ANALYSIS, VALIDATION, AND RESULTS .......................................................35
A. APPROACH...................................................................................................35
B. BASE ANALYSIS..........................................................................................36
C. VALIDATION................................................................................................38
D. INCREASE PROCESS TIME DISTRIBUTIONS .....................................38
E. RESOURCE FACTOR ANALYSIS ............................................................43
F. INCREASE ARRIVALS ...............................................................................48
G. SIMULATION OPTIMIZATION ...............................................................52
V. CONCLUSIONS AND FUTURE RESEARCH......................................................59
APPENDIX A. SRLM ANIMATION.........................................................................61
APPENDIX B. TREES AND REGRESSION MODELS .........................................63
APPENDIX C. OPTQUEST RESULTS ....................................................................67
LIST OF REFERENCES ......................................................................................................69
INITIAL DISTRIBUTION LIST .........................................................................................71
LIST OF FIGURES
Figure 1. AIM–9 Sidewinder Missile with Guidance and Control Section (GCS)
circled [After (U.S. Air Force, 2007)]. ..............................................................8
Figure 2. Components of the AIM–9 Sidewinder Missile Guidance and Control
Section (GCS) [From (Kopp, 1994)]. ................................................................8
Figure 3. U.S. Navy Jet employing a Sidewinder Missile [From (U.S. Navy Digital
Imagery, 2005)]. ...............................................................................9
Figure 4. TYAD Sidewinder Repair Line Layout, with black circled letters
designating key process stations [After (Tobyhanna Army Depot Industrial
Modernization Division, 2008)].........................................................................9
Figure 5. This picture shows the Guidance and Control Section (GCS) Shell Rack
[From (Tobyhanna Army Depot, 2007)]. ........................................................12
Figure 6. This picture shows a Rate Table Station on the Sidewinder Repair Line
Floor [From (Tobyhanna Army Depot, 2007)]................................................12
Figure 7. This picture shows an Electronics Technician repairing a Guidance and
Control Section (GCS) of the Sidewinder Missile [From (Tobyhanna
Army Depot, 2007)].........................................................................................13
Figure 8. Flowchart Modules from Create module (GCS Arrival) through 4044 Test
Process module in Phase One (GCS Arrival and Diagnostic Testing). ...........16
Figure 9. Flowchart Process modules Diagnostic LF through Diagnostic Rate Table
in Phase One (GCS Arrival and Diagnostic Testing). .....................................17
Figure 10. Flowchart Decide modules Clean Room and Painting, with the Seeker
Repair Process module. End of Phase One (GCS Arrival and Diagnostic
Testing). ...........................................................................................................18
Figure 11. Flowchart modules Separate GCS Shell to GCS Shell To Paint Room and
No Painting through Rate Table, in Phase Two (Pre-final Repair and
Testing). ...........................................................................................................19
Figure 12. Flowchart modules for Paint Room Process module and Need to Batch
Decide module. The GCS is in its original or newly painted shell and
Phase two is complete......................................................................................20
Figure 13. Flowchart modules from Pre Final Assembly through Exit Repair Line.
This flows from the start to the end of Phase Three (Final Testing). ..............21
Figure 14. Clean Room resource capacity schedule for 16 hour work day (Arena
screen shot). .....................................................................................................23
Figure 15. Histogram for Leak and Flow Cycle Time Data Set........................................26
Figure 16. Histogram for Boresite Cycle Time Data set...................................................27
Figure 17. Histogram for Rate Table Cycle Time Data Set. .............................................27
Figure 18. This graph shows a screen shot of Arena’s Input Analyzer Histogram of
the Leak and Flow Data Set. The Y-axis is frequency and the X-axis is
Leak and Flow Cycle Time. The superimposed blue line is the best fit plot
of the Gamma distribution listed in Table 4 below..........................................29
Figure 19. This graph shows a screen shot of Arena’s Input Analyzer Histogram of
the Boresite Data Set. The Y-axis is frequency and the X-axis is Boresite
Cycle Time. The superimposed blue line is the best fit plot of the
Lognormal distribution listed in Table 5 below...............................................30
Figure 20. This graph shows a screen shot of Arena’s Input Analyzer Histogram of
the Rate Table Data Set. The Y-axis is frequency and the X-axis is Rate
Table Cycle Time. The superimposed blue line is the best fit plot of the
Beta distribution listed in Table 7 below. ........................................................31
Figure 21. Arena screen shot of complete SRLM flowchart.............................................34
Figure 22. Utilization rates of “normal operating” Sidewinder Repair Line resources. ...37
Figure 23. JMP Partition Tree with three splits for 10 percent increase simulation run...41
Figure 24. JMP Summary of Fit and Sorted Parameter regression estimates for 10
percent increase simulation run. ......................................................................42
Figure 25. JMP Partition Tree with three splits for 20 percent increase simulation run...42
Figure 26. JMP Summary of Fit and Sorted Parameter regression estimates for 20
percent increase simulation run. ......................................................................43
Figure 27. JMP Partition Tree with three splits of Leak and Flow, Clean Room, and
Induction resources. .........................................................................................45
Figure 28. JMP Summary of Fit and Sorted Parameter regression estimates for
resource factor regression model .....................................................................46
Figure 29. JMP Interaction Profiler for resource factor regression analysis. Circled in
black is the most significant interaction...........................................................47
Figure 30. JMP Prediction Profiler for resource factor regression analysis......................48
Figure 31. Mean cycle time per increased arrival distributions. .......................................50
Figure 32. Clean Room utilization rates per increased arrival distributions. ....................50
Figure 33. GCSs in the Clean Room queue per replication length (at one, five and ten
years) highlighting triangular (1,4,7), (1,4,8), and (1,4,9) distributions..........51
Figure 34. GCS mean cycle time per replication length (at one, five and ten years)
highlighting triangular (1,4,7), (1,4,8), and (1,4,9) distributions.....................51
Figure 35. SRLM animation screen shot from Arena. Mimics Sidewinder floor layout
with GCSs (silver and red) moving through the repair process. Red GCSs
signify the GCS shell visited the Paint Room..................................................61
Figure 36. Summary of Fit, Sorted Parameter regression estimates, and Partition Tree
for 30 percent increase simulation run. Mean cycle time of 2.8 days, R-
square of 0.98, and the Clean Room triangular distribution parameters of
max, mode, and min are the most significant factors affecting mean cycle
time. .................................................................................................................63
Figure 37. Summary of Fit, Sorted Parameter regression estimates, and Partition Tree
for 40 percent increase simulation run. Mean cycle time of 3.0 days, R-
square of 0.98, and the Clean Room triangular distribution parameters of
max, mode, and min are the most significant factors affecting mean cycle
time. .................................................................................................................64
Figure 38. Summary of Fit, Sorted Parameter regression estimates, and Partition Tree
for 50 percent increase simulation run. Mean cycle time of 3.2 days, R-
square of 0.98, and the Clean Room triangular distribution parameters of
max, mode, and min are the most significant factors affecting mean cycle
time. .................................................................................................................65
LIST OF TABLES
Table 1. Resource list, capacity, schedule name and rule for the base SRLM. .............22
Table 2. Process modules that compete for like resources. ...........................................22
Table 3. Summary Statistics for Leak Flow, Boresite, and Rate Table data sets ..........26
Table 4. Gamma distribution expression for Leak and Flow data set............................29
Table 5. Lognormal distribution expression for Boresite data set. ................................30
Table 6. Empirical distribution expression for Boresite data set. ..................................31
Table 7. Beta distribution expression for Rate Table data set. ......................................32
Table 8. ARENA Distribution summary for all flowchart modules in SRLM. .............33
Table 9. Base statistics (mean cycle time, throughput per year, and WIP) of
“normal operating” Sidewinder Repair Line ...................................................36
Table 10. Top five Process station mean queue waiting times under “normal
operating” Sidewinder Repair line...................................................................38
Table 11. Induction Process distributions per percent above mean.................................39
Table 12. Clean Room Process distribution parameters (min, mode, and max) per
percent above mean..........................................................................................39
Table 13. Process station Resource high and low bounds for NOLH matrix. .................44
Table 14. First ten scenarios (of 33) of the NOLH matrix for Resource factor
analysis.............................................................................................................44
Table 15. Summary statistics for the 33 resource factor scenarios..................................45
Table 16. Scenarios for GCS increase arrival distribution per day..................................49
Table 17. Base statistics (mean cycle time, standard deviation of mean cycle time,
throughput per year, and WIP) for increased arrival distributions. .................49
Table 18. Baseline SRLM resource configuration (22 total resources + one Painter).....53
Table 19. Current, lower and upper resources bounds for optimization..........................54
Table 20. Top 10 resource allocations, based on lowest mean cycle time and sum of
resources “no more than 16.”...........................................................................56
Table 21. Top 20 resource allocations based on lowest mean cycle time and sum of
resources “no more than 24.”...........................................................................56
Table 22. Top resource allocations above (lower mean cycle time) base case
allocation highlighted in gray ..........................................................................57
Table 23. Resource allocations below (higher mean cycle time) base case allocation
highlighted in gray. Yellow highlights are best allocations with two or
fewer resources from the base case of 22. ..........................................................58
Table 24. Top 50 resource allocation results from OptQuest optimization (base case
highlighted in gray)..........................................................................................67
LIST OF ACRONYMS AND ABBREVIATIONS
EXECUTIVE SUMMARY
The mean cycle time for the TYAD Sidewinder repair line under current
operating conditions is 2.35 days. The repair line should repair 476 GCSs per year. The
repair line operates far below maximum capacity. Workers at ten of the eleven stations
have a less than 30 percent utilization rate. Workers in the Clean Room have the highest
utilization rate at 54 percent. The process times at the Clean Room have the greatest
impact on the mean cycle time and reductions in these times would lead to the greatest
decrease in the mean cycle time. Doubling the GCS arrival rate puts the repair line at full
operating capacity. Re-allocation of the current workforce to an optimal configuration
will reduce mean cycle time by less than 1 percent. TYAD could reduce the workforce at
the repair line by 27 percent and only experience a 1.9 percent increase in mean cycle
time.
ACKNOWLEDGMENTS
I would like to thank my thesis advisors LTC Rob Shearer and COL Kirk Benson
for their tireless efforts, consistent mentorship, and unwavering dedication to the Army
ORSA community.
I would like to thank the men and women of the Tobyhanna Army Depot who
work tirelessly to support the Armed Services.
I would like to thank my fellow classmates for their professionalism and team-first
effort.
And to my wife, Holly, thanks for all the support and unconditional love. You are
my inspiration.
I. INTRODUCTION
A. OVERVIEW
TYAD will implement results from this analysis to help reduce cycle time and
garner insights into current policies and procedures employed on the Sidewinder Repair
Line. TYAD has identified potential for future use of this analysis by employing the
technique of discrete event simulation to augment its DoD-mandated Leaning procedures
and help reduce cycle time throughout all applicable repair and overhaul lines.
B. BACKGROUND
The Sidewinder Missile Repair Line is part of TYAD’s Tactical Missile Facility.
The repair line facility is a 21,000 square-foot facility, with a Clean Room for servicing
and repairing sensitive electronic components. The facility employs over 41 multi-skilled
and cross-trained electronics specialists. The United States (U.S.) Air Force, U.S. Navy
and several foreign nations that employ the Sidewinder Missile all send their
inoperable missiles to TYAD for repair. Age, excessive training use, environmental
damage, weather, excessive exposure to harsh climates, and vibration damage are some
of the reasons for depot-level repair.
3. AIM–9 Sidewinder Missile
The AIM–9 Sidewinder Missile is a heat-seeking, short-range, air-to-air missile
employed by fighter aircraft of the U.S. Air Force and Navy, and select allies. The
Sidewinder is a simple weapon, designed with the ability for rapid upgrade to the latest
technology. There have been various versions of the Sidewinder, which began with its
first successful test (AIM–9A) in 1953. The initial production version, designated AIM–
9B, entered operational use in 1956. The AIM–9L model was the first Sidewinder with
the ability to attack from all angles. The AIM–9M has the all-aspect capability of the
AIM–9L model while providing all-around higher performance. The AIM–9M has
improved defense against infrared countermeasures, enhanced background discrimination
capability, and a reduced-smoke rocket motor. These modifications increase its ability to
locate and lock on a target and decrease the missile’s chances for detection. The AIM–
9M–7 was a modification to the AIM–9M in response to threats expected in the Persian
Gulf War zone. The latest Sidewinder missile, the AIM–9X, reached initial operational
capability in late 2003 and was approved for full-rate production in May 2004. The AIM–
9X provides full day and night employment, resistance to countermeasures, extremely
high off-boresite acquisition and launch envelopes, greatly enhanced maneuverability and
improved target acquisition ranges. Over 110,000 Sidewinder missiles have been built,
with less than one percent fired in combat. The Sidewinder is the most widely used air-
to-air missile currently in use in the world. The AIM–9 is one of the oldest and least
expensive missiles in the U.S. weapons inventory (U.S. Navy, 2009).
It is very challenging for any organization to gain insights into the effects of
altering a process without the use of a computer-generated simulation model. If a process
is very simple, then a computer simulation may not be needed, but if the process is
complex and involves random (stochastic) processes, then a computer-generated
simulation is a vital tool to assist in gaining valuable insights about the system. A
computer-aided simulation affords the ability to implement changes to the system in the
model (on the “screen”) and observe the effects of these changes. Many organizations,
without a DES to model their systems, physically implement changes on the “floor” of
the process location. It may take days to months to discern if the physical changes are
achieving the desired effects. This approach can be very expensive and time-consuming
if multiple changes to the system are required before the desired objective is met. DES
also helps organizations investigate the stochastic nature of the system. Multiple
replications of the model yield different realizations, or plausible futures, that the system
could experience. Statistical analysis of these futures yields insights into important
metrics for the organization (e.g., mean cycle time, maximum queue lengths, and
resource utilization).
Lean manufacturing reduces the time between a customer order and shipment, and radically improves profitability,
customer satisfaction, throughput time, and employee morale (Rockford Consulting,
2000).
C. HOW WE ARE HELPING
II. TYAD SIDEWINDER REPAIR LINE
A. INTRODUCTION
The TYAD Sidewinder Missile Repair Line is a 21,000 square-foot facility with a
Clean Room. The facility employs 41 multi-skilled and cross-trained electronics
specialists. The United States (U.S.) Air Force, U.S. Navy, and several foreign nations
that employ the Sidewinder missile all send their inoperable missiles to TYAD for
repair. Sidewinder users identify the needed repair through their internal checklists,
maintenance procedures, and a 4044 field test machine. The 4044 field test machine
identifies electrical faults within the missile. Age, excessive training use, environmental
damage, weather, excessive exposure to harsh climates, and vibration damage all
necessitate depot-level repair. Users prepare the Guidance and Control Section (GCS) of
the missile for movement by removing the GCS from the warhead and propulsion system
of the missile. The TYAD Sidewinder Repair Line is only equipped to repair the GCS
section of the missile. TYAD receives the GCS with the paperwork identifying the initial
results of faults and the findings from the 4044 field test machine. After the GCS arrives
at the depot, workers remove the GCS from its packing configuration and store the GCS
at a facility near the repair line. Although there are three different customers (U.S. Air
Force, U.S. Navy, and Allies), all the repairs and procedures conducted are identical.
Figure 1 displays a full picture of the Sidewinder missile, with the GCS section circled.
Figure 2 highlights the GCS component of the Sidewinder missile, the component on the
repair line. Figure 3 shows a U.S. Navy FA–18 employing a Sidewinder Missile.
Figure 1. AIM–9 Sidewinder Missile with Guidance and Control Section (GCS)
circled [After (U.S. Air Force, 2007)].
Figure 3. U.S. Navy Jet employing a Sidewinder Missile [From (U.S. Navy Digital
Imagery, 2005)].
Figure 4. TYAD Sidewinder Repair Line Layout, with black circled letters
designating key process stations [After (Tobyhanna Army Depot
Industrial Modernization Division, 2008)].
Figure 4 shows the repair line facility. The black circles with white letters denote
the locations of process stations. We will refer to processes in this paper by these letters.
The GCS arrives to the depot in large can-type containers, where receiving workers
remove the packing material at the Can/De-Can area of the floor (a). The workers then
place the GCS into the line repair process. There are three phases of testing and repair
conducted on the Sidewinder Repair Line: Diagnostic Testing, Pre-Final Repair and
Testing, and Final Testing.
are complete. If the Induction worker determines that the GCS shell required painting
and stenciling, the worker removes the shell and places it outside the shop floor for
movement to the Paint Room (h). This occurs after Diagnostic Testing is complete. The
operations at the Paint Room include component etching, stenciling, and painting.
Workers at the Paint Room, outside of the repair line floor, conduct the painting
procedures.
Final Testing consists of verifying that all required repairs and adjustments were
made and that the GCS is functioning correctly. Testing begins with the GCS at the Pre-
Final assembly (i) station, where one assigned worker checks all modifications to the
GCS, visually inspects the GCS, and properly torques interior parts. Final Testing
continues at the Vibration Test station (j), where one assigned worker utilizes a vibration
test machine to simulate flight vibration conditions for the missile. The machine
confirms the stability of the interior parts and repairs. Testing continues as the GCS
moves through the Final Leak and Flow (d), Final Boresite (e), and Final Rate Table (f)
stations where available station (d, e, f) workers verify that the repairs conducted were
proper and valid. Testing continues at the Final Assembly (i), where one assigned worker
returns the GCS to its final form, checks the exterior of the missile, and tightens all
exterior parts. Testing continues with one final 4044 test (c) to ensure no faults remain.
Final Testing finishes at the Final Inspection station (i), as one assigned worker ensures
all paperwork is complete to standard and prepares the GCS for return to the Can/De-Can
room (a). This completes the Sidewinder Repair Line operation. Workers then pack and
ship the GCS to the original user (Hazlett, 2008).
Figure 5 displays a rack of GCS shells. Figure 6 shows the Rate Table station and
Figure 7 illustrates an electronics technician repairing a GCS.
Figure 5. This picture shows the Guidance and Control Section (GCS) Shell Rack
[From (Tobyhanna Army Depot, 2007)].
Figure 6. This picture shows a Rate Table Station on the Sidewinder Repair Line
Floor [From (Tobyhanna Army Depot, 2007)].
Figure 7. This picture shows an Electronics Technician repairing a Guidance and
Control Section (GCS) of the Sidewinder Missile [From (Tobyhanna
Army Depot, 2007)].
III. DEVELOPING THE MODEL
A. ARENA
Flowchart and Data modules are the building blocks for an Arena simulation.
Flowchart modules are objects that represent processes in the simulation. Placing
flowchart modules on the window screen allows the user to define the processes within
the model that represent a current or future system. Data modules are objects that specify
the characteristics of various processes.
We chose Arena for this research because of its ability to handle repair service
systems. It can identify overall service times, queue build-up, resource utilization, Work
in Process (WIP), and potential bottlenecks in the system. It also allows the varying of
user chosen factors to see the effects on identified response variables through its
OptQuest tool package. Arena’s interface with Microsoft Excel provides the ability to
read and write data files for statistical analysis.
decisions as to which paths the GCS will take; (3) Delay modules that model the time
delays that the GCS incur during testing and repairs; and (4) Splitting and (5) Batching
modules that model the disassembling and reassembling of the GCS. Data modules
include: (1) Entity modules that specify entity (GCS) characteristics; (2) Resource
modules that model resource (worker) allocation and scheduled downtime; and (3) Queue
modules that specify process queue logic. We developed multiple versions of the SRLM,
each tailored to a particular analysis studied in this thesis. We present here the base
model that represents the Sidewinder Repair Line in its current configuration. Italicized
names represent the SRLM modules for the remainder of this section. Later in this
chapter, we discuss the inputs for all modules, underlying distributions, data collection,
and input data analysis.
The SRLM portrays the three phases of the system discussed in Chapter II.
1. Phase One—GCS Arrival and Diagnostic Testing
This is the starting point for the repair line process. A GCS is created as an entity
at the GCS Arrival Create module and enters the SRLM. Entities are “physical objects”
that possess attributes, seize resources, move around the model, can change status, and
are affected by other entities. An entity for this model is a single GCS that requires some
type of repair. This module is the starting point for the simulation and generates entities
based on an inter-arrival time and a number of entities per arrival. Entities then leave the
Create module to start processing through the model. The entity then moves to Time
Stamp, an Assign module. Assign modules designate new values to entity attributes (a
value tied to a specific entity) or user defined variables. The Time Stamp module assigns
the current time to an attribute, capturing when the entity enters the SRLM (Figure 8).
Figure 8. Flowchart Modules from Create module (GCS Arrival) through 4044 Test
Process module in Phase One (GCS Arrival and Diagnostic Testing).
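Arena is a commercial, GUI-driven package, so the SRLM itself is built from the flowchart and data modules described above rather than from code. As a rough analogy only, and not the thesis model, the sketch below shows how a comparable arrive, seize-a-resource, delay, and record-cycle-time pattern might look in Python with the open-source SimPy library; the arrival spacing, service time, and capacity values are placeholders rather than SRLM inputs.

```python
import random
import simpy

# Placeholder values for illustration only; these are not SRLM inputs.
MEAN_INTERARRIVAL_HOURS = 8.0
SERVICE_TIME_HOURS = (1.0, 3.0)   # uniform(min, max), like a Process module delay
STATION_CAPACITY = 1              # like an Arena Resource with capacity 1

def gcs(env, station, cycle_times):
    """One entity: time-stamp on arrival, seize the station, delay, then record."""
    arrival_time = env.now                      # analogue of the Time Stamp Assign module
    with station.request() as req:              # wait in a FIFO queue for the resource
        yield req
        yield env.timeout(random.uniform(*SERVICE_TIME_HOURS))   # process delay
    cycle_times.append(env.now - arrival_time)  # analogue of recording cycle time

def arrivals(env, station, cycle_times):
    """Analogue of the Create module: generate entities over time."""
    while True:
        yield env.timeout(random.expovariate(1.0 / MEAN_INTERARRIVAL_HOURS))
        env.process(gcs(env, station, cycle_times))

env = simpy.Environment()
station = simpy.Resource(env, capacity=STATION_CAPACITY)
cycle_times = []
env.process(arrivals(env, station, cycle_times))
env.run(until=16 * 260)          # e.g., 260 sixteen-hour working days
print(f"completed: {len(cycle_times)}, "
      f"mean cycle time: {sum(cycle_times) / len(cycle_times):.2f} hours")
```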
The entity begins Diagnostic Testing at the Induction Process module. Process
modules model action in Arena. These actions include repairs, tests, and reviews of the
entity in the SRLM. Process modules specify the time that an entity spends in the
module to complete the action. The module also specifies if the entity requires a resource
(worker) to complete the action (Rockwell Software, 2005). A resource is a person,
equipment, or space that conducts the actions defined in the Process module. We further
discuss resources later in this chapter. The entity leaves the Induction module and enters
the 4044 Test Process module. The Induction and 4044 Test modules represent the first
work stations in the Diagnostic Phase.
The entity exits the 4044 Test module and enters, in succession, the Diagnostic
LF, Diagnostic Boresite, and Diagnostic Rate Table Process modules (Figure 9). All
GCS entities move through the three modules. This mimics the three diagnostic testing
stations (Leak and Flow, Boresite, and Rate Table) that identify needed repairs.
The last portion of Diagnostic Testing is the determination of (1) whether the
entity requires a visit to the Clean Room and (2) whether the shell of the entity requires
painting and stenciling (Figure 10). The entity leaves the Diagnostic Rate Table Process
module and enters the Clean Room Decide module. Decide modules allow for decision-
making in the model to determine how the entities will move through the system.
Conditions dictate the path the entities move along (Rockwell Software, 2005). The
entity moves to the Seeker Repair Process module if the Seeker requires repair. The
Seeker Repair is another Process Module, in which the seeker is removed from the entity,
repairs are conducted, and the seeker is re-installed inside the Clean Room. The entity,
with its repaired Seeker, then moves to the Painting Decide module. The GCS entity
moves directly to the Painting Decide module from the Clean Room Decide module if the
Seeker requires no repair (Rockwell Software, 2005).
Figure 10. Flowchart Decide modules Clean Room and Painting, with the Seeker
Repair Process module. End of Phase One (GCS Arrival and Diagnostic
Testing).
2. Phase Two—Pre-Final Repair and Testing
This phase begins downstream from the Painting Decide module. If the entity
shell was required to visit the Paint Room, only the entity shell moves to the Paint Room.
The remainder of the entity continues on with the primary repair and testing procedures at
the Leak and Flow, Boresite, and Rate Table Process modules (Figure 11). We separate
the entity from its shell by using the Separate GCS Shell Separate module, where Arena
makes a copy of the incoming entity. The original entity (representing the shell) moves
to the GCS Shell to Paint Room Route module. Route modules transfer an entity along a
path, with a user-defined delay time, to a Station module. The Route module also
facilitates animating the SRLM. Animation is a user construct of the system that helps
show movement of the entities through a model (See Appendix A for full animation of
SRLM). The duplicate entity (representing the GCS) moves to the Leak and Flow,
Boresite, and Rate Table Process modules and then continues downstream until it is
matched up with its painted shell at the Batch 1 Batch module. If the entity shell does not
require painting, it moves to the No Painting Assign module, where the module assigns
the entity shell an attribute value. This attribute will later identify the entity as having its
original shell (and will not require batching further downstream). An entity that did not
require painting would then flow to the Primary Repair and Testing Process modules
(Leak and Flow, Boresite, and Rate Table).
Figure 11. Flowchart modules Separate GCS Shell to GCS Shell To Paint Room and
No Painting through Rate Table, in Phase Two (Pre-final Repair and
Testing).
The original separated entity (shell) leaves the GCS Shell To Paint Room Route
module and enters the Arrive Paint Rm Station module (Figure 12). Station modules
refer to physical locations where processes occur and also facilitate animation (Rockwell
Software, 2005). The entity leaves the Arrive Paint Rm module and moves to the Paint
Room Process module, where resources begin the painting, stenciling, and etching work.
Upon completion of work, the original entity enters the To Shop Floor Route module and
travels a user-determined time back to the Sidewinder Repair Line Floor. Upon arrival at
the Arrive Shop Floor Station module, the resources (workers) combine the original
entity (shell) with the duplicate entity (GCS) at the Batch 1 Batch module to form one
complete entity. The matched entities may not arrive at the same time, as the process
times for the painted entity shell and the duplicate entity may be different. Therefore, the
entities (either the shell or the GCS) enter a queue and await their serial-numbered
counterpart to complete the batching. Batch modules in Arena are grouping mechanisms
based upon a user-defined attribute. We use the serial number attribute to mate the
original and duplicate GCS entities. Arena automatically assigns a specific serial number
to every entity created. The combined entities move to the Painted Assign module and
receive another attribute to change the entity’s color in the animation.
Entities that did not require painting (because they were not separated) and
entities with pre-fabricated shells move from the Rate Table Process module to the Need
to Batch Decide module. Further upstream, at the No Painting Assign module, Arena
assigned an attribute value to the entities that did not require a visit to the Paint Room.
This attribute value is now used as the condition test for the Need to Batch Decide
module. If the entity does not have its original shell, it moves to the Batch 1 Batch
module and is married up with its shell and then moves to Final Testing. If the entity has
its original shell, it moves directly into Final Testing (Rockwell Software, 2005).
Figure 12. Flowchart modules for Paint Room Process module and Need to Batch
Decide module. The GCS is in its original or newly painted shell and
Phase two is complete.
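The Separate and Batch logic just described, in which a painted shell is matched back to its own GCS by serial number, can be illustrated outside of Arena. The sketch below is a simplified stand-in (not the authors' model): each half waits in a dictionary keyed by serial number until its counterpart arrives.

```python
# Simplified illustration of batching by a serial-number attribute.
# Shells and GCS bodies arrive asynchronously; a pair is released only
# when both halves with the same serial number are present.

pending_shells = {}   # serial number -> shell record
pending_bodies = {}   # serial number -> GCS body record

def assemble(shell, body):
    """Both halves present: recombine into one entity for Final Testing."""
    return {"shell": shell, "body": body}

def arrive_shell(serial, shell):
    """A painted shell returns from the Paint Room."""
    if serial in pending_bodies:
        return assemble(shell, pending_bodies.pop(serial))
    pending_shells[serial] = shell        # wait in the batch queue
    return None

def arrive_body(serial, body):
    """A GCS body finishes Leak and Flow, Boresite, and Rate Table."""
    if serial in pending_shells:
        return assemble(pending_shells.pop(serial), body)
    pending_bodies[serial] = body
    return None

# Example: the body for serial 1234 arrives before its painted shell.
arrive_body(1234, {"tests": "complete"})
print(arrive_shell(1234, {"painted": True}))
```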
3. Phase Three—Final Testing
The entity enters Final Testing in its original or newly painted shell with all major
repairs complete. The entity moves to the Pre Final Assembly and Vibration Test Process
modules (Figure 13), where resources (workers) at these modules prepare the entity for
the final round of testing. The entity then enters the Final Leak and Flow, Final Boresite,
and the Final Rate Table Process modules. These three modules are identical to the
previous Diagnostic Leak and Flow, Diagnostic Boresite, and Diagnostic Rate Table
Process modules that the entity entered during Diagnostic Testing. The entity exits the
Diagnostic Rate Table module and moves to the Final Assembly, Final 4044, and Final
Inspection Process modules and completes Final Testing. The entity, having completed
the repair line, next moves to the Time Record Record and the Mean Cycle Time Assign
modules. A Record module collects statistical information. We used the Record module
here to capture the mean cycle time for all entities. The GCS entity exits the system at
the Exit Repair Line Dispose module, which removes the entity from the simulation (Rockwell
Software, 2005).
Figure 13. Flowchart modules from Pre Final Assembly through Exit Repair Line.
This flows from the start to the end of Phase Three (Final Testing).
C. SIDEWINDER REPAIR LINE MODEL (SRLM) DATA MODULES
Data modules are spreadsheet-type interfaces embedded within the SRLM that
allow the user to define the characteristics of various process elements. The Entity Data
module assigns a picture of a Sidewinder Missile during entity creation. This picture
allows the user to visually follow the entity through the SRLM.
The Resource Data module allocates resource capacity and schedules resource
downtime. Resources are machines, or people, that perform tasks designated in the
Process modules. An entity entering a Process module attempts to seize a resource
(space, worker, or machine) that is needed to perform the task within the module. If a
resource is not immediately available, the entity waits in a queue within the Process
module and waits for a resource to become available. The SRLM Resource Matrix
(Table 1) represents the current Sidewinder Repair Line resource capacities, the type of
schedule the resources follow, and the schedule rule. Four sets of similar process
modules (Diagnostic Leak and Flow, Leak and Flow, Final Leak and Flow; Diagnostic
Boresite, Boresite, Final Boresite; Diagnostic Rate Table, Rate Table, Final Rate Table;
and 4044 Test, Final 4044) compete for like resources (Table 2) with priority of
resources based upon arrival time into the queue (first-in, first-out).
Table 1. Resource list, capacity, schedule name and rule for the base SRLM.
The SRLM Schedule Resource module allows the modeler to vary the resource
capacity over time. The Sidewinder Repair Line operates two consecutive eight-hour
shifts, five days a week. Each resource follows this schedule, with scheduled downtime
for lunch (Figure 14 for the Clean Room schedule). Note that the scheduled downtime
for lunch decreases the resource capacity. The SRLM uses the “Wait” schedule rule,
which allows the resource to continue working on an entity (GCS) within a process until
the task is complete, and then start the scheduled downtime. This mirrors the downtime
policy at TYAD. All SRLM resources, except the 4044 Machines, have similar break
schedules of one hour downtime per eight-hour shift.
Figure 14. Clean Room resource capacity schedule for 16 hour work day (Arena
screen shot).
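Arena expresses such a schedule as a sequence of capacity values with durations. The sketch below shows, in plain Python, one way the two-shift day with lunch downtime might be represented; the capacity numbers and the placement of the lunch hours are illustrative assumptions, since neither is given in the text.

```python
# Illustrative capacity schedule for a 16-hour, two-shift working day.
# Each tuple is (capacity, duration in hours); the capacities and the
# placement of the one-hour lunch blocks are assumptions, not SRLM data.
clean_room_schedule = [
    (2, 3.5),  # first shift, before lunch
    (1, 1.0),  # first-shift lunch (reduced capacity)
    (2, 3.5),  # first shift, after lunch
    (2, 3.5),  # second shift, before lunch
    (1, 1.0),  # second-shift lunch (reduced capacity)
    (2, 3.5),  # second shift, after lunch
]

def capacity_at(hour, schedule):
    """Return the scheduled capacity at a given hour into the working day."""
    elapsed = 0.0
    for capacity, duration in schedule:
        elapsed += duration
        if hour < elapsed:
            return capacity
    return 0  # outside the 16-hour working day

print(capacity_at(4.0, clean_room_schedule))  # 1 during the assumed lunch hour
```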
The Queue Data module establishes the “First In, First Out” logic for queues. The
Variable Data module designates and tracks information, through defined variables,
necessary to conduct follow-on analysis.
D. DATA COLLECTION FOR INPUT INTO THE SRLM
We obtained the GCS arrival rate data from the engineers at TYAD. The
engineers believe that one to twenty GCSs arrive every week, with ten arriving on
average. We converted this weekly rate to a daily rate and, lacking any other information
about the rate, decided to model the number of daily arrivals with a triangular(1,2,4)
distribution. This distribution reflects our belief that at least one GCS will arrive per day,
two will arrive most frequently, and no more than four will arrive per day (Law &
Kelton, 2000).
We used the VSA-determined upper and lower time estimates for the following
process stations: Induction, 4044 Test, Diagnostic Leak and Flow, Diagnostic Boresite,
Diagnostic Rate Table, Pre-Final Assembly, Vibration Test, Final Leak and Flow, Final
Boresite, Final Rate Table, Final Assembly, Final 4044, and Final Inspection. We
modeled these process times with uniform distributions. We selected the uniform
distribution for two reasons: (1) we knew the minimum and maximum values that the
process times could take and (2) we knew nothing about the shape of the underlying
distribution (Law & Kelton, 2000).
The VSA determined neither the Seeker Repair process times (Clean Room) nor
the Painting/Stenciling (Paint Room) process times. Again, we turned to the experts at
TYAD for assistance. We interviewed the Clean Room supervisors and process
engineers, soliciting their best estimates for the minimum, most likely (mode), and
maximum process times in the Clean Room. They advised us that one day was the
minimum, two days was most likely, and five days was the maximum. We then modeled
the Clean Room process times, in hours, with a triangular(16,32,80) distribution. TYAD
also provided the billing times for the procedures that take place on the GCS Shell at the
Industrial Facility. TYAD reported to us that the procedures of refinishing, etching, and
painting a GCS Shell take 132 minutes with a “give or take” factor of ten minutes. We
modeled these times with a uniform(122,142) distribution.
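To make these input-modeling choices concrete, the sketch below collects the distributions stated above (daily arrivals as triangular(1,2,4), Clean Room times as triangular(16,32,80) hours, and Paint Room times as uniform(122,142) minutes) and draws samples from each. The bounds passed to the generic VSA-style station function are placeholders, since the VSA estimates themselves are not listed here.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def daily_arrivals():
    # GCS arrivals per day: at least 1, most often 2, at most 4 (rounded to a count).
    return int(round(rng.triangular(left=1, mode=2, right=4)))

def clean_room_hours():
    # Seeker repair: one day minimum, two days most likely, five days maximum.
    return rng.triangular(left=16, mode=32, right=80)

def paint_room_minutes():
    # Refinishing, etching, and painting a shell: 132 minutes, give or take ten.
    return rng.uniform(low=122, high=142)

def vsa_station_minutes(lower, upper):
    # Generic VSA-bounded station time: only the bounds are known, so uniform.
    return rng.uniform(low=lower, high=upper)

print(daily_arrivals(),
      round(clean_room_hours(), 1),
      round(paint_room_minutes(), 1),
      round(vsa_station_minutes(60, 90), 1))  # 60 and 90 are placeholder bounds
```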
Next we estimated what percentage of the GCSs will need to go to the Clean
Room and what percentage will require painting and stenciling. Lacking historical data,
we turned to the engineers at TYAD, who told us that 40 percent of GCSs go to the Clean
Room and that 90 percent of GCS shells require painting and stenciling. We cannot
overemphasize the reliance on subject matter experts, engineers and repair line
supervisors for data input into the SRLM. They provided the parameter estimates for the
uniform and triangular distributions when no data existed. We did recognize the dangers
inherent in relying exclusively on expert opinion and decided to later conduct sensitivity
analysis on the parameter estimates to determine the robustness of the system.
TYAD did have, and provided, historical data for the Leak and Flow, Boresite,
and Rate Table process times from Pre-Final Repair and Testing (Esopi, 2009). TYAD
utilizes a computer-based data system to track the repair times during the above three
processes. TYAD provided two data sets, each with times from the three processes, from
2008, with repair times for 88 and 75 serial-numbered GCSs, respectively. During Pre-final
Repair and Testing, the GCS returns to each station, as required, repairing all
deficiencies. Accordingly, we combined the two data sets into one set, summing the
process times that each GCS experienced at each station, and then determined the best
distribution with which to model these times in the simulation. Due to the possibility of
multiple trips to each station, the data showed considerable variability in process times
(Table 3).
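The consolidation step, combining the two 2008 data sets and summing every visit a serial-numbered GCS made to a station, can be expressed with pandas. The column names and values below are invented for illustration; the layout of the TYAD files is not described in the text.

```python
import pandas as pd

# Hypothetical column names and values; the actual TYAD file layout is not specified.
set_a = pd.DataFrame({
    "serial": [101, 101, 102],
    "leak_flow_min": [45.0, 30.0, 120.0],
    "boresite_min": [20.0, 0.0, 35.0],
    "rate_table_min": [200.0, 150.0, 60.0],
})
set_b = pd.DataFrame({
    "serial": [201, 202],
    "leak_flow_min": [75.0, 60.0],
    "boresite_min": [25.0, 40.0],
    "rate_table_min": [90.0, 300.0],
})

# Combine both data sets, then sum repeated station visits per GCS serial number.
combined = pd.concat([set_a, set_b], ignore_index=True)
per_gcs = combined.groupby("serial", as_index=False).sum()
print(per_gcs)
print(per_gcs[["leak_flow_min", "boresite_min", "rate_table_min"]].describe())
```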
[Table 3 columns: Process, Mean (minutes), Standard Deviation, 95% Lower Confidence Level, 95% Upper Confidence Level, Data Reference.]
Table 3. Summary Statistics for Leak Flow, Boresite, and Rate Table data sets
We created histograms from the process times of the three stations to better
understand the probabilistic nature of the underlying distribution (Figures 15, 16, and 17
are screen shots from S-plus).
Figure 15. Histogram for Leak and Flow Cycle Time Data Set.
Figure 16. Histogram for Boresite Cycle Time Data Set.
Figure 17. Histogram for Rate Table Cycle Time Data Set.
We investigated whether specific continuous distributions could plausibly
account for the service times from the Leak Flow, Boresite, and Rate Table processes.
We used the Input Analyzer from the Arena Simulation Tool Kit to conduct the input
probability distribution selection (Rockwell Software, 2005). The Input Analyzer is a
tool that helps determine the quality of fit of probability distribution functions to input
data. It fits all the distributions that are part of the Input Analyzer to the input data,
estimates the required parameters for each, and ranks them according to the values of
their respective square errors. The Input Analyzer also conducts a goodness-of-fit test for
each distribution. A goodness-of-fit test assesses how plausible it is to assume that the
observed data came from a specified distribution, specifically testing the following null
hypothesis (Law & Kelton, 2000).
$H_0$: the $x_i$ are iid with density $f(x \mid \theta)$
We set the α level for the χ² goodness-of-fit test at 0.10, to reject only
hypothesized distributions that were highly implausible candidates for the underlying,
unknown distribution. Input Analyzer provided the best fit plot (Figure 18), a
recommended distribution, and results from the χ² goodness-of-fit test (Table 4) for the
Leak and Flow process time data set.
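The fit-then-test workflow that Input Analyzer performs can be reproduced approximately with SciPy: fit a shifted gamma, bin the observations, compute expected counts from the fitted CDF, and run a chi-square test. This is a rough sketch rather than Input Analyzer's exact algorithm, and the synthetic `leak_flow_times` array stands in for the 2008 data set.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
# Synthetic placeholder standing in for the Leak and Flow cycle times (minutes).
leak_flow_times = 25 + rng.gamma(shape=1.42, scale=98.8, size=160)

# Fit a shifted gamma; fixing the location near the sample minimum mimics the
# "25 + GAMM(98.8, 1.42)" form that Input Analyzer recommended.
shape, loc, scale = stats.gamma.fit(leak_flow_times, floc=leak_flow_times.min() - 1)

# Bin the data and compute expected bin counts under the fitted distribution.
edges = np.histogram_bin_edges(leak_flow_times, bins=10)
observed, _ = np.histogram(leak_flow_times, bins=edges)
expected = len(leak_flow_times) * np.diff(stats.gamma.cdf(edges, shape, loc=loc, scale=scale))
expected *= observed.sum() / expected.sum()   # rescale so the totals match exactly

# ddof accounts for the two estimated parameters (shape and scale).
chi2_stat, p_value = stats.chisquare(observed, f_exp=expected, ddof=2)
print(f"chi-square = {chi2_stat:.2f}, p-value = {p_value:.3f}")
```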
Figure 18. This graph shows a screen shot of Arena’s Input Analyzer Histogram of
the Leak and Flow Data Set. The Y-axis is frequency and the X-axis is
Leak and Flow Cycle Time. The superimposed blue line is the best fit
plot of the Gamma distribution listed in Table 4 below.
Distribution: Gamma
Expression: 25 + GAMM(98.8, 1.42)
Square Error: 0.004
Table 4. Gamma distribution expression for Leak and Flow data set.
The hypothesized gamma distribution had a p-value of 0.27, much greater than
our significance level of 0.10. We considered this a plausible distribution for the Leak
and Flow process time in the SRLM. Input Analyzer provided the best fit plot (Figure
19), a recommended distribution, and results from the χ² goodness-of-fit test (Table 5)
for the Boresite process time data set.
Figure 19. This graph shows a screen shot of Arena’s Input Analyzer Histogram of
the Boresite Data Set. The Y-axis is frequency and the X-axis is Boresite
Cycle Time. The superimposed blue line is the best fit plot of the
Lognormal distribution listed in Table 5 below.
Distribution: Lognormal
Expression: 13 + LOGN(32.9, 29.3)
Square Error: 0.030
The lognormal distribution had a p-value less than 0.005, much less than our
significance level of 0.10. We did not consider this a plausible distribution for the
Boresite process time in the SRLM. Given the lack of a plausible known distribution, but
the presence of a large historical data set, we decided to model the underlying distribution
with a continuous empirical distribution. The drawback of this approach is that the
SRLM Boresite process time will never exceed the largest observed data value and never
fall below the smallest observed value. This will limit the ability of the simulation to
choose extreme values for Boresite process times (Law & Kelton, 2000). Input Analyzer
provided the empirical distribution function and summary statistics from the data set
(Table 6).
Distribution: Empirical
Expression: CONT(0.000, 13.000, 0.115, 24.583, 0.547, 36.167, 0.649, 47.750, 0.757, 59.333, 0.851, 70.917, 0.912, 82.500, 0.946, 94.083, 0.959, 105.667, 0.966, 117.250, 0.980, 128.833, 0.986, 140.417, 1.0, 152.000)
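Arena's CONT expression defines a piecewise-linear cumulative distribution in (cumulative probability, value) pairs, so sampling from it amounts to inverse-transform sampling with linear interpolation. The sketch below reproduces that with NumPy using the breakpoints from Table 6.

```python
import numpy as np

# Breakpoints from Table 6: cumulative probability followed by the value (minutes).
cum_probs = np.array([0.000, 0.115, 0.547, 0.649, 0.757, 0.851, 0.912,
                      0.946, 0.959, 0.966, 0.980, 0.986, 1.000])
values    = np.array([13.000, 24.583, 36.167, 47.750, 59.333, 70.917, 82.500,
                      94.083, 105.667, 117.250, 128.833, 140.417, 152.000])

def sample_boresite(rng, size=1):
    """Inverse-transform sampling from the piecewise-linear empirical CDF."""
    u = rng.random(size)                    # uniform(0, 1) draws
    return np.interp(u, cum_probs, values)  # never below 13 or above 152 minutes

rng = np.random.default_rng(seed=3)
draws = sample_boresite(rng, size=10_000)
print(f"min {draws.min():.1f}, mean {draws.mean():.1f}, max {draws.max():.1f}")
```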
Input Analyzer provided the best fit plot (Figure 20), a recommended distribution,
and results from the χ² goodness-of-fit test (Table 7) for the Rate Table process time
data set.
Figure 20. This graph shows a screen shot of Arena’s Input Analyzer Histogram of
the Rate Table Data Set. The Y-axis is frequency and the X-axis is Rate
Table Cycle Time. The superimposed blue line is the best fit plot of the
Beta distribution listed in Table 7 below.
Distribution: Beta
Expression: 5 + 1.16e+003 * BETA(1.32, 4.4)
Square Error: 0.003499
The hypothesized beta distribution had a p-value of 0.225, much greater than our
significance level of 0.10. We considered this a plausible distribution for the Rate Table
process time in the SRLM.
We summarize the processes and their distributions in the table below (Table 8).
Table 8. ARENA Distribution summary for all flowchart modules in SRLM.
E. SRLM ASSUMPTIONS
• All GCS entity movements between modules are instantaneous, except for
the movement of a GCS shell to the Paint Room.
• All repair equipment and parts are readily available to the workers.
• 260 working days represent one full calendar year.
• None of the machines fail or require downtime due to maintenance.
IV. ANALYSIS, VALIDATION, AND RESULTS
A. APPROACH
We utilized the base model discussed in Chapter III to investigate how the
processes at the repair line interact and to establish baseline metrics. These baseline
metrics included: GCS mean cycle time, resource utilization rates, GCS throughput,
Work in Process (WIP), queue lengths, and queue times. Mean cycle time is the average
time that a GCS spends in the system, starting with the creation of the entity and ending
with the disposal of the entity. Resource utilization is the percentage of time that a
resource (worker or machine) is busy and not idle. Throughput is the number of entities
that exit the system. WIP is the number of entities currently in the system. Queue length
is the number of entities waiting for a resource at a process. Queue time is the time that
an entity spent waiting for a resource at a queue.
Standard validation for a DES model compares model output with historical data.
TYAD did not have sufficient data to conduct such a validation. We conducted multiple
face validations by providing the base model and results to the repair line’s engineers and
supervisors. They agreed the model closely represented the repair line.
Our analysis of the base model identified that the system operates far below
maximum capacity. This insight led us to investigate what impact reductions in the
workforce would have on mean cycle time and the other metrics. It also led us to
investigate what arrival rate would drive the system to utilize all of its capacity.
B. BASE ANALYSIS
We used the base SRLM to establish the baseline analysis of the repair line under
normal operating conditions. We ran the simulation for 100 years to investigate the
steady state behavior of the system and executed 100 replications to better understand the
stochastic nature of the system. The system rapidly achieves stationarity for mean cycle
time, so we investigated the need to incorporate a warm-up period in the replications.
Mean cycle time appears to achieve stationarity (the graph “flattens out”) somewhere
after one month into a run (Law & Kelton, 2000). We chose three different warm-up
periods (0 days, 25 days, and 50 days), ran the simulation, and determined that there was
no statistical difference between the mean cycle times. The length of the simulation
(26,000 days) sufficiently outweighs the need for a warm-up period. We then ran the
simulation, without a warm-up period, and obtained the baseline metrics (Table 9).
[Table 9 columns: Cycle Time (days), Cycle Time (hours), Throughput (# of GCS per year), Work in Process (# of GCS).]
Table 9. Base statistics (mean cycle time, throughput per year, and WIP) of
“normal operating” Sidewinder Repair Line
The mean cycle time for the system is roughly 2.4 days for a GCS to complete the
repair line. The mean annual number of GCSs repaired is 476.7. The mean number of
GCSs in the system (inventory of GCSs from the start to the end of the repair process) is
4.3 (Hopp & Spearman, 2008). We also calculated 95 percent confidence intervals for
these metrics. All three confidence intervals are short in length, suggesting that we have
accurately identified the mean values for the metrics.
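The replication analysis above can be checked with a few lines of Python. The replication means below are synthetic placeholders, and the final line is a simple Little's Law consistency check using only the figures quoted in the text (about 2.35 days, 476.7 GCSs over 260 working days, and a WIP near 4.3).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=4)
# Placeholder: 100 replication means of cycle time (days); not SRLM output.
replication_means = rng.normal(loc=2.4, scale=0.05, size=100)

# 95 percent, t-based confidence interval for the mean across replications.
n = replication_means.size
mean = replication_means.mean()
half_width = stats.t.ppf(0.975, df=n - 1) * replication_means.std(ddof=1) / np.sqrt(n)
print(f"mean cycle time: {mean:.2f} days, 95% CI half-width: {half_width:.3f}")

# Little's Law check (WIP = throughput rate x cycle time) with reported values.
throughput_per_day = 476.7 / 260              # GCSs repaired per working day
print(f"implied WIP: {throughput_per_day * 2.35:.1f} GCSs (reported: about 4.3)")
```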
Resources are the people or machines that perform the identified task at each
process station. We calculated the utilization rates of the resources at each process for
the base model (Figure 22).
Utilization of Resources
60.0 53.75
50.0
Percent Busy
40.0
29.36
30.0 22.69
21.54
20.0 14.33
11.07 8.59
10.0 4.77 4.77 5.73 3.82
0.0
Utilization Rate
Figure 22. Utilization rates of “normal operating” Sidewinder Repair Line resources.
The Clean Room resources had the highest utilization rate at 54 percent and the
Final Inspection resource had the lowest rate at 4 percent. Ten out of eleven resource sets
had a utilization rate of less than 30 percent, and five of the resource sets (4044 machines,
Assembly station, Vib station, Final Assembly station, and Final Inspection station) had
utilization rates below 10 percent. These low utilization rates imply that the resources are
under-utilized. These rates provide insight into where the repair line managers could re-
allocate workers to reduce cycle time. They also suggest potential areas for cross-
training workers on different tasks.
We also calculated the maximum queue length and queue times for each process
in the system. A queue is a location where something, or someone, waits until it can
move (Hopp & Spearman, 2008). The queuing discipline for the SRLM is first-come,
first-served for all process queues. We present the five queues with the longest average
queue waiting times below (Table 10).
[Table 10 columns: Queue, Mean Wait Time (hours), Mean Wait Time (days).]
Table 10. Top five Process station mean queue waiting times under “normal
operating” Sidewinder Repair line.
The Clean Room queue had the longest average wait time (2.6 hours), largely
driven by the process times in the Clean Room (mode of 32 hours) in the SRLM. Not
surprisingly, the Clean Room resources had the highest utilization rates. These metrics
suggest that re-allocating resources with lower utilization rates to the Clean Room might
reduce mean cycle time.
C. VALIDATION
Standard validation for a DES model compares model output with historical data.
TYAD did not have sufficient data to conduct such a validation, so we conducted
multiple face validations with experts at TYAD. During the development of the base
model we periodically shared the base model structure to ensure accurate portrayal of the
repair line. TYAD identified errors in the model flow on several occasions, which we
corrected. We added animation to the model that we recorded and shared with TYAD.
They agreed the model closely represented the repair line. A more thorough validation
based on historical data would further authenticate the model (Hazlett, 2008).
The baseline analysis depends upon the validity of the service times provided by
the subject-matter experts. We conducted a sensitivity analysis of the service time
distribution parameters to determine the robustness of the baseline results. We developed
an experimental design in which we incrementally increased process times. The design
would then yield insights into when process times would influence mean cycle time and
identify the most significant process times. We constructed the design by shifting the
distributions’ upper limits to the right, increasing the maximum value that each
distribution could return as a percentage of the minimum value. Specific construction
methods depended upon the process time distribution. The lower bounds for the uniform
distributions were shifted right to the base mean value. The upper bounds for the uniform
distributions were set percentages above the base mean value (Table 11).
Induction Process
Percent above Mean    New Distribution
10                    UNIF[75.0,82.5]
20                    UNIF[75.0,90.0]
30                    UNIF[75.0,97.5]
40                    UNIF[75.0,105.0]
50                    UNIF[75.0,112.5]
Table 11. Induction Process uniform distribution parameters per percent above mean.
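As a quick check of this construction, the uniform rows in Table 11 follow from pinning the lower bound at the base mean and raising the upper bound by the stated percentage. The short sketch below reproduces the table; the base mean of 75 (in the model's process-time units) is read off the table itself.

    def shifted_uniform(base_mean, pct):
        """Lower bound pinned at the base mean; upper bound raised pct percent above it."""
        return base_mean, base_mean * (1 + pct / 100.0)

    for pct in (10, 20, 30, 40, 50):
        lo, hi = shifted_uniform(75.0, pct)
        print(f"{pct:>2} percent above mean -> UNIF[{lo:.1f},{hi:.1f}]")
    # Reproduces Table 11: UNIF[75.0,82.5], UNIF[75.0,90.0], ..., UNIF[75.0,112.5]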
The lower bounds, modes, and upper bounds for the triangular distributions were
constructed as set percentages above their corresponding base value (Table 12).
Table 12. Clean Room Process distribution parameters (min, mode, and max) per
percent above mean
Shifting the fitted continuous distributions used to model the process times for the
Leak and Flow, Boresite, and Rate Table processes proved more problematic. We
included them in the SRLM by taking the actual data sets for these processes, adding the
same incremental percentage to the data, and then fitting a new distribution to each process.
This design of experiments (DOE) yielded 17 factors and five distinct simulation
sets. Rather than run thousands of scenarios, we reduced the number by utilizing a nearly
orthogonal Latin hypercube (NOLH) experimental design. This DOE allows the modeler
to efficiently explore the parameter space with a much smaller number of scenarios than a
full experimental design. A Latin hypercube in its fundamental form is an n-run by
k-factor matrix in which each column is a permutation of the integers (1, 2, …, n). The n
integers represent the levels across the range of the factor. Latin hypercubes are efficient,
exhibit good "space-filling" properties, and are flexible for conducting analysis.
Spreading the design points throughout the experimental region in a uniform manner
leads to a good "space-filling" design, minimizes the unsampled space, and facilitates the
analyst's ability to extract the desired statistical information and insights. Cioppa and
Lucas (2007) developed an algorithm for constructing nearly orthogonal Latin
hypercubes, which sacrifice some orthogonality for better space-filling properties. We
used this algorithm, which led to a design with only 129 scenarios for the 17 factors.
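The design used in the thesis is the Cioppa and Lucas (2007) NOLH; the sketch below only illustrates the basic Latin hypercube idea with SciPy's ordinary (non-orthogonal) Latin hypercube sampler, scaled to an assumed 0-to-50-percent factor range.

    import numpy as np
    from scipy.stats import qmc

    n_runs, k_factors = 129, 17
    sampler = qmc.LatinHypercube(d=k_factors, seed=1)
    unit_design = sampler.random(n=n_runs)       # one point per run in [0, 1)^k

    # Scale each column to an assumed factor range (here, 0 to 50 percent increase).
    design = qmc.scale(unit_design, np.zeros(k_factors), np.full(k_factors, 50.0))

    # Each column spreads its 129 points so that every one of 129 equal-width bins
    # across the factor's range is sampled exactly once.
    print(design.shape)                          # (129, 17)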
The NOLH matrix also provides a method to investigate which factors (process
time distributions) have the most impact on the response variable (mean cycle time). We
regressed the 129 mean cycle times against the NOLH matrix to further investigate the
relationship between the input variables (process time distributions) and the response
variable (mean cycle time). The fitted model included both the linear terms and their two-
way interactions (Cioppa & Lucas, 2007), in the form
$g(x) = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k-1} \sum_{j>i}^{k} \beta_{i,j} x_i x_j$.
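Outside of JMP, the candidate terms of this metamodel can be generated mechanically; the sketch below uses scikit-learn, with a random matrix standing in for the 129-by-17 NOLH design.

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(1)
    design = rng.uniform(0, 50, size=(129, 17))   # stand-in for the 129 x 17 NOLH matrix

    # Main effects plus all two-way interactions x_i * x_j (no squared terms),
    # i.e., the candidate terms of g(x) above.
    expand = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
    X = expand.fit_transform(design)

    print(X.shape)    # (129, 153): 17 main effects + C(17, 2) = 136 interactions
    # With more candidate terms (153) than design points (129), a selection step such
    # as JMP's stepwise regression is needed before a final model can be fit and read.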
[Figure 23 partition tree: R-square 0.725, N = 129, 3 splits; all rows: count 129, mean 2.507, std dev 0.030.]
Figure 23. JMP Partition Tree with three splits for 10 percent increase simulation run.
JMP regressed the mean cycle times, obtained from the simulation set based on a
10 percent increase in process times, against the main effects and the two-way
interactions of the input variables (process time distributions) in the simulation (Figure
24). The JMP output sorts the parameters in the resulting model by level of significance
(SAS Institute Inc, 2007). The final model, obtained after conducting a multiple
regression, had a mean cycle time of 2.51 days with an r-squared value of 0.91. The most
significant factors were the three parameter values for the Clean Room triangular
distribution.
Summary of Fit
RSquare 0.907
RSquare Adj 0.901
Root Mean Square Error 0.009
Mean of Response 2.507
Observations (or Sum Wgts) 129.000
Figure 24. JMP Summary of Fit and Sorted Parameter regression estimates for 10
percent increase simulation run.
JMP conducted three splits in the Regression Trees with the simulation set based
on a 20 percent increase in process times (Figure 25). The model had a minimum mean
cycle time of 2.61 days with an r-squared value of 0.75. The upper bound for the Clean
Room process time was the most significant factor.
[Figure 25 partition tree: R-square 0.748, N = 129, 3 splits; all rows: count 129, mean 2.672, std dev 0.061. Splits on Clean Room Tri-Max at 84.5 and 93.13; leaf means 2.606, 2.652, 2.705, and 2.752 days.]
Figure 25. JMP Partition Tree with three splits for 20 percent increase simulation run.
JMP regressed the mean cycle times, obtained from the simulation set based on a
20 percent increase in process times, against the main effects and the two-way interactions
of the input variables (process time distributions) in the simulation (Figure 26). The final
model, obtained after conducting a multiple regression, had a mean cycle time of 2.67
days with an r-squared value of 0.95. The most significant factors were again the three
parameter values for the Clean Room triangular distribution.
Summary of Fit
RSquare 0.949
RSquare Adj 0.946
Root Mean Square Error 0.014
Mean of Response 2.672
Observations (or Sum Wgts) 129.000
Figure 26. JMP Summary of Fit and Sorted Parameter regression estimates for 20
percent increase simulation run.
The additional increases of 30, 40, and 50 percent yielded similar results (see
Appendix B). The three parameter values for the Clean Room process time distribution
were the most significant factors affecting mean cycle time. However, mean cycle time
did not increase dramatically as the process time distributions were increased. Mean cycle
time is therefore not sensitive to the parameter estimates provided by the subject-matter
experts at TYAD.
The baseline analysis identified that the utilization rates for the resources in the
system were extremely low. Ten of the eleven resources had utilization rates below 30
percent. We decided to look more closely at the resources in order to understand the
relationship between the number of resources at each station and mean cycle time, hoping
to find ways to reduce it. We developed a design of experiments to measure the effect of
varying the number of resources on the mean cycle time. We repeated the approach taken
in the baseline analysis (Regression Trees and Regression Models), this time to identify
the most significant resources. We varied the number of resources across a range
of values (Table 13). The lower range value reflects the need to have at least one
resource at each station. The upper range values reflect the space limitations on the
repair line. We maintained the GCS arrival rates and numbers at the levels found in the
base model.
Low 1 1 1 1 3 3 1 1 1 1
High 2 3 3 4 6 6 2 2 2 2
Table 13. Process station Resource high and low bounds for NOLH matrix.
Executing this design of experiments as a full factorial would require running over
18,000 scenarios, an untenable number. We therefore reduced the number of scenarios by
utilizing a NOLH experimental design. The NOLH matrix reduced the number of simulations from 18,000
to 33. Table 14 displays the first ten scenarios of the NOLH matrix.
Table 14. First ten scenarios (of 33) of the NOLH matrix for Resource factor
analysis.
We ran these 33 scenarios through the SRLM, holding all other elements of the
model at their base model values. The process rapidly reached stationarity, as in the base
case, removing the need for a warm-up period. Summary statistics of the scenarios
yielded a mean cycle time of 2.32 days per GCS with a standard deviation of 0.09 (Table
15). The mean cycle time exhibited very little variability across the scenarios. This lack
of variability suggests that none of the factors (resources) has much impact on mean
cycle time.
Mean Cycle Time (days)                      2.316
Max Mean Cycle Time of Runs                 2.528
Min Mean Cycle Time of Runs                 2.225
Standard Deviation                          0.086
95% Upper Confidence Level of true Mean     2.345
95% Lower Confidence Level of true Mean     2.286
Table 15. Summary statistics across the 33 resource NOLH scenarios.
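The reported interval is consistent with the usual normal-approximation confidence interval on the mean of the 33 scenario means; a quick arithmetic check using the Table 15 values:

    import math

    n, mean, std = 33, 2.316, 0.086              # values from Table 15
    half_width = 1.96 * std / math.sqrt(n)       # normal-approximation 95% half-width
    print(f"95% CI: [{mean - half_width:.3f}, {mean + half_width:.3f}]")
    # prints roughly [2.287, 2.345], matching the table's 2.286-2.345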
[Figure 27 partition tree: R-square 0.911, N = 33, 3 splits; all rows: count 33, mean 2.316, std dev 0.086.]
Figure 27. JMP Partition Tree with three splits of Leak and Flow, Clean Room, and
Induction resources.
The first split occurred at the Leak and Flow resources. The mean cycle time
drops from 2.32 to 2.28 days when the Leak and Flow has two or more resources (as it
did in 25 out of 33 scenarios). The second split occurred at the Clean Room resources.
The mean cycle time drops from 2.28 to 2.25 days when the Clean Room has four or
more resources (as it did in 20 out of 33 scenarios). The third split occurred at the
Induction resources. The mean cycle time drops from 2.25 to 2.24 days when the
Induction has two or more resources (as it did in 11 out of 33 scenarios). The partition
tree identified three major factors affecting mean cycle time. Yet mean cycle time
remained largely insensitive to the number and allocation of resources. The last model
from the Regression Tree analysis dropped mean cycle time only 3 percent from the
baseline.
JMP regressed the mean cycle time against the main effects and the two-way
interactions of the input variables (resources) in the simulation. The first model had a
total of 55 terms. We then directed JMP to execute a stepwise regression, with a
significance level of α = 0.05, to remove insignificant factors. JMP provided a summary of
the fit and sorted parameter estimates as output (Figure 28).
Summary of Fit
RSquare 0.931
RSquare Adj 0.899
Root Mean Square Error 0.027
Mean of Response 2.316
Observations (or Sum Wgts) 33.000
Figure 28. JMP Summary of Fit and Sorted Parameter regression estimates for
resource factor regression model.
The JMP output sorts the parameters in the resulting model by level of
significance. The final model shows the Leak and Flow and Clean Room resources as the
most significant factors. This confirms the findings from our earlier partition analysis.
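JMP's stepwise procedure is built into the package; the listing below is only a rough backward-elimination sketch of the same idea using statsmodels, with a random 33-by-10 matrix and a fabricated response standing in for the resource NOLH scenarios and their mean cycle times.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    X = rng.integers(1, 7, size=(33, 10)).astype(float)    # stand-in resource settings
    y = 2.5 - 0.03 * X[:, 2] - 0.02 * X[:, 5] + rng.normal(0, 0.02, 33)  # fake response

    def backward_eliminate(X, y, alpha=0.05):
        """Drop the least significant term until every remaining p-value is below alpha."""
        cols = list(range(X.shape[1]))
        while cols:
            fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
            pvals = fit.pvalues[1:]                         # skip the intercept
            worst = int(np.argmax(pvals))
            if pvals[worst] <= alpha:
                return cols, fit
            cols.pop(worst)
        return cols, None

    kept, fit = backward_eliminate(X, y)
    print("columns retained:", kept)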
[Figure 29 interaction profile matrix: mean cycle time plotted for pairwise combinations of the resource factors (Induction, 4044 Machine, L and F, Vib, Final Assembly, and the remaining stations).]
Figure 29. JMP Interaction Profiler for resource factor regression analysis. Circled in
black is the most significant interaction.
Solid, non-parallel lines indicate interactions (SAS Institute Inc, 2007). The Final
Assembly and Vibration (Vib) interaction (circled in Figure 29) is the most significant
two-way interaction. With two Final Assembly resources and one Vibration resource, the
predicted mean cycle time is the same as with the opposite configuration.
We also had JMP create a prediction profiler (Figure 30). The profiler is an
interactive tool that allows the modeler to adjust factor levels and see the resulting
predicted response.
[Figure 30 prediction profiler: predicted mean cycle time 2.046 ± 0.047 days with the Induction, 4044 Machine, L and F, Clean Room, Vib, and Final Assembly resources set to 2, 2, 3, 6, 1, and 2, respectively.]
Figure 30. JMP Prediction Profiler for resource factor regression analysis.
We adjusted the resources in the profiler (only the significant factors are
displayed) to achieve the minimum predicted mean cycle time. The model returned a
mean cycle time of 2.05 days when the number of resources at Leak and Flow, Clean
Room, and Induction were at their NOLH matrix upper bounds. Increasing the number of
resources at the most significant processes decreased the mean cycle time by only
11 percent, from 2.32 to 2.05 days. Neither an increase in the workforce nor a re-
allocation of the current workers would yield significant reductions in mean cycle time on
the repair line.
F. INCREASE ARRIVALS
The baseline analysis identified that the repair line operates far below maximum
capacity. This insight led us to seek the arrival rate that would drive the system to full
capacity. Under normal operating conditions, the Sidewinder Repair Line inducts
between one and twenty GCSs per week. The base model modeled the number of daily
arrivals with a triangular(1,2,4) distribution. We developed an experimental design that
increased the number of arrivals per day, while keeping all other parameters at their base
model values (Table 16).
Scenario    Arrivals per Day Distribution
Base        TRI(1,2,4)
1           TRI(1,3,5)
2           TRI(1,3,6)
3           TRI(1,4,7)
4           TRI(1,4,8)
5           TRI(1,4,9)
6           TRI(1,5,10)
7           TRI(1,5,11)
8           TRI(1,6,12)
Table 16. Scenarios for GCS increase arrival distribution per day.
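For illustration, daily arrival counts under any of these scenarios can be sampled as follows. This is a sketch only; Arena's TRIA distribution returns continuous values, and rounding each draw to a whole number of GCSs is our assumption about how the count is obtained.

    import random

    def daily_arrivals(lo, mode, hi):
        # random.triangular takes (low, high, mode)
        return round(random.triangular(lo, hi, mode))

    random.seed(1)
    draws = [daily_arrivals(1, 4, 9) for _ in range(10_000)]   # scenario 5: TRI(1,4,9)
    print(f"average arrivals per day: {sum(draws) / len(draws):.2f}")
    # close to the distribution mean (1 + 4 + 9) / 3 = 4.67 GCSs per day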
Arena conducted 20 replications for each of these eight scenarios and calculated
our metrics for each scenario (Table 17).
Scenario                     Base       1       2       3       4        5        6        7        8
Mean Cycle Time (days)       2.35    2.53    2.68    3.65   14.47   370.73   907.77  1101.86  1347.92
Standard Deviation           0.01    0.01    0.01    0.11    3.76    20.06    17.17    16.99    17.07
Throughput per Year        476.70  649.87  736.47  909.53  995.49  1052.09  1154.93  1208.24  1311.22
WIP                          4.32    6.31    7.59   12.76   55.40  1500.17  4032.37  5120.42  6797.76
Table 17. Base statistics (mean cycle time, standard deviation of mean cycle time,
throughput per year, and WIP) for increased arrival distributions.
Mean cycle time increased in a linear manner as we shifted the arrival rate from a
triangular(1,2,4) to a triangular(1,4,8) distribution. Interestingly, mean cycle time
exploded to more than 370 days when the arrival rate distribution shifted to a
triangular(1,4,9) distribution. The repair line appears to reach full capacity at this arrival
rate (Figure 31).
[Figure 31 bar chart, "Mean Cycle Time per Arrival Distribution": 2.35, 2.53, 2.68, 3.65, 14.47, and 370.73 days for the base through triangular(1,4,9) distributions.]
Earlier analysis revealed that the Clean Room process time has the greatest effect
on the mean cycle time, particularly through the maximum value of its triangular
distribution. Arena calculated the utilization rates for the Clean Room resources for each
of the arrival rate distributions (Figure 32).
[Figure 32 bar chart: Clean Room percent busy by arrival distribution; 53.75 for TRI(1,2,4), 72.75 for TRI(1,3,5), 80.62 for TRI(1,3,6), 94.22 for TRI(1,4,7), 99.48 for TRI(1,4,8), and 99.99 for TRI(1,4,9).]
Figure 32. Clean Room utilization rates per increased arrival distributions.
The Clean Room utilization rate approached 100 percent as the arrival rate
distribution shifted to triangular(1,4,9). Only perfect repair lines, without variability, can
achieve 100 percent utilization (Hopp & Spearman, 2008). The Clean Room queue also
had the longest GCS wait time for repair in the base SRLM analysis. Arena also
provided the Clean Room queue lengths over time (see Figure 33) and the mean cycle
times (Figure 34) for each of the arrival rate distributions.
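The blow-up in cycle time as the Clean Room approaches full utilization is consistent with the standard queueing behavior described by Hopp and Spearman (2008): mean waiting time grows without bound as utilization approaches one. The snippet below is a generic single-server (M/M/1) illustration of that relationship only, with utilization levels rounded from Figure 32; it is not a model of the Clean Room.

    # Textbook M/M/1 relationship: mean queue wait Wq = rho / (mu * (1 - rho)).
    service_rate = 1.0                       # one job per unit time, so waits are in service times
    for rho in (0.54, 0.73, 0.81, 0.94, 0.99):
        wq = rho / (service_rate * (1.0 - rho))
        print(f"utilization {rho:.0%}: mean wait of about {wq:5.1f} service times")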
Figure 33. GCSs in the Clean Room queue per replication length (at one, five and ten
years) highlighting triangular (1,4,7), (1,4,8), and (1,4,9) distributions.
Figure 34. GCS mean cycle time per replication length (at one, five and ten years)
highlighting triangular (1,4,7), (1,4,8), and (1,4,9) distributions.
The repair line reaches full capacity when the arrival rate is triangular(1,4,9). The
Clean Room resources are fully utilized at this rate, limiting the ability of the repair line
to reduce cycle time. The arrival rate of GCSs would need to double for the repair line to
reach full capacity.
G. SIMULATION OPTIMIZATION
The baseline analysis identified that the utilization rates for the resources in the
system were extremely low. This insight led us to investigate the impact reductions in
the workforce would have on mean cycle time and the other metrics. We also sought to
determine the resource allocation plan that would minimize mean cycle time. Arena
provides an optimization capability (OptQuest) that we utilized to conduct a simulation
optimization. The optimization sought to minimize mean cycle time subject to
constraints on the number of resources at different stations.
OptQuest is a tool that uses a simulation model constructed in Arena to search for
optimal solutions to a user-defined problem. When trying to evaluate the performance of
a system using various resources, one must first decide the inputs for the various
resources and then evaluate the performance for that particular arrangement of resources.
This provides a baseline for the performance of the system, but to see the effects of
varying the resources to increase the performance of the system, one must manually
change the number of resources and then run the simulation again. This manual method is
repetitive and cumbersome, and it can result in a poor search for ways to improve the
performance of the system.
OptQuest performs this search for an optimal solution based on the performance variable
the user selects. OptQuest updates and changes user-controlled variables within the
Arena simulation and evaluates the user-defined performance parameter, then repeats
until finding an optimal solution (Rockwell Automation, 2005). OptQuest uses the
heuristics of tabu search, neural networks, and scatter search, combining these heuristics
into a single fused algorithm to locate the optimal solution (Jie & Li, 2008).
Controls, responses, objectives, and constraints are the four main inputs required by
OptQuest. Controls are variables or resources defined in the Arena model. OptQuest
automatically assigns a control value to the resources defined in the model. The user
selects a low and high bound for the controls. Responses are outputs of the simulation and
can be included in the objective function or in constraints. The objective is the function
that OptQuest is trying to minimize or maximize, based on the defined performance of
the system. The objective function will include a selected response variable. Constraints
define relationships between controls and responses to assist in the efficiency of the
optimization (Rockwell Automation, 2005).
OptQuest used the SRLM to determine the number of resources per process
station that minimizes the mean cycle time. The baseline SRLM had the normal
operating conditions resource configuration (Table 18). Note that schedules do not bind
these resources in OptQuest; rather, the resource capacity remains constant (there are no
break times) throughout the optimization.
Induction Station 1
4044 Machine 2
Clean Room Station 5
L and F Station 2
Boresite Station 3
Rate Table Station 5
Assembly Station 1
Vib Station 1
Final Assembly Station 1
Final Inspection Station 1
Painter 1
Table 18. Baseline SRLM resource configuration (22 total resources + one Painter).
We modeled the Paint Room resource (Painter) as having fixed capacity in the
optimization. The Sidewinder Repair Line management does not have any direct
influence on the TYAD Industrial Facility operations, and requested that we omit this
worker from the analysis. We developed upper and lower bounds for the capacity of the
remaining controls, fixing all lower bounds at one and setting all upper bounds equal to
one more than the base model value (Table 19).
Control Suggested Value Lower Bound Upper Bound
Induction Station 1 1 2
4044 Machine 2 1 3
L and F Station 2 1 4
Boresite Station 3 1 4
Rate Table Station 5 3 6
Clean Room Station 5 3 6
Assembly Station 1 1 2
Vib Station 1 1 2
Final Assembly Station 1 1 2
Final Inspection Station 1 1 2
Table 19. Current, lower and upper resources bounds for optimization.
The response selected for this optimization was a user-specified tally value of
cycle time. The tally value was the mean GCS cycle time throughout all replications.
The constraint for the optimization scenarios was the resource total. Based on the low
resource utilization rates found in the base analysis, we chose to constrain the optimization
both above and below the current capacity (22). We began with a resource total of no
more than 16 for the first scenario and then incremented the total to 18, 20, 22, and then 24.
The objective function was to minimize the response of tally time (Tally 1, mean cycle
time in days). Summarized below is the simulation optimization model in standard Naval
Postgraduate School (NPS) format (Brown & Dell, 2007).
Decision Variables: [units]
Formulation:
Objective:
    $\min V$
subject to:
    $\sum_{i,k} t_{i,k} X_{i,k} \le V$
    $\sum_{i,k} k\, X_{i,k} \le c$
    $\sum_{k} X_{i,k} = 1 \quad \forall i$
where:
    $X_{i,k} \in \{0,1\} \quad \forall i, k$
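OptQuest's tabu search, neural network, and scatter search heuristics are proprietary; the listing below is only a toy random search over the same kind of bounded resource allocations, against a placeholder simulate function, to show the shape of a simulation-optimization loop. The allocation bounds are taken from Table 19; everything else is invented for illustration.

    import random

    # Lower/upper resource bounds per station, as in Table 19 (Painter excluded).
    BOUNDS = {"Induction": (1, 2), "4044 Machine": (1, 3), "L and F": (1, 4),
              "Boresite": (1, 4), "Rate Table": (3, 6), "Clean Room": (3, 6),
              "Assembly": (1, 2), "Vib": (1, 2), "Final Assembly": (1, 2),
              "Final Inspection": (1, 2)}
    MAX_RESOURCES = 16           # the "no more than 16" constraint of the first scenario

    def simulate(alloc):
        """Placeholder for a replicated SRLM run; returns a fake mean cycle time (days)."""
        rng = random.Random(sum(alloc.values()))
        return 2.2 + 3.0 / sum(alloc.values()) + rng.uniform(0.0, 0.05)

    def random_allocation():
        return {s: random.randint(lo, hi) for s, (lo, hi) in BOUNDS.items()}

    random.seed(3)
    best, best_time = None, float("inf")
    for _ in range(500):                                  # crude random search, not OptQuest
        alloc = random_allocation()
        if sum(alloc.values()) > MAX_RESOURCES:           # resource-total constraint
            continue
        cycle_time = simulate(alloc)
        if cycle_time < best_time:
            best, best_time = alloc, cycle_time

    print(f"best fake mean cycle time: {best_time:.4f} days")
    print(best)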
The optimization ran three replications (based on time restrictions), 100 years per
replication. Arena identified the top ten permutations and ran an additional seven
replications, to estimate mean cycle time for each permutation. We show the results for
these ten permutations, with the sum of resources constraint “no more than 16,” below
(Table 20).
Induction Station, 4044 Machine, L and F Station, Boresite Station, Rate Table Station, Clean Room Station, Assembly Station, Vib Station, Final Assembly Station, Final Inspection Station, Sum of Resources, Tally 1 (days)
1 1 2 1 3 4 1 1 1 1 16 2.2968
1 1 2 2 3 3 1 1 1 1 16 2.4010
2 1 2 1 3 3 1 1 1 1 16 2.4095
1 1 3 1 3 3 1 1 1 1 16 2.4120
1 1 2 1 4 3 1 1 1 1 16 2.4147
1 1 2 1 3 3 2 1 1 1 16 2.4193
1 1 2 1 3 3 1 1 1 1 15 2.4194
1 1 1 1 3 5 1 1 1 1 16 2.4238
1 1 2 1 3 3 1 1 1 2 16 2.4247
1 2 2 1 3 3 1 1 1 1 16 2.4249
Table 20. Top 10 resource allocations, based on lowest mean cycle time and sum of
resources “no more than 16.”
We repeated this approach four more times, increasing the maximum number of
resources allowed by two each time (see Appendix C top 50 allocations). OptQuest rank-
ordered the resulting twenty outputs by mean cycle time (Table 21). The yellow
highlighted row identifies the configuration of resources with the lowest mean cycle time,
while the gray highlighted row identifies the optimal configuration of resources for the
base case resource capacity.
2 2 2 2 5 5 2 1 2 1 24 2.2261
2 2 2 2 5 5 2 1 1 1 23 2.2288
2 2 2 3 5 5 2 1 1 1 24 2.2294
2 2 2 2 3 5 2 2 2 2 24 2.2296
2 2 2 3 3 6 2 1 2 1 24 2.2312
2 1 3 2 3 5 2 2 2 2 24 2.2326
2 3 2 2 3 5 2 1 2 1 23 2.2333
2 2 2 2 3 5 2 1 1 1 21 2.2342
2 2 2 3 3 5 2 1 2 1 23 2.2344
2 2 2 2 6 5 2 1 1 1 24 2.2345
1 1 2 4 4 6 1 1 1 1 22 2.2532
2 2 2 1 3 6 1 2 2 1 22 2.2537
1 2 2 3 5 5 1 1 1 1 22 2.2548
1 1 2 3 5 6 1 1 1 1 22 2.2565
2 2 2 1 3 6 1 1 2 2 22 2.2565
1 2 2 2 6 5 1 1 1 1 22 2.2567
1 2 2 2 3 6 1 2 1 1 21 2.2587
2 2 2 1 3 5 1 2 2 2 22 2.2606
2 1 2 2 3 4 2 2 1 1 20 2.2610
1 1 2 4 3 6 1 1 1 1 21 2.2611
Table 21. Top 20 resource allocations based on lowest mean cycle time and sum of
resources “no more than 24.”
Not surprisingly, utilizing 24 resources yielded the minimum mean cycle time.
More surprisingly, adding two resources beyond the base case did not result in a significant
reduction in mean cycle time. The optimal configuration with 24 resources reduced mean
cycle time by slightly more than 1 percent, from 2.25 to 2.23 days. OptQuest also provided
insights into which resource allocation plans, constrained by the base case number of
resources (22), minimized mean cycle time (Table 22).
Induction Station, 4044 Machine, L and F Station, Boresite Station, Rate Table Station, Clean Room Station, Assembly Station, Vib Station, Final Assembly Station, Final Inspection Station, Sum of Resources, Tally 1 (days), Percent Change from Baseline
2 2 2 2 5 5 2 1 2 1 24 2.2261 -1.27
2 2 2 2 5 5 2 1 1 1 23 2.2288 -1.15
2 2 2 3 5 5 2 1 1 1 24 2.2294 -1.12
2 2 2 2 3 5 2 2 2 2 24 2.2296 -1.12
2 2 2 3 3 6 2 1 2 1 24 2.2312 -1.05
2 1 3 2 3 5 2 2 2 2 24 2.2326 -0.98
2 3 2 2 3 5 2 1 2 1 23 2.2333 -0.95
2 2 2 2 3 5 2 1 1 1 21 2.2342 -0.91
2 2 2 3 3 5 2 1 2 1 23 2.2344 -0.90
2 2 2 2 6 5 2 1 1 1 24 2.2345 -0.90
1 1 2 4 4 6 1 1 1 1 22 2.2532 -0.07
2 2 2 1 3 6 1 2 2 1 22 2.2537 -0.05
1 2 2 3 5 5 1 1 1 1 22 2.2548 0.00
Table 22. Top resource allocations above (lower mean cycle time) base case
allocation highlighted in gray
Twelve resource allocation plans yielded smaller mean cycle times than the base
model. We obtained a mean cycle time of 2.23 days with only 21 resources. The repair
line could reduce mean cycle time from 2.25 to 2.23 days with one fewer resource (Table
22 yellow highlight). OptQuest also provided insights into which resource allocation
plans, constrained by fewer resources than in the base case (22), minimized mean cycle
time (Table 23).
Induction Station, 4044 Machine, L and F Station, Boresite Station, Rate Table Station, Clean Room Station, Assembly Station, Vib Station, Final Assembly Station, Final Inspection Station, Sum of Resources, Tally 1 (days), Percent Change from Baseline
1 2 2 3 5 5 1 1 1 1 22 2.2548 0.00
1 1 2 3 5 6 1 1 1 1 22 2.2565 0.07
2 2 2 1 3 6 1 1 2 2 22 2.2565 0.08
1 2 2 2 6 5 1 1 1 1 22 2.2567 0.08
1 2 2 2 3 6 1 2 1 1 21 2.2587 0.18
2 2 2 1 3 5 1 2 2 2 22 2.2606 0.26
2 1 2 2 3 4 2 2 1 1 20 2.2610 0.27
1 1 2 4 3 6 1 1 1 1 21 2.2611 0.28
2 1 2 1 3 5 1 1 1 1 18 2.2642 0.42
1 1 3 1 3 5 1 1 1 1 18 2.2742 0.86
1 1 2 1 4 5 1 1 1 1 18 2.2757 0.93
1 1 2 1 3 5 1 1 1 2 18 2.2758 0.93
1 1 2 1 3 6 1 1 1 1 18 2.2762 0.95
2 1 3 1 3 4 1 1 1 1 18 2.2778 1.02
1 1 2 1 3 5 1 2 1 1 18 2.2785 1.05
1 1 2 1 3 5 1 1 1 2 18 2.2786 1.06
2 1 2 1 3 4 1 2 1 1 18 2.2791 1.08
2 1 2 1 3 4 1 1 2 1 18 2.2791 1.08
1 1 2 1 3 4 1 1 1 1 16 2.2968 1.86
Table 23. Resource allocations below (higher mean cycle time) base case allocation
highlighted in gray. Yellow highlights are best allocations with two or
less resources from the base case of 22.
The optimal resource allocation plan with 20 resources yielded a mean cycle time
of 2.26 days, a 0.27 percent increase from the base case. The optimal resource allocation
plan with 18 resources yielded a mean cycle time of 2.26 days, a 0.42 percent increase
from the base case. The optimal resource allocation plan with 16 resources yielded a
mean cycle time of 2.30 days, a 1.86 percent increase from the base case. The repair line
could reduce the number of resources from 22 to 16 (27 percent) and experience an
increase in mean cycle time of only 1.86 percent.
V. CONCLUSIONS AND FUTURE RESEARCH
The mean cycle time for the TYAD Sidewinder repair line under current
operating conditions is 2.35 days. At this rate, the repair line repairs approximately 476 GCSs per year. The
repair line operates far below maximum capacity. Workers at ten of the eleven stations
have a less than 30 percent utilization rate. Workers at the Clean Room have the highest
utilization rate at 54 percent. The process times at the Clean Room have the greatest
impact on the mean cycle time and reductions in these times would lead to the greatest
decrease in the mean cycle time in the simulation. The repair line does not achieve full
operating capacity until the GCS arrival rate doubles. Re-allocation of the current
workforce to an optimal configuration will reduce mean cycle time by less than 1 percent.
TYAD could reduce the workforce at the repair line by 27 percent and only experience a
1.9 percent increase in mean cycle time.
Several additions to the work discussed in this thesis could prove useful to
TYAD. Follow-on work could include better collection of process time data at TYAD to
further enhance the station process time distributions. Building a sub-model of the Clean
Room station to gather summary statistics and determine significant factors affecting
Clean Room process time could guide the implementation of time-saving measures.
Expanding the SRLM to include the entire GCS repair process, from customer
identification of faults to the return of a repaired GCS, would give TYAD a better
understanding of how it supports its customers. Expanding the model to include wait
time for repair parts not on-hand and failure times for machines and equipment would
provide a more accurate picture of the repair line. Conducting a cost-benefit analysis that
considered the loss or gain of cycle time against the addition or deletion of resources would
better inform TYAD on the budgetary implications of its policies for the repair line.
APPENDIX A. SRLM ANIMATION
Figure 35. SRLM animation screen shot from Arena. Mimics Sidewinder floor layout
with GCSs (silver and red) moving through the repair process. Red GCSs
signify the GCS shell visited the Paint Room.
APPENDIX B. TREES AND REGRESSION MODELS
Summary of Fit
RSquare 0.978924
RSquare Adj 0.97733
Root Mean Square Error 0.014251
Mean of Response 2.841829
Observations (or Sum Wgts) 129
[Partition tree: R-square 0.777, N = 129, 3 splits; all rows: count 129, mean 2.842, std dev 0.095. Splits on Clean Room Tri-Max at 84.5 and 97.44; leaf means 2.720, 2.792, 2.875, and 2.948 days.]
Figure 36. Summary of Fit, Sorted Parameter regression estimates, and Partition Tree
for 30 percent increase simulation run. Mean cycle time of 2.8 days, R-
square of 0.98, and the Clean Room triangular distribution parameters of
max, mode, and min are the most significant factors affecting mean cycle
time.
Summary of Fit
RSquare 0.984603
RSquare Adj 0.98301
Root Mean Square Error 0.017325
Mean of Response 3.012853
Observations (or Sum Wgts) 129
[Partition tree: R-square 0.729, N = 129, 3 splits; all rows: count 129, mean 3.013, std dev 0.133.]
Figure 37. Summary of Fit, Sorted Parameter regression estimates, and Partition Tree
for 40 percent increase simulation run. Mean cycle time of 3.0 days, R-
square of 0.98, and the Clean Room triangular distribution parameters of
max, mode, and min are the most significant factors affecting mean cycle
time.
Summary of Fit
RSquare 0.984745
RSquare Adj 0.983167
Root Mean Square Error 0.022692
Mean of Response 3.195984
Observations (or Sum Wgts) 129
[Partition tree: R-square 0.752, N = 129, 3 splits; all rows: count 129, mean 3.196, std dev 0.175.]
Figure 38. Summary of Fit, Sorted Parameter regression estimates, and Partition Tree
for 50 percent increase simulation run. Mean cycle time of 3.2 days, R-
square of 0.98, and the Clean Room triangular distribution parameters of
max, mode, and min are the most significant factors affecting mean cycle
time.
APPENDIX C. OPTQUEST RESULTS
Induction Station, 4044 Machine, L and F Station, Boresite Station, Rate Table Station, Clean Room Station, Assembly Station, Vib Station, Final Assembly Station, Final Inspection Station, Sum of Resources, Tally 1 (days), Percent Change from Baseline
2 2 2 2 5 5 2 1 2 1 24 2.2261 -1.27
2 2 2 2 5 5 2 1 1 1 23 2.2288 -1.15
2 2 2 3 5 5 2 1 1 1 24 2.2294 -1.12
2 2 2 2 3 5 2 2 2 2 24 2.2296 -1.12
2 2 2 3 3 6 2 1 2 1 24 2.2312 -1.05
2 1 3 2 3 5 2 2 2 2 24 2.2326 -0.98
2 3 2 2 3 5 2 1 2 1 23 2.2333 -0.95
2 2 2 2 3 5 2 1 1 1 21 2.2342 -0.91
2 2 2 3 3 5 2 1 2 1 23 2.2344 -0.90
2 2 2 2 6 5 2 1 1 1 24 2.2345 -0.90
1 1 2 4 4 6 1 1 1 1 22 2.2532 -0.07
2 2 2 1 3 6 1 2 2 1 22 2.2537 -0.05
1 2 2 3 5 5 1 1 1 1 22 2.2548 0.00
1 1 2 3 5 6 1 1 1 1 22 2.2565 0.07
2 2 2 1 3 6 1 1 2 2 22 2.2565 0.08
1 2 2 2 6 5 1 1 1 1 22 2.2567 0.08
1 2 2 2 3 6 1 2 1 1 21 2.2587 0.18
2 2 2 1 3 5 1 2 2 2 22 2.2606 0.26
2 1 2 2 3 4 2 2 1 1 20 2.2610 0.27
1 1 2 4 3 6 1 1 1 1 21 2.2611 0.28
2 1 2 1 3 5 1 1 1 1 18 2.2642 0.42
1 1 3 1 3 5 1 1 1 1 18 2.2742 0.86
1 1 2 1 4 5 1 1 1 1 18 2.2757 0.93
1 1 2 1 3 5 1 1 1 2 18 2.2758 0.93
1 1 2 1 3 6 1 1 1 1 18 2.2762 0.95
2 1 3 1 3 4 1 1 1 1 18 2.2778 1.02
1 1 2 1 3 5 1 2 1 1 18 2.2785 1.05
1 1 2 1 3 5 1 1 1 2 18 2.2786 1.06
2 1 2 1 3 4 1 2 1 1 18 2.2791 1.08
2 1 2 1 3 4 1 1 2 1 18 2.2791 1.08
1 1 2 1 3 4 1 1 1 1 16 2.2968 1.86
2 1 3 1 4 3 1 1 2 1 19 2.3944 6.19
2 1 3 1 4 3 1 2 2 1 20 2.3949 6.22
2 1 3 1 5 3 1 1 2 1 20 2.3967 6.30
2 1 3 1 5 3 1 1 1 2 20 2.3970 6.31
2 1 2 1 6 3 1 2 1 1 20 2.3973 6.32
2 1 2 1 3 3 1 2 1 1 17 2.3980 6.35
2 1 3 1 3 3 1 1 2 1 18 2.3981 6.36
2 1 3 1 6 3 1 1 1 1 20 2.3987 6.38
2 1 3 1 3 3 1 1 1 2 18 2.3993 6.41
2 1 2 1 4 3 1 1 1 1 17 2.3994 6.41
1 1 2 2 3 3 1 1 1 1 16 2.4010 6.49
2 1 2 1 3 3 1 1 1 1 16 2.4095 6.86
1 1 3 1 3 3 1 1 1 1 16 2.4120 6.97
1 1 2 1 4 3 1 1 1 1 16 2.4147 7.09
1 1 2 1 3 3 2 1 1 1 16 2.4193 7.30
1 1 2 1 3 3 1 1 1 1 15 2.4194 7.30
1 1 1 1 3 5 1 1 1 1 16 2.4238 7.50
1 1 2 1 3 3 1 1 1 2 16 2.4247 7.53
1 2 2 1 3 3 1 1 1 1 16 2.4249 7.54
Table 24. Top 50 resource allocation results from OptQuest optimization (base case
highlighted in gray).
LIST OF REFERENCES
April, J., Glover, F., Kelly, J. P., & Laguna, M. (2003). Practical introduction to
simulation optimization. Proceedings of the 2003 Winter Simulation Conference,
(pp. 1–7).
Cioppa, T. M., & Lucas, T. W. (2007). Efficient nearly orthogonal and space-filling Latin
hypercubes. Technometrics, 45–55.
Doerr, K. H., Kang, K., & Sanchez, S. M. (2006). A design of experiments approach to
readiness risk analysis. Proceedings of the 2006 Winter Simulation Conference,
(pp. 1332–1339).
Esopi, R. (2009, March). Process Time Data Sets. (T. A. Caliguire, Interviewer).
Hopp, W. J., & Spearman, M. L. (2008). Factory Physics. New York: McGraw–
Hill/Irwin.
Jie, W., & Li, L. (2008). Simulation for constrained optimization of inventory system by
using Arena and OptQuest. International Conference on Computer Science and
Software Engineering (pp. 313–316). IEEE Computer Society.
Kelton, W. D., Sadowski, D. A., & Sadowski, R. P. (1998). Simulation with Arena.
McGraw–Hill.
Law, A. M., & Kelton, W. D. (2000). Simulation Modeling and Analysis. McGraw–Hill.
Montgomery, D. C., Peck, E. A., & Vining, G. (2006). Introduction to Linear Regression
Analysis. Hoboken: John Wiley & Sons, Inc.
Naval Postgraduate School. (2007). Software Downloads. Retrieved March 2009, from
SEED Center for Data Farming: http://harvest.nps.edu
OptTek Systems Inc. (2000, October 16). Combining simulation & optimization for
improved business decisions. A White Paper from OptTek. Boulder, CO, USA.
Rockford Consulting. (2000). Rockford Consulting Group Lean Manufacturing.
Retrieved January 2009, from Rockford Consulting Group:
http://rockfordconsulting.com/lean-manufacturing-consulting-firm.htm
U.S. Army. (2009, January). Retrieved February 2009, from Tobyhanna Army Depot:
http://www.tobyhanna.army.mil
U.S. Navy. (2009). United States Navy Fact File—Sidewinder AIM–9. Retrieved
February 2009, from Navy: http://www.navy.mil/navydata/fact
INITIAL DISTRIBUTION LIST
6. Jim Dwyer
Army Material Command
Fort Belvoir, Virginia