Using A Fun Six Sigma Project to Teach Quality Concepts, Tools, and Techniques
Dr. Mustafa Shraim, Ohio University
Dr. Mustafa Shraim is an Assistant Professor in the Department of Engineering Technology & Man-
agement at Ohio University in Athens, Ohio. He received both of his B.S. and M.S. degrees from Ohio
University, and a Ph.D. in Industrial Engineering from West Virginia University. He has over 20 years
of industrial experience as a quality engineer, manager, and consultant in quality management systems,
statistical methods, and Lean/ Six Sigma. In addition, he coaches and mentors Green & Black Belts
on process improvement projects in both manufacturing and service. He is a Certified Quality Engineer
(CQE) & a Certified Six Sigma Black Belt (CSSBB) by ASQ, and a certified QMS Principal Auditor by
IRCA in London. He was elected a Fellow by ASQ in 2007.
© American Society for Engineering Education, 2018
Using A Fun Six Sigma Project to Teach Quality Concepts, Tools, and
Techniques
Abstract
Research has shown that students learn better if they are engaged in, and motivated to struggle
with, their own learning [5]. For this reason, if no other, students appear to learn better if they
work cooperatively in small groups to solve problems. Furthermore, learning quality engineering
concepts, such as variation, using traditional methods can be challenging for many college
students with no prior background. It makes it even more challenging when methods such as
statistical process control, process capability analysis, and design of experiments are involved.
This paper presents a Six Sigma project utilizing a catapult as a process with multiple
controllable factors as input variables and the distance where a ball lands as the output
(dependent variable). The aim is to minimize variation and attain a target distance. The Six
Sigma improvement model: Define-Measure-Analyze-Improve-Control (DMAIC) was
employed. Each member of the team assumed the role of a project leader for at least one of the
DMAIC phases. In addition to applying quality tools manually, students also utilized statistical
software to analyze experimental data.
Results show that students were able to take an existing process and make significant
improvements in terms of reducing variation and centering the process using the tools and
techniques learned in class throughout the semester. In their presentations and feedback, teams
commented on how this learning-by-doing experience has helped them see how such tools can be
used together.
Introduction
Teaching statistics and applied statistical methods can be challenging for both educators and
students. Students may not be ready, often lacking sufficient mathematical or statistical
preparation [1]. As a result, it is not uncommon for students to hold misconceptions about
statistics, in addition to a lack of interest. Many students have a negative attitude when it comes
to learning statistics, along with the anxiety that comes with it [2]. As misconceptions and attitudes have been
found to correlate with performance in statistics courses [3], changing them can be challenging
for educators [4].
Research shows that students learn better if they are engaged in, and motivated to struggle with,
their own learning. For this reason, if no other, students appear to learn better if they work
cooperatively in small groups to solve problems [5]. Collaborative learning has been described
in college-level statistics courses in various forms [6-10]. Educators employing collaborative or
cooperative learning methods reported greater student satisfaction with the learning experience
[8, 9], reduction of anxiety [10, 11], and concluded that team performance was greater than
what individual students could have achieved working independently [6, 10]. Similar results were
found in applied statistics courses where frequent and regular encounters of planned
collaborative work appear to be effective in improving performance for undergraduate students
[13].
The three essential elements for collaborative learning are: co-labor, intentional design, and
meaningful learning [15]. That is, everyone on the team must be actively engaged (co-labor) in
an activity or peer-led project designed to complement the course learning outcomes (intentional
design). As a result, this activity or project will increase students' knowledge and understanding
of course content (meaningful learning).
Combining collaborative learning with a Six Sigma project using a process improvement
methodology like DMAIC can have many benefits. Six Sigma training using projects is more
effective than traditional statistical courses and is even used in master's-level courses [16],
[17]. Cudney and Kanigolla found that inclusion of a Lean Six Sigma project had a positive
impact on students’ learning of concepts included in the course [18, 19].
Another issue is the fact that the course includes many tools and techniques that are traditionally
taught as individual topics. Linking these tools together using a quality improvement project
methodology like Six Sigma demonstrates how they are used in a systematic way.
The Process
Learning-by-doing for a Six Sigma project requires availability of a process that needs
improvement. Finding such a process in a college environment can be difficult, particularly with
logistics, timing, etc., where a real project may take 3 to 6 months to complete. This becomes
more challenging when multiple teams of students are involved and looking for such processes.
Therefore, a process needs to be available to students throughout the semester to ensure the
completion of all the project phases in a timely manner. Furthermore, one of the statistical
techniques of interest is design of experiments (DOE). Applying this off-line method at an
external organization only adds to the challenge.
With these requirements and limitations, it would be best to use a process simulator that can be
readily available to students. Furthermore, it is important that the process simulator not be
computer-based and that it require physical cooperation among team members in making process
adjustments to variables and measuring the response.
One of the best process simulators to satisfy the above requirements is the catapult. The catapult
launches a small ball (like a table-tennis ball), based on a given setup. Therefore, the response
(dependent variable) is the travelled distance when the ball first touches the floor (sometimes
called in-flight distance). This in-flight distance can be affected by many controllable factors.
However, for this project we used the following factors:
A. Tension setting - fixed arm
B. Tension setting - moving arm
C. Ball seat
D. Elevation
E. Ball Type
F. Height of catapult placement
G. Reclining distance before release
The in-flight distance is measured using a tape measure to the closest inch. This is done visually
by an inspector. As a result, the determined distance will also include variation from the
measurement system, mainly the inspector.
Project Details
This project is an element of a required Quality Improvement course taught at a major
Midwestern public university. Below are some of the learning outcomes of this course that relate
to the Six Sigma project:
• Apply knowledge of engineering and statistical fundamentals to solve technical problems
• Understand the concept of variation and statistical quality control
• Understand how a company can address continuous improvement programs using Six
Sigma or the seven-step A3 process
• Select and use the appropriate quality control or management and planning tool
• Work in a team environment to complete a project using applicable tools identified in
this course and report results in written and presentation formats
This project follows the Six Sigma DMAIC methodology, where the catapult is used as a process.
The “product” is the horizontal traveled (in-flight) distance between the catapult itself and the
point where the ball first hits the ground. The measurement is visually taken by an inspector
using a measuring tape. The actual specifications (customer needs) are to hit the target value
consistently with minimal variation. The students work in teams of four or five each.
For each phase (milestone) of the project, there is a list of deliverables that each team must
produce by a due date. One of the deliverables in the Define phase is the project schedule or
Gantt chart. This chart is used as a tool for outlining steps that need to be taken to complete each
phase along with due dates and responsibilities. Table 1 lists minimum deliverables for each
phase.
Table 1: Deliverables for A Six Sigma Project
a. Measurement System Analysis: The distance readings were evaluated with a paired
t-test using the hypotheses:
H0: d̄ = 0
H1: d̄ ≠ 0
The hypothesis testing was performed at the α = 0.05 level of significance. If there was
a significant difference, corrective action would be taken to bring the readings closer,
verified by running the test again. About half of the teams reported issues initially,
which were then resolved by creating a standardized way of reading the measured
distances. Figure 1 displays the results of the paired t-test for one of the teams.
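The paired-comparison arithmetic behind this check can be sketched in a few lines. The readings below are hypothetical, and the computation simply mirrors the paired t-test the teams ran in their statistical software:

```python
import math
from statistics import mean, stdev

# Hypothetical paired readings (inches) of the same ten launches,
# taken independently by two readers of the tape measure
insp_a = [88, 91, 85, 90, 87, 89, 92, 86, 88, 90]
insp_b = [87, 91, 86, 90, 88, 89, 91, 86, 89, 90]

# Paired t-test: H0: mean difference = 0 vs. H1: mean difference != 0
d = [a - b for a, b in zip(insp_a, insp_b)]
t_stat = mean(d) / (stdev(d) / math.sqrt(len(d)))

# Two-tailed critical value t(0.025, df = 9) for alpha = 0.05
t_crit = 2.262
adequate = abs(t_stat) < t_crit
print(round(t_stat, 2), adequate)  # -0.43 True
```

If `adequate` is false, the team standardizes the reading method and reruns the test, as described above.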
b. SPC Chart: Once the measurement system is deemed adequate, a variable control (X-
bar and R) chart was used to study variation and the stability of the process. Each
team member took five catapult launches in a row to make a sample while another
located where the ball landed (inspector) and read the measurement to a third student
(recorder) who manually entered the numbers onto a control chart template. This
rotation took place until 25 samples were generated (Figure 2). Each team could generate
only five samples (subgroups) at a time, to simulate shifts, so data collection was
completed over a period of at least five days. These data were used as the baseline for
improvement.
The team determined the average and the range for each sample and plotted them on
the chart manually during sampling and by using the software later. After about 25
different samples, the centerline and control limits were determined and graphed on
the control chart. Control chart rules were followed, and actions were taken as needed
[20].
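The control-limit arithmetic the teams performed by hand can be sketched as follows. The subgroup data are hypothetical; the A2, D3, and D4 constants are the standard Shewhart values for subgroups of five:

```python
from statistics import mean

# Shewhart chart constants for subgroup size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

# Hypothetical subgroups: each is five consecutive launches (distance in inches)
subgroups = [
    [88, 90, 87, 89, 91],
    [86, 88, 90, 87, 89],
    [92, 89, 88, 90, 87],
    [87, 91, 89, 88, 90],
]

# Sample averages and ranges, then the grand average and average range
xbars = [mean(s) for s in subgroups]
ranges = [max(s) - min(s) for s in subgroups]
xbarbar, rbar = mean(xbars), mean(ranges)

# Control limits for the Xbar chart and the R chart
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar
```

In the project, the same calculation was done over all 25 subgroups before the centerlines and limits were drawn on the chart.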
[Figure: Xbar-R chart of the baseline data (25 samples); Xbar chart: X̄ = 88.54, LCL = 85.471; R chart: R̄ = 5.32, UCL = 11.25, LCL = 0]
[Figure: Process capability analysis of the baseline data: LSL = 78.5, USL = 81.5, Sample Mean = 88.54, N = 125, StDev(Within) = 2.42639, StDev(Overall) = 2.73832; Cp = 0.21, Cpk = -0.97; Pp = 0.18, Ppk = -0.86]
c. Design of Experiments (DoE): This was the most challenging tool for students to use,
but it helped in identifying which factors to control for minimizing variation in the
distance and in locating the best settings for the optimum. This team used a 3^k
factorial design with three factors (k = 3), each at three levels, for a total of 27
combinations. The experiment was replicated once, for a total of N = 54 runs. It
should be mentioned here that teams were free to choose an appropriate design as
long as they included at least three factors. Results of the design of experiments
included analysis of variance (Table 2), factorial plots (Figure 5), and interaction
plots (Figure 6). Results indicated which factors must be controlled closely (the most
significant). As for interactions, even though they showed statistical significance,
their contribution was minimal when compared with the main effects (controllable
factors).
Table 2: Analysis of Variance
[Figure 5: Catapult experiment main effects plots: fitted means of distance for Band (Black, White, Plain), Moving Tension (Low, Med, High), and Ball Seat (Low, Med, High)]
[Figure 6: Interaction plots for Band * Ball Seat and Moving Tension * Ball Seat]
b. SPC Chart: After the implementation of the action plan, the process “after
improvement” is sampled. Each team repeated the data collection process on a control
chart, similar to what was done in the Measure phase. Figure 7 shows process
performance after improvements were made as compared to the baseline. It can be
seen in Figure 7 that significant improvements were made in reducing variation and
moving the process toward the target value of 80 inches.
[Figure 7: Xbar-R chart for baseline (Stage 1) and after improvement (Stage 2); Stage 2 Xbar chart: X̄ = 79.92, UCL = 80.76, LCL = 79.08; Stage 2 R chart: R̄ = 1.46, UCL = 3.09, LCL = 0]
c. Process Capability Analysis: This was again run using statistical software with the
"after improvement" data. Process capability indices (Cp and Cpk), among other
measures, were compared against what was obtained in the Measure phase. The
standard deviation was reduced by about 70%. Similarly, Cp and Cpk show
significant improvements but remain below the standard capability requirement of
being equal to or greater than 1.0. This is because the tolerance was set arbitrarily,
and on the narrow (tight) side, to seek greater improvement. Table 3 summarizes statistics
before and after improvement. Figure 8 displays the process capability analysis after
improvement.
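The index arithmetic behind these comparisons reduces to a few lines. The "after improvement" distances below are hypothetical; the specification limits are the project's 78.5 and 81.5 inches, and as a simplification the overall standard deviation is used for both indices (the software's Cp/Cpk use a within-subgroup estimate instead):

```python
from statistics import mean, stdev

# Specification limits from the project (target of 80 inches, tolerance 78.5-81.5)
lsl, usl = 78.5, 81.5

# Hypothetical "after improvement" distances (inches)
data = [79.9, 80.1, 79.5, 80.4, 79.8, 80.2, 79.7, 80.0, 80.3, 79.6]

xbar, s = mean(data), stdev(data)

# Cp compares the tolerance width to the process spread;
# Cpk also penalizes off-center processes
cp = (usl - lsl) / (6 * s)
cpk = min((usl - xbar) / (3 * s), (xbar - lsl) / (3 * s))
```

A process centered far off target, like the baseline here, can even produce a negative Cpk, because the nearer specification limit falls on the wrong side of the mean.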
[Figure 8: Process capability analysis after improvement: LSL = 78.5, USL = 81.5, Sample Mean = 79.92, N = 125, StDev(Within) = 0.720827, StDev(Overall) = 0.776157; Cp = 0.69, Cpk = 0.66; Pp = 0.64, Ppk = 0.61]
5. Control: This phase is concerned with implementing measures to ensure that realized
improvements are sustained in the long run. For this project, it included the following
deliverables:
a. Control Plan / Instructions: This is designed for future users of the catapult so that
the process stays in control. In real-world situations, this may also be used for
training purposes.
b. Ongoing SPC Chart: A long-term control chart is used to plot data so it can be
studied for out-of-control conditions over a long period of time to ensure
sustainability. At set points, say 30, 60, and 90 days, this information can be used to
run process capability analyses and compare them against the original improvements.
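A minimal monitoring check for such an ongoing chart is flagging any point beyond the control limits (Western Electric rule 1). The new subgroup means below are hypothetical; the limits are the Stage 2 Xbar-chart limits reported in Figure 7:

```python
# Stage 2 Xbar-chart limits from the improved process
ucl, lcl = 80.76, 79.08

def out_of_control(points, ucl, lcl):
    """Return indices of subgroup means beyond the control limits."""
    return [i for i, x in enumerate(points) if x > ucl or x < lcl]

# Hypothetical subgroup means collected by a future user of the catapult
new_means = [79.9, 80.2, 80.9, 79.8, 79.0]
print(out_of_control(new_means, ucl, lcl))  # [2, 4]
```

Flagged points would prompt the future user to consult the control plan and investigate before continuing.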
Concluding Remarks
This project was instrumental in achieving the objectives of this course: applying knowledge of
engineering and statistical fundamentals to solve technical problems and collaboratively
completing the project phases using applicable tools and techniques. Using the catapult as a process helped
achieve our objectives in a timely manner. Students were able to identify and remove variation
from the output by applying root cause analysis methodology. Teams were able to see how
improvements can be made and sustained when using such methods.
Industry is always looking for an incoming workforce that can lead projects, use statistical methods
to analyze problems, and work in a team environment. Student surveys showed positive
comments on learning quality engineering and management methods from this project when
compared to traditional methods. About 87% of students indicated that this project helped them
understand the concept of variation and the quality tools and techniques covered in class. In
addition, 90% agreed or strongly agreed that this project helped them understand the Six Sigma
DMAIC methodology. Students also indicated that they would like an opportunity to apply the
techniques learned in a manufacturing environment. To do this, the department’s machining,
fabrication, and plastics labs may be utilized in future studies using techniques such as gage
repeatability and reproducibility (GR&R) studies and design of experiments.
References
[1] Johnson, M. and Kuennen, E., "Basic Math Skills and Performance in an Introductory
Statistics Course," Journal of Statistics Education, Vol. 14, No. 2, 2006.
[3] Finney, S. and Schraw, G., "Self-efficacy beliefs in college statistics courses," Contemporary
Educational Psychology, Vol. 28, No. 2, 2003.
[4] Garfield, J. and Ahlgren, A., "Difficulties in Learning Basic Concepts in Probability and
Statistics: Implications for Research," Journal for Research in Mathematics Education,
Vol. 19, No. 1, pp. 44-63, 1988.
[5] National Research Council, Everybody Counts: A Report to the Nation on the Future of
Mathematics Education. Washington, D.C.: National Academy Press, 1989.
[6] Auster, C. J., "Probability sampling and inferential statistics: An interactive exercise using
M&M's," Teaching Sociology, No. 28, pp. 379-385, 2000.
[7] Helmericks, S., "Collaborative testing in social statistics: Toward gemeinstat," Teaching
Sociology, No. 21, pp. 287-297, 1993.
[8] Perkins, D. V. and Saris, R. N., "A 'jigsaw classroom' technique for undergraduate
statistics courses," Teaching of Psychology, No. 28, pp. 111-113, 2001.
[9] Potter, A. M., "Statistics for sociologists: Teaching techniques that work," Teaching
Sociology, No. 23, pp. 259-263, 1995.
[10] Wybraniec, J. and Wilmoth, J., "Teaching Students Inferential Statistics: A 'tail' of Three
Distributions," Teaching Sociology, No. 27, pp. 74-80, 1999.
[11] Schacht, S. P. and Stewart, B., "Interactive/user-friendly gimmicks for teaching statistics,"
Teaching Sociology, No. 20, pp. 329-332, 1992.
[13] Johnson, D., Johnson, R., and Smith, K., "Cooperative learning returns to college: What
evidence is there that it works?" Change: The Magazine of Higher Learning, Vol. 30, No. 4, pp.
26-35, 1998.
[14] Curran, E., Carlson, K., and Celotta, D., "Changing Attitudes and Facilitating
Understanding in the Undergraduate Statistics Classroom: A Collaborative Learning Approach,"
Journal of the Scholarship of Teaching and Learning, Vol. 13, No. 2, pp. 49-71, 2013.
[15] Barkley, E. F., Cross, K. P., and Major, C. H., Collaborative Learning Techniques: A
Handbook for College Faculty. San Francisco, CA: Jossey-Bass, 2005.
[16] Anderson-Cook, C., Hoerl, R., and Patterson, A., "A Structured Problem-solving
Course for Graduate Students: Exposing Students to Six Sigma as Part of their University
Training," Quality and Reliability Engineering International, Vol. 21, pp. 249-256, 2005.
[17] Castellano, J., Petrick, J., Vokurka, R., and Weinstein, L., "Integrating Six Sigma Concepts
in an MBA Quality Management Class," Journal of Education for Business, Vol. 83, pp. 233-238,
2008.
[18] Cudney, E. A. and Kanigolla, D., "Measuring the Impact of Project-Based Learning in Six
Sigma Education," Journal of Enterprise Transformation, Vol. 4, pp. 272-288, 2014.
[19] Kanigolla, D., Cudney, E. A., Corns, S. M., and Samaranayake, V. A., "Enhancing
Engineering Education Using Project-based Learning for Lean and Six Sigma," International
Journal of Lean Six Sigma, Vol. 5, No. 1, pp. 45-61, 2014.