Measure Phase: A. Deriving Measures
1. Deriving Measurements
2. Data Collection
4. Understanding Process
A. DERIVING MEASURES
The first and most basic question is: WHAT TO MEASURE? We are basically looking for the critical (CTx) characteristics of the product or process:
Critical to quality (CTQ): CTQs are the key measurable characteristics of a product or process whose performance standards must be met in order to satisfy the customer.
Critical to cost (CTC): They are similar to CTQs but deal exclusively with the impact of cost on the customer. CTQs and CTCs may be similar, yet they are not the same: a characteristic can be critical to quality without being critical to cost, and vice versa.
Critical to process (CTP): CTPs are typically the key process input variables.
Critical to safety (CTS): CTSs are stated customer needs regarding the safety of the product or process. Though a CTS could be treated as a CTQ or CTC, it is identified separately to place special emphasis on customer safety.
We can use a form like the following to define and select measures.
B. DATA COLLECTION
Data come from two sources:
1. Secondary Data: Data that already exist, collected earlier for some other purpose.
2. Primary Data: Data collected fresh for the purpose of the project.
SAMPLING
Sampling means selecting a few items out of a larger group in order to question,
examine, or test those items and then to draw conclusions about the entire group.
Sampling is appropriate when it is expensive, difficult, or would take too long to examine the entire group. For example:
Simple Random Sampling: Randomly picking up a desired number of units from the
population.
Gung Ho started the discussion. “I’ve got it figured out. For a margin of error of 5%, we’ll need a sample of 371. We’ll use a random number generator from the Internet to choose 371 random numbers between 1 and 10,458. Every employee already has a unique employee number, so the employees with those numbers would be our sample.”
Systematic Sampling: selecting every kth unit from the population after a random starting point.
Ima Thinker said, “That employee list is already randomly ordered by location, so we can sample it systematically. We’ll start with the fourth person on the list. 10,458 divided by 371 is about 28, so we’ll select every 28th person after that.”
Cluster Sampling: sampling method in which the entire population of the study is divided into groups, or clusters, and a random sample of those clusters is drawn.
Vera Practical said, “You don’t understand. In order to get more and better information, this survey must be done with face-to-face interviews using trained interviewers. With the methods you’re discussing, the travel cost for interviewers would be enormous. Instead, let’s treat the company locations as clusters and randomly select a number of them. Then we’ll randomly sample from those “cluster” locations. The sample size will have to be larger, but overall costs will be lower because much less travel is required.”
Stratified Sampling: sampling method in which distinct subgroups (strata) of the population are identified and in which the number of elements drawn from each stratum can be made proportional to the stratum’s share of the population.
Will Prevail spoke up. “We must ensure representative response from people at every level of experience. A simple random sample is not the most efficient way to do this. A clustered sample won’t do it either. Let’s divide all employees into four groups: under 5 years experience, 5 to 10 years experience, and so on. From each group, we’ll select random samples proportional to the number of people in the group. The statistician will determine the sample size. It will be less than 371.”
Another member suggested a shortcut: “Plenty of employees work right here in this building. We’ll just survey each one and have our answers by this afternoon.” Fortunately, the rest of the group persuaded him that the sample size of 371 was only valid if the selection was totally random. Besides, even if every employee in the building were surveyed, that sample would not be representative of the whole company: it would be a convenience sample.
In another case, a company conducted a customer survey by questioning the first 100 customers to telephone the call center. In this biased convenience sample, East Coast customers and early risers were overrepresented.
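The sample size of 371 quoted in the discussion can be reproduced with Cochran’s formula plus a finite population correction. This is a sketch, assuming a 95% confidence level and a 5% margin of error (values consistent with, but not stated in, the text):

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Cochran's formula with finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))

n = sample_size(10_458)
print(n)            # sample size for the 10,458 employees -> 371
print(10_458 // n)  # systematic sampling interval, about 28
```

The same numbers drive Ima Thinker’s systematic plan: the interval is the population divided by the sample size.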
Charts & Graphs: Bar Chart, Histogram, Pie Chart, Pareto Chart, Box Plot
1. DEFECTS PER UNIT (DPU)
Defects per unit (DPU) is the number of defects in a sample divided by the number of units sampled.
Your printing business prints custom stationery orders. Each order is considered a unit.
Fifty orders are randomly selected and inspected and the following defects are found.
Two orders are incomplete, and the remaining problem orders have other defects.
Six of the orders have problems and there are a total of 7 defects out of the 50 orders sampled; therefore DPU = 7/50 = 0.14. This is your average quality level: each order contains, on average, 0.14 defects.
2. DEFECTS PER OPPORTUNITY (DPO)
Defects per opportunity (DPO) is the number of defects in a sample divided by the total number of defect opportunities in the sample.
Each custom stationery order could have four defects - incorrect, typo, damaged, or incomplete. Therefore, each order has four opportunities, and the 50 sampled orders contain 200 opportunities.
Six of the orders have problems, and there are a total of 7 defects out of the 200 opportunities; therefore DPO = 7/200 = 0.035. DPO measures the number of defects at the opportunity level and is useful because you can compare processes of different complexity.
3. DEFECTS PER MILLION OPPORTUNITIES (DPMO)
Defects per million opportunities (DPMO) is the DPO multiplied by one million.
Each custom stationery order could have four defects - incorrect, typo, damaged, or incomplete. Therefore, each order has four opportunities. Fifty orders are randomly selected and inspected.
There are a total of 7 defects out of the 200 opportunities. Therefore, DPO = 0.035 and DPMO = 0.035 * 1,000,000 = 35,000. If your process remains at this defect rate over the time it takes to produce 1,000,000 orders, it will generate 35,000 defects.
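The three metrics can be computed directly from the sample counts. A minimal sketch using the numbers from the stationery example (7 defects, 50 orders, 4 opportunities per order):

```python
defects = 7
units = 50
opportunities_per_unit = 4

dpu = defects / units                               # defects per unit
dpo = defects / (units * opportunities_per_unit)    # defects per opportunity
dpmo = dpo * 1_000_000                              # defects per million opportunities

print(dpu, dpo, dpmo)   # 0.14 0.035 35000.0
```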
4. YIELD
Yield in Six Sigma is a classic process performance estimate. Calculate yield by dividing the number of defect-free units by the total number of units:
Yield = (defect-free units / total units) * 100%
Example: If 380 of 400 orders are completed without any defects, Yield = 380/400 = 95%.
5. THROUGHPUT YIELD
Throughput Yield is a Lean Six Sigma metric indicating the ability of the process to
produce defect-free units. The Throughput Yield (Yt) is calculated using the Defects per
Unit (DPU). As such, it provides more information than the classic Yield metric, which
considers the number of defective units rather than the total number of defects
occurring on those units. The classic yield estimate also often only considers defects
that are passed on to the customer, ignoring defects that are corrected (reworked), a source of hidden rework cost.
Throughput Yield is calculated from the DPU as Yt = e^(-DPU).
Example: For the stationery orders above, DPU = 0.14, so Yt = e^(-0.14) = 0.869; about 86.9% of orders flow through the process defect free.
The Defective Rate is the percent of units that have one or more defects on them. In
this example, there are 20 orders that have one or more critical defects in a random
sample of 400 orders, so the Defective Rate is 5%. (20 defectives/400 units = 0.05).
This corresponds to a Yield (the percent of units that have no defects) of 95%.
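To make the contrast between the classic Yield and the Throughput Yield concrete, the sketch below keeps the 20 defectives out of 400 orders from the example but assumes, purely for illustration, that those 20 orders carry 30 defects in total (a number not given in the text):

```python
import math

units = 400
defectives = 20        # orders with one or more defects (from the example)
total_defects = 30     # assumed for illustration; not stated in the text

classic_yield = 1 - defectives / units   # fraction of defect-free units
dpu = total_defects / units
throughput_yield = math.exp(-dpu)        # Yt = e^(-DPU)

print(classic_yield)                 # 0.95
print(round(throughput_yield, 4))    # lower than the classic yield
```

Because some defective orders carry multiple defects, Yt comes out below the 95% classic yield, which is exactly the extra information the metric provides.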
6. ROLLED THROUGHPUT YIELD
Rolled Throughput Yield (RTY) estimates the quality level after multiple steps in a process. If we calculate the Throughput Yield for n process steps, the Rolled Throughput Yield is the product of those n Throughput Yields.
For example, suppose there are six possible Critical to Quality steps required to process a customer order, with Throughput Yields of 99.7%, 99.5%, 95%, 89%, 92.3%, and 94.3%:
RTY = 0.997 x 0.995 x 0.95 x 0.89 x 0.923 x 0.943 = 0.73
Thus, only 73% of the orders will be processed defect free. It is interesting to see how bad the Throughput Yield will be, even though none of the CTQ steps are that bad. It all adds up! You can see that as processes become more complex (i.e., contain more CTQ steps), the Rolled Throughput Yield falls quickly.
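The rolled-up calculation can be sketched as follows. The sixth step’s yield is not legible in the source text; 94.3% is used here because it is the value that reproduces the stated 73% result:

```python
from math import prod

# Throughput Yields of the six CTQ steps; the last value is inferred,
# chosen so that the product matches the 73% figure in the text.
step_yields = [0.997, 0.995, 0.95, 0.89, 0.923, 0.943]

rty = prod(step_yields)   # Rolled Throughput Yield = product of step yields
print(round(rty, 2))      # 0.73
```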
7. SIGMA LEVEL
You can calculate sigma level from DPMO by consulting a conversion table.
To calculate sigma level with Excel, use the formula:
=ABS(NORMSINV(1-DPO)+1.5)
where DPO is the defects per opportunity and 1.5 is the conventional shift between short-term and long-term performance.
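The same conversion can be done outside Excel: NORMSINV is the inverse standard normal CDF. A sketch using Python’s standard library, applied to the DPO of 0.035 from the earlier example:

```python
from statistics import NormalDist

def sigma_level(dpo, shift=1.5):
    """Sigma level: inverse-normal of the defect-free rate plus the 1.5 shift."""
    return NormalDist().inv_cdf(1 - dpo) + shift

print(round(sigma_level(0.035), 2))   # about 3.31
```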
8. PROCESS CAPABILITY
Process capability represents the performance of the process over a period of stable operations. Process capability is quantified using the process capability formulas below. A process is said to be capable when the output always conforms to the specification limits.
The process capability index is a dimensionless number that is used to represent the ability to meet customer specification limits. This index compares the variability of a characteristic with the width of its specification limits:

Cp = (USL - LSL) / (6σ)

where USL and LSL are the upper and lower specification limits and σ is the process standard deviation.
However, the Cp index is limited in its use since it does not address the centering of a
process relative to the specification limits. For that reason, the Cpk was developed. It is
defined as Cpk = Min(CpkU, CpkL), where

CpkU = (USL - x̄) / (3σ)

and

CpkL = (x̄ - LSL) / (3σ)
Notice that the Cpk determines the proximity of the process average to the nearest
specification limit. Also note that at least one specification limit must be stated in order
to compute a Cpk value. Cpk values of 1.33 or 1.67 are commonly set as goals since
they provide some room for the process to drift left or right of the process nominal
setting. The Cpk can be easily converted to a sigma level using Sigma level = 3 × Cpk.
Example
An engine manufacturer uses a forging process to make piston rings. The quality
engineers want to assess the process capability. They collect 25 subgroups of five
piston rings and measure the diameters. The specification limits for piston ring
diameter are 74.0 mm ± 0.05 mm.
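For the piston-ring specification (74.0 mm ± 0.05 mm), the indices can be sketched as below. The sample mean and standard deviation are illustrative assumptions, since the measured subgroup data are not reproduced here:

```python
usl, lsl = 74.05, 73.95   # specification limits from the example (mm)
xbar = 74.002             # assumed sample mean (mm); not from the text
sigma = 0.01              # assumed standard deviation (mm); not from the text

cp = (usl - lsl) / (6 * sigma)          # spread vs. spec width
cpk_u = (usl - xbar) / (3 * sigma)      # distance to upper limit
cpk_l = (xbar - lsl) / (3 * sigma)      # distance to lower limit
cpk = min(cpk_u, cpk_l)                 # nearest-limit capability

print(round(cp, 2), round(cpk, 2))   # Cp and Cpk
print(round(3 * cpk, 1))             # sigma level = 3 * Cpk
```

Note how the off-center mean drags Cpk below Cp, which is exactly the limitation of Cp that motivates Cpk.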
C. MEASUREMENT SYSTEMS ANALYSIS
a) Accuracy
Accuracy concerns bias, the difference between an observed measurement result and a true value. For example, if one measures the length of 10 pieces of rope that range from 1 foot to 10 feet and always concludes that the length of each piece is 2 inches shorter than the true length, the measurement system has a bias of -2 inches. Stability refers to the change in bias of a measurement system over time and usage when that system is used to measure the same part or master.
b) Precision
Precision describes how close repeated measurements are to each other. Components of precision include:
Repeatability: the within-system variation when the conditions of measurement are fixed and defined (the same part, appraiser, instrument, and method).
Reproducibility: the variation that arises when measurement conditions change; the change can refer to any changes in the measurement system. For example, assume the same appraiser uses the same material, equipment, and environment but uses two different measurement methods. The reproducibility calculation will show the variation between the two methods.
Gage studies use statistical analyses to determine the amount of total variation that comes from the measurement process. Evaluate your measurement system before using control charts, capability analysis, or other analyses, to prove that your measurement system is accurate and precise.
A Type 1 gage study evaluates the combined effects of bias and repeatability from multiple measurements of one part. It should be performed before any other type of gage study.
Example: An engineer wants to evaluate whether a gage can measure parts consistently and accurately when the tolerance range is 0.0007.
A crossed gage R&R study has multiple operators measure multiple units a multiple number of times. For instance, an engineer selects 10 parts that represent the expected range of the process variation. Three operators measure the 10 parts, three times per part, in a random order. A blind approach is extremely desirable, so that the operators do not know that the part being measured is part of a special test. At a minimum, they should not know which of the test parts they are currently measuring. Then, analyze the variation in the study results to determine how much of it comes from differences in the operators, techniques, or the units themselves.
A gage linearity and bias study is used to assess the accuracy of your measurement device across its operating range.
For example, an engineer wants to assess the linearity and bias of a measurement gage
that is used to measure inner diameters of bearings. The engineer chooses five parts
that represent the expected range of measurements. Each part was measured by layout
inspection to determine its master measurement, and then one operator randomly
measured each part 12 times. The engineer performed a crossed gage R&R study
previously, using the ANOVA method, and determined that the total study variation is
16.5368.
Use an Attribute Gage Study (Analytic Method) to assess the amount of bias and repeatability of an attribute (go/no-go) measurement system.
Example:
An engineer wants to assess a go/no-go gage that is used for accepting or rejecting bolts. The engineer selects 10 parts that have known reference values and runs each part through the go/no-go gage 20 times. The engineer uses an attribute gage study to assess the bias and repeatability of the measurement system.
The system has a lower tolerance of −0.020 and an upper tolerance of 0.020.
Use an Attribute Agreement Analysis to assess the agreement of subjective nominal ratings or subjective ordinal ratings by multiple appraisers and to determine how likely the measurement system is to misclassify items. The ratings may involve:
Nominal data
Are categorical variables that have multiple levels of a characteristic with no natural
ordering, such as (for a study of food texture) crunchy, mushy, and crispy.
Ordinal data
Are categorical variables that have three or more levels of a characteristic with a natural
ordering, such as strongly disagree, disagree, neutral, agree, and strongly agree.
Does the appraiser agree with the known standard on all trials?
Do all appraisers agree with themselves (within appraiser) and with others (between appraisers)?
Do all appraisers agree with themselves, with others, and with the standard?
Example:
Fabric appraisers at a textile printing company rate the print quality of cotton fabric on
a 1 to 5 point scale. The quality engineer wants to assess the consistency and
correctness of the appraisers' ratings. The engineer asks four appraisers to rate the print quality of a set of fabric samples.
Because the data include a known standard for each sample, the quality engineer can assess the consistency and correctness of ratings compared to the standard as well as the consistency of each appraiser and the agreement between appraisers.
Sources:
1. The Certified Six Sigma Black Belt Handbook by Kubiak & Benbow
6. Minitab Blog