Measure Phase: A. Deriving Measures

The document outlines the Measure Phase of a Six Sigma process improvement project. It discusses key steps including deriving measures, data collection through sampling methods, understanding the current process using descriptive statistics and charts, and calculating current process performance using metrics like Defects per Unit, Defects per Opportunity, and Throughput Yield. The goal of the Measure Phase is to establish a quantitative understanding of the current process in order to identify improvement opportunities.


MEASURE PHASE

Measure Phase consists of:

1. Deriving Measurements

2. Data Collection

3. Measurement System Analysis

4. Understanding Process

5. Calculating Current Process Performance

A. DERIVING MEASURES

The first and the most basic question is WHAT TO MEASURE? We are basically

interested in measuring one or more of the following variables:

 Critical to quality (CTQ): CTQs translate broadly stated customer quality requirements into specific, measurable characteristics. CTQs can be physical measurements such as cycle time, height, weight, pressure, intensity, and radius.

 Critical to cost (CTC): They are similar to CTQs but deal exclusively with the impact of cost on the customer. CTQs and CTCs may be similar, yet stated by the customer for different reasons.

 Critical to process (CTP): CTPs are typically the key process input variables

(or independent variables)

 Critical to safety (CTS): CTSs are stated customer needs regarding the safety of the product or process. Though similar in form to CTQs and CTCs, they are distinguished by the importance the customer places on safety.


 Critical to Delivery (CTD): CTDs represent stated customer needs regarding delivery; on-time delivery is always appreciated.

We can use a form like the following to define and select measures:

Variable                               Measure 1         Measure 2   Measure 3   Measure 4
CTQ1 (e.g. on-time payment of dues)    Outstanding dues
CTQ2
CTP1
CTD1

B. DATA COLLECTION

Two types of data can be used:

1. Secondary Data: Data already collected.

2. Primary Data: Data collected fresh for the purpose of the project.

When the population is large, we may need to sample first.

SAMPLING

Sampling means selecting a few items out of a larger group in order to question,

examine, or test those items and then to draw conclusions about the entire group.

We need to use sampling:

 When you need to make conclusions about a large group, and . . .

 When it is expensive, difficult, or would take too long to examine the entire

group

For example:

 When monitoring the quality of manufactured items during or after production

 When examining documents during an audit of compliance with procedures


 When gathering employee or customer preferences or feedback

 When testing a new procedure before widespread introduction

Some types of sampling:

Simple Random Sampling: randomly selecting a desired number of units from the

population, with every unit equally likely to be chosen.

Gung Ho started the discussion. “I’ve got it figured out. For a margin of error of

5% and a 95% confidence level, an Internet sample-size calculator showed we

need a sample of 371. We’ll use a random number generator from the Internet

to choose 371 random numbers between 1 and 10,458. Every employee already

has a unique employee number. So the employees with those numbers would

form our sample.”
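Gung Ho's approach can be sketched in Python. The population size, sample size, and ID range are taken from the dialogue; the seed is arbitrary and fixed only so the draw is reproducible:

```python
import random

population_size = 10_458  # employees, each with a unique ID from 1 to 10,458
sample_size = 371         # from a sample-size calculator (95% confidence)

random.seed(42)  # arbitrary seed, so the draw can be reproduced
# Draw 371 distinct employee IDs, each equally likely to be chosen
sample_ids = random.sample(range(1, population_size + 1), k=sample_size)

print(len(sample_ids))  # 371 distinct IDs
```

`random.sample` draws without replacement, so no employee can be selected twice.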

Systematic Sampling: sample members from a larger population are selected

according to a random starting point but with a fixed, periodic interval.

Ima Thinker said, “That employee list is already randomly ordered by location,

department, gender—everything except date of employment. Why don’t we do

this: Randomly pick a starting point, say by throwing a die. If it comes up 4, we

start with the fourth person on the list. 10,458 divided by 371 is about 28, so we’ll

sample every 28th person on the list after that.”
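Ima's systematic scheme, as a sketch. The interval and population come from the dialogue; the die throw is fixed at 4, as in her example:

```python
population_size = 10_458
interval = 28   # 10,458 / 371 is about 28

start = 4       # Ima's example: the die came up 4
# Every 28th employee on the list, beginning at the starting point
sample_ids = list(range(start, population_size + 1, interval))

print(len(sample_ids))  # 374
```

Depending on the starting point, a systematic sample from this list contains 373 or 374 employees — close to, but not exactly, the 371 a random-sampling calculator suggests.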

Cluster Sampling: sampling method in which the entire population of the study is

divided into externally homogeneous, but internally heterogeneous, groups called

clusters.
Vera Practical said, “You don’t understand. In order to get more and better

information, this survey must be done with face-to-face interviews using trained

interviewers. With the methods you’re discussing, the travel cost for interviewers

will be prohibitive. Here’s an idea: Out of our 24 locations, we will randomly

select a number of them. Then we’ll randomly sample from those “cluster”

locations. The sample size will have to be larger, but overall costs will be lower

because of reduced travel. A statistician can determine the sample size

required.”

Proportional Stratified Sample : sampling method in which different strata in a

population are identified and in which the number of elements drawn from each

stratum is proportionate to the relative number of elements in each stratum.

Will Prevail spoke up. “We must ensure representative response from people

with different amounts of experience. A simple random or systematic sample is

not the most efficient way to do this. A clustered sample won’t do it either. Let’s

divide all employees into four groups: under 5 years experience, 5 to 10 years

experience, 10 to 20 years experience, over 20 years experience. Within each

group, we’ll select random samples proportional to the number of people in the

group. The statistician will determine the sample size. It will be less than 371.

And Vera Marie, we can use telephone interviews.”
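Will's proportional allocation can be sketched as follows. The four experience strata come from the dialogue, but the head-counts per stratum are illustrative assumptions (the text gives only the 10,458 total), as is reusing 371 as the overall sample size:

```python
strata = {              # hypothetical head-counts; only the 10,458 total is from the text
    "under 5 yrs": 4200,
    "5-10 yrs": 3100,
    "10-20 yrs": 2158,
    "over 20 yrs": 1000,
}
total = sum(strata.values())
sample_size = 371

# Each stratum's sample is proportional to its share of the population
allocation = {name: round(sample_size * n / total) for name, n in strata.items()}
print(allocation)
```

Within each stratum, the allocated number of employees would then be drawn by simple random sampling.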

Convenience Sampling: sampling technique where subjects are selected because of

their convenient accessibility


Manny Moneybags said, “Why waste time? There are more than 371 employees

here in this building. We’ll just survey each one and have our answers by this

afternoon.” Fortunately, the rest of the group persuaded him that the sample

size of 371 was only valid if the selection was totally random. Besides, even if

every employee in the building were surveyed, that sample would not be

representative of all employees. (Manny Moneybags once conducted a customer

survey by questioning the first 100 customers to telephone the call center. In

this biased convenience sample, East Coast customers and early risers were

overrepresented, and Internet customers were missed entirely.)

C. UNDERSTANDING THE PROCESS:

Descriptive Statistics: Measures of Central Tendency (Mean, Mode, Median),

Measures of Dispersion (Range, IQ Range, Standard Deviation)

Charts & Graphs: Bar Chart, Histogram, Pie Chart, Pareto Chart, Box Plot

D. CALCULATING CURRENT PROCESS PERFORMANCE

We can use a number of metrics to calculate process performance

1. DEFECTS PER UNIT (DPU)

Defects per unit (DPU) is the number of defects in a sample divided by the number of

units sampled.

Example of calculating DPU

Your printing business prints custom stationery orders. Each order is considered a unit.

Fifty orders are randomly selected and inspected and the following defects are found.
 Two orders are incomplete

 One order is both damaged and incorrect (2 defects)

 Three orders have typos

Six of the orders have problems, and there are a total of 7 defects out of the 50 orders

sampled; therefore DPU = 7/50 = 0.14. This is your quality level: on average, each

unit of product contains 0.14 defects.
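The DPU arithmetic above, restated in code with the example's numbers:

```python
# Defects found in the 50 sampled orders
defects = 2 + 2 + 3   # 2 incomplete + 2 on one order (damaged and incorrect) + 3 typos
units = 50

dpu = defects / units
print(dpu)  # 0.14
```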

2. DEFECTS PER OPPORTUNITY (DPO)

Defects per opportunity (DPO) is the number of defects in a sample divided by the total

number of defect opportunities.

Example of calculating DPO

Each custom stationery order could have four types of defect - incorrect, typo, damaged, or

incomplete. Therefore, each order has four opportunities. Fifty orders are randomly

selected and inspected and the following defects are found.

 Two orders are incomplete

 One order is both damaged and incorrect (2 defects)

 Three orders have typos

Six of the orders have problems, and there are a total of 7 defects out of the 200

opportunities (50 units * 4 opportunities / unit); therefore DPO = 7/200 = 0.035.

3. DEFECTS PER MILLION OPPORTUNITIES (DPMO)


Defects per million opportunities (DPMO) is the number of defects in a sample divided

by the total number of defect opportunities multiplied by 1 million. DPMO standardizes

the number of defects at the opportunity level and is useful because you can compare

processes with different complexities.

Example of calculating DPMO

Each custom stationery order could have four types of defect - incorrect, typo, damaged, or

incomplete. Therefore, each order has four opportunities. Fifty orders are randomly

selected and inspected and the following defects are found.

 Two orders are incomplete

 One order is both damaged and incorrect (2 defects)

 Three orders have typos

There are a total of 7 defects out of the 200 opportunities. Therefore, DPO = 0.035 and

DPMO = 0.035 * 1000000 = 35,000. If your process remains at this defect rate over

the time it takes to produce 1,000,000 orders, it will generate 35,000 defects.
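The DPO and DPMO calculations from the example, restated in code:

```python
defects = 7
units = 50
opportunities_per_unit = 4   # incorrect, typo, damaged, incomplete

dpo = defects / (units * opportunities_per_unit)
dpmo = dpo * 1_000_000
print(dpo, dpmo)  # 0.035 35000.0
```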

4. YIELD

Yield in Six Sigma is a classic process performance estimate. Calculate yield by using

the equation below.

 Yield = Output / Input = 100% − Scrap Rate

Example:

20 parts with critical errors in a random sample of 400 parts

Scrap Rate = 20/ 400 * 100% = 5%


Yield = 95 %
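The same yield calculation in code:

```python
scrapped = 20      # parts with critical errors
inspected = 400

scrap_rate = scrapped / inspected   # 0.05, i.e. 5%
process_yield = 1 - scrap_rate      # 0.95, i.e. 95%
print(f"{process_yield:.0%}")  # 95%
```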

5. THROUGHPUT YIELD

Throughput Yield is a Lean Six Sigma metric indicating the ability of the process to

produce defect-free units. The Throughput Yield (Yt) is calculated using the Defects per

Unit (DPU). As such, it provides more information than the classic Yield metric, which

considers the number of defective units rather than the total number of defects

occurring on those units. The classic yield estimate also often only considers defects

that are passed onto the customer, ignoring defects that are corrected (reworked), a

source of internal waste.

Example:

Suppose you find 60 errors across 6 critical characteristics on 20 orders in a random

sample of 400 orders.

The Defective Rate is the percent of units that have one or more defects on them. In

this example, there are 20 orders that have one or more critical defects in a random

sample of 400 orders, so the Defective Rate is 5%. (20 defectives/400 units = 0.05).

This corresponds to a Yield (the percent of units that have no defects) of 95%.

Defects Per Unit DPU = 60 defects / 400 units = 0.15 = 15%

Throughput Yield Yt = (1 – DPU) = (1 - 0.15) = 0.85 = 85%

Interpretation: On average, 85% of units will have no defects
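A sketch contrasting the classic Yield with Throughput Yield on the example's numbers. (Note that some references estimate Yt as e^(−DPU) under a Poisson assumption; the text's 1 − DPU form is a close approximation for small DPU — here e^(−0.15) ≈ 0.861.)

```python
defects = 60      # total defects found across the sample
defectives = 20   # units with one or more defects
units = 400

classic_yield = 1 - defectives / units   # 0.95: counts defective units only
dpu = defects / units                    # 0.15
yt = 1 - dpu                             # 0.85: accounts for every defect
print(classic_yield, dpu, yt)
```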

6. ROLLED THROUGHPUT YIELD


The Rolled Throughput Yield (Yrt) is a Lean Six Sigma metric that provides the expected

quality level after multiple steps in a process. If we calculate the Throughput Yield for n

process steps as Yt1, Yt2, Yt3,...,Ytn, then:

Yrt = Yt1 * Yt2 * Yt3 * ... * Ytn

 For example, suppose there are six possible Critical to Quality steps required to process

a customer order, with the Throughput Yields of 99.7%, 99.5%, 95%, 89%, 92.3%,

94%. The Rolled Throughput Yield is calculated as:

Yrt = .997 * .995 * .95 * .89 * .923 * .94 = .728

Thus, only 73% of the orders will be processed defect free. It is interesting to see how

bad the Throughput Yield will be, even though none of the CTQ steps are that bad. It

all adds up! You can see that as processes become more complex (i.e. more CTQ

steps), the defect rates can climb rather quickly.
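The rolled-throughput multiplication can be checked directly:

```python
from math import prod

step_yields = [0.997, 0.995, 0.95, 0.89, 0.923, 0.94]  # the six CTQ steps
yrt = prod(step_yields)
print(round(yrt, 3))  # 0.728
```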

7. SIGMA LEVEL

You can calculate sigma level from DPMO by consulting a conversion table.
To calculate sigma level with Excel, use the formula:

=ABS(NORMSINV(1-DPO)+1.5)
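The Excel formula translates directly to Python's standard library: NORMSINV is the inverse standard-normal CDF, and 1.5 is the conventional long-term sigma shift. Using the DPO from the DPMO example above (0.035):

```python
from statistics import NormalDist

dpo = 0.035  # from the DPMO example: 7 defects / 200 opportunities

# NORMSINV(1 - DPO) in Excel corresponds to NormalDist().inv_cdf(1 - dpo)
sigma_level = abs(NormalDist().inv_cdf(1 - dpo) + 1.5)
print(round(sigma_level, 2))  # 3.31
```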

8. PROCESS CAPABILITY

Process capability is defined as the inherent variability of a characteristic of a product. It

represents the performance of the process over a period of stable operations. Process

capability is expressed as 6σ̂, where σ̂ is the sample standard deviation of the

process under a state of statistical control. Note: σ̂ is often shown as σ in most

process capability formulas. A process is said to be capable when the output always

conforms to the process specifications.

The process capability index is a dimensionless number that is used to represent the

ability to meet customer specification limits. This index compares the variability of a

characteristic to the specification limits. Three basic process capability indices


commonly used are Cp, Cpk, and Cpm. The Cp index describes process capability in

relation to the specified tolerance of a characteristic divided by the natural process

variation for a process in a state of statistical control. The formula is given by

Cp = (USL − LSL) / 6σ
where

USL is upper specification limit

LSL is lower specification limit

6σ is the natural process variation

In simpler terms, Cp = Voice of Customer / Voice of Process = VOC / VOP = (USL − LSL) / 6σ

Occasionally, natural process variation is referred to as normal process variation,


natural process limits, or natural tolerance. Cp values greater than or equal to 1 indicate
the process is technically capable. A Cp value equal to 2 is said to represent 6σ
performance.

However, the Cp index is limited in its use since it does not address the centering of a
process relative to the specification limits. For that reason, the Cpk was developed. It is
defined as Cpk = Min(CpkU, CpkL) where
CpkU = (USL − x̄) / 3σ

and

CpkL = (x̄ − LSL) / 3σ

Notice that the Cpk determines the proximity of the process average to the nearest
specification limit. Also note that at least one specification limit must be stated in order
to compute a Cpk value. Cpk values of 1.33 or 1.67 are commonly set as goals since
they provide some room for the process to drift left or right of the process nominal
setting. The Cpk can be easily converted to a sigma level using Sigma level = 3Cpk
Example
An engine manufacturer uses a forging process to make piston rings. The quality
engineers want to assess the process capability. They collect 25 subgroups of five
piston rings and measure the diameters. The specification limits for piston ring
diameter are 74.0 mm ± 0.05 mm.
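The piston-ring example gives the specification limits (74.0 ± 0.05 mm) but not the measured data, so this sketch assumes an illustrative process mean and standard deviation purely to show the Cp/Cpk arithmetic:

```python
usl, lsl = 74.05, 73.95   # specification limits from the example (mm)

# Assumed values -- the example does not report the measured diameters
xbar, sigma = 74.01, 0.012

cp = (usl - lsl) / (6 * sigma)
cpk = min((usl - xbar) / (3 * sigma), (xbar - lsl) / (3 * sigma))
print(round(cp, 2), round(cpk, 2))  # 1.39 1.11
```

With these assumed values, Cp exceeds 1 (the spread fits the tolerance) but Cpk is lower because the process mean sits above the nominal 74.0 mm.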

E. MEASUREMENT SYSTEM ANALYSIS

E-1 MEASUREMENT SYSTEM

A measurement system consists of the following key components:


 Measurement instrument
 Appraiser(s) (also known as operators)
 Methods or procedures for conducting the measurement
 Environment

E-2 CHARACTERISTICS OF A MEASUREMENT SYSTEM

Two important characteristics of a measurement system are accuracy and precision.


a) Accuracy

It is the closeness of agreement between a measurement result and the true or

accepted reference value. The components of accuracy include:


Bias —This is a systematic difference between the mean of the test result or

measurement result and a true value. For example, if one measures the

length of 10 pieces of rope that range from 1 foot to 10 feet and always

concludes that the length of each piece is 2 inches shorter than the true

length, then the individual exhibits a bias of 2 inches.

Linearity —This is the difference in bias through the range of

measurements. A measurement system that has good linearity will have a

constant bias no matter the magnitude of measurement. In the previous

example, the range of measurement was from 1 foot to 10 feet with a

constant linear bias of 2 inches.

Stability (of a measurement system) —This represents the change in

bias of a measurement system over time and usage when that system is used

to measure a master part or standard. Thus, a stable measurement system is

one in which the variation is in statistical control, which is typically

demonstrated through the use of control charts.

b) Precision

It is the closeness of agreement between randomly selected individual measurements

or test results. It is this aspect of measurement that addresses repeatability or

consistency when an identical item is measured several times. The components of

precision include:

Repeatability– This is the precision under conditions where independent

measurement results are obtained with the same method on identical


measurement items by the same appraiser (that is, operator) using the same

equipment within a short period of time. Although misleading, repeatability is

often referred to as equipment variation (EV). It is also referred to as within-

system variation when the conditions of measurement are fixed and defined

(that is, equipment, appraiser, method, and environment).

Reproducibility—This is the precision under conditions where independent

measurement results are obtained with the same method on identical

measurement items with different operators using different equipment. Although

misleading, reproducibility is often referred to as appraiser variation (AV). The

term “appraiser variation” is used because it is common practice to have

different operators with identical measuring systems. Reproducibility, however,

can refer to any changes in the measurement system. For example, assume the

same appraiser uses the same material, equipment, and environment but uses

two different measurement methods. The reproducibility calculation will show the

variation due to a change in the method. It is also known as the average

variation between systems, or between conditions variation of measurement.


E-3 DEFINING MEASUREMENT SYSTEM ANALYSIS

Measurement systems analysis is a method for determining whether a measurement

system is acceptable. For a continuous response variable, use measurement system

analyses to determine the amount of total variation that is from the measurement

system. For an attribute response variable, use measurement system analyses to

evaluate the consistency and accuracy of appraisers.

A measurement system analysis is a critical component for any quality improvement

process. Evaluate your measurement system before using control charts, capability

analysis, or other analyses, to prove that your measurement system is accurate and

precise, and that your data are reliable.

E-4 GAUGE STUDIES FOR CONTINUOUS DATA

a) Type-1 Gage Study


A Type 1 gage study evaluates the capability of a measurement process by assessing

the combined effects of bias and repeatability from multiple measurements of

one part. It should be performed before any other type of gauge study.

Example:

An engineer wants to certify an ultrasonic measurement system that is used to

measure the thickness of a protective coating on painted doors. The engineer

obtains a reference sample with a known coating thickness of 0.025 inches. An

operator measures the reference panel 50 times.

The engineer wants to determine whether the measurement system can

measure parts consistently and accurately when the tolerance range is 0.0007.

See the Minitab demo.

b) Gage Repeatability and Reproducibility (R & R) Study

It is used to evaluate the performance of a test method or measurement system. Such

a study quantifies the capabilities and limitations of a measurement instrument, often

estimating its repeatability and reproducibility.

In a gage R&R study of measurement system, multiple operators measure multiple

units a multiple number of times. For instance, an engineer selects 10 parts that

represent the expected range of the process variation. Three operators measure the 10

parts, three times per part, in a random order. A blind approach is extremely

desirable, so that the operators do not know that the part being measured is part of a

special test. At a minimum, they should not know which of the test parts they are

currently measuring. Then, analyze the variation in the study results to determine how
much of it comes from differences in the operators, techniques, or the units

themselves.

See Minitab demo for further information.

c) Gage Linearity & Bias Study

It is used to assess the accuracy of your measurement device across its operating

range.

For example, an engineer wants to assess the linearity and bias of a measurement gage

that is used to measure inner diameters of bearings. The engineer chooses five parts

that represent the expected range of measurements. Each part was measured by layout

inspection to determine its master measurement, and then one operator randomly

measured each part 12 times. The engineer performed a crossed gage R&R study

previously, using the ANOVA method, and determined that the total study variation is

16.5368.

See Minitab demo for further information.

E-5 GAUGE STUDIES FOR ATTRIBUTE DATA

a) Attribute Gage Study (Analytic Method)

Use an attribute gage study (analytic method) to assess the amount of bias and repeatability

in an attribute gage that gives a binary response, such as pass or fail.

Example:

A manufacturing engineer assesses the automated attribute measurement system that

is used for accepting or rejecting bolts. The engineer selects 10 parts that have known
reference values and runs each part through a go/no-go gage 20 times. The engineer

records the number of acceptances for each part.

The engineer uses an attribute gage study to assess the bias and repeatability of the

measurement system, and to determine whether to improve the measurement system.

The system has a lower tolerance of −0.020 and an upper tolerance of 0.020.

See Minitab demo for further information.

b) Attribute Agreement Analysis

Use attribute agreement analyses to evaluate the agreement of subjective nominal

ratings or subjective ordinal ratings by multiple appraisers and to determine how likely

your measurement system is to misclassify a part.

Nominal data

Are categorical variables that have multiple levels of a characteristic with no natural

ordering, such as (for a study of food texture) crunchy, mushy, and crispy.

Ordinal data

Are categorical variables that have three or more levels of a characteristic with a natural

ordering, such as strongly disagree, disagree, neutral, agree, and strongly agree.

Use attribute agreement analysis to answer questions such as:

 Does the appraiser agree with himself on all trials?

 Does the appraiser agree with the known standard on all trials?

 Do all appraisers agree with themselves (within appraiser) and others (between

appraisers) on all trials?

 Do all appraisers agree with themselves, with others, and with the standard?
Example:

Fabric appraisers at a textile printing company rate the print quality of cotton fabric on

a 1 to 5 point scale. The quality engineer wants to assess the consistency and

correctness of the appraisers' ratings. The engineer asks four appraisers to rate print

quality on 50 samples of fabric twice, in random order.

Because the data include a known standard for each sample, the quality engineer can

assess the consistency and correctness of ratings compared to the standard as well as

compared to other appraisers.

See Minitab demo for further information.

Sources:

1. The Certified Six Sigma Black Belt Handbook by Kubiak & Benbow

2. Six Sigma for Organizational Excellence: A Statistical Approach by Murlidharan

3. Six Sigma + Lean Toolset by Meran et al.

4. Six Sigma Demystified by Paul Keller

5. The Quality Toolbox by Tague

6. Minitab Blog
