IE Lab Manual
JAIPUR
INDUSTRIAL ENGINEERING-I
LAB MANUAL
COMPILED BY
Mr. Suresh Sharma
Mr. Sohan Singh
INDEX
6ME8A: INDUSTRIAL ENGG LAB 1
MM 75
S NO. NAME OF EXPERIMENT REMARK
1 Case study on X bar charts
2 Case study on process capability analysis
3 Verify the Binomial distribution of the number of defective
balls by treating red-coloured balls as defective.
4 p-Chart: Plot a p-chart by taking a sample of n = 20 and
establish control limits
5 To plot C-chart using given experimental setup
6 Operating Characteristic Curve:
(a) Plot the operating characteristic curve for a single
sampling attribute plan for n = 20; c = 1, 2, 3. Designate
red balls as defective.
(b) Compare the actual O.C. curve with the theoretical O.C.
curve using an approximation for the nature of the distribution
7 Distribution Verification:
(a) Verification of Normal Distribution.
(b) To find the distribution of numbered cardboard chips
by randomly drawing one at a time with replacement. Make
25 subgroups of sizes 5 and 10, and find the type of distribution
of the sample average in each case. Comment on your
observations
8 Verification of Poisson distribution
9 Central Limit Theorem:
(a) To show that the sample means for a normal universe
follow a normal distribution
(b) To show that the sample means for a non-normal
universe also follow a normal Distribution.
10 Solve problems using available Statistical Process Control
software in lab
11 Study of Statistical process control techniques
12 Study of Quality function deployment
LAB ASSESSMENT CRITERIA:
6ME8A: INDUSTRIAL ENGG LAB 1
INTERNAL: 45 Marks
EXTERNAL: 30 Marks
For assessment of work done during the mid semester, the internal marks (60% component) are to be
distributed under the following heads:
Lab Objective
To understand and apply statistical quality control techniques: control charts, process capability analysis, probability distributions, and acceptance sampling.
EXPERIMENT NO. 1(X-BAR CHART)
OBJECT:
Case study on X-bar charts.
INTRODUCTION:
Control charts are used to analyze variation within processes. There are many different
flavours of control charts, categorized depending upon whether you are tracking variables
directly (e.g. height, weight, cost, temperature, density) or attributes of the entire process (e.g.
number of defective parts produced, proportion of defectives). The X-Bar control chart is one of
these flavours. It's used for variable data when the data is readily available. This is one of the
most commonly encountered control chart variants. The X-Bar chart shows how much variation
exists in the process over time.
A process that is in statistical control is predictable, and characterized by points that fall between
the lower and upper control limits. When an X-Bar chart is in statistical control, the average
value for each subgroup is consistent over time, and the variation within a subgroup is also
consistent. Control limits are not the same as specification limits, but both are important when
we are performing process analysis:
Control limits are characteristics of the process. They reflect the actual amount of variation
that is observed. We assume that the variation can be described by the normal distribution, which
means that 99.73% of all of our observations will fall somewhere between three standard
deviations below the mean (−3σ) and three standard deviations above the mean (+3σ). We use
this principle to set our control limits.
Specification limits give us a sense of the voice of the customer. They reflect the variation that
the customer is willing to accept from the process. We use the target value for our variable,
and the customer-specified tolerance around that variable to determine the specification
limits. (There is no connection between the tolerance that the customer specifies and the
observed standard deviation of the process.)
Two types of variation exist in a process: (a) variation due to chance causes and (b)
variation due to assignable causes. A process is out of control when assignable causes of
variation are present.
The information given by the control chart depends on the basis used for selection of subgroups;
therefore, careful determination of the subgroups is very important in setting up a control
chart. The following factors should be considered while selecting a subgroup:
1. Each sample should be as homogeneous as possible.
2. There should be maximum opportunity for variation from one sample to another.
3. Samples should not be taken at exactly equal intervals of time.
FREQUENCY OF SAMPLING
The process standard deviation is estimated from the average range: σ = R-BAR/d2
Control limits for the X-bar chart: UCL = X-double-bar + A2·R-BAR, LCL = X-double-bar − A2·R-BAR
Control limits for the Range chart: UCL = D4·R-BAR, LCL = D3·R-BAR
(The constants A2, d2, D3 and D4 depend on the subgroup size and are read from standard tables.)
EXAMPLE:
A quality control inspector at the Cocoa Fizz soft drink company has taken twenty-five samples
with four observations each of the volume of bottles filled. The data and the computed means are
shown in the table. If the standard deviation of the bottling operation is 0.14 ounces, use this
information
to develop control limits of three standard deviations for the bottling operation.
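Since the data table is not reproduced here, the limits can only be sketched; in the Python sketch below the grand mean of 16.0 oz is a hypothetical value, and only σ = 0.14 and n = 4 come from the example.

```python
import math

# Known parameters from the example; the grand mean is assumed
# (the sample-mean table is not reproduced here), purely for illustration.
sigma = 0.14       # known standard deviation of the bottling operation (oz)
n = 4              # observations per sample
grand_mean = 16.0  # hypothetical grand mean of the 25 sample means (oz)

# 3-sigma control limits for the X-bar chart with known sigma:
# UCL/LCL = grand_mean +/- 3 * sigma / sqrt(n)
half_width = 3 * sigma / math.sqrt(n)
ucl = grand_mean + half_width
lcl = grand_mean - half_width
print(round(ucl, 2), round(lcl, 2))  # 16.21 15.79
```

Any subgroup mean falling outside these limits would signal an assignable cause.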
QUESTIONS:
EXPERIMENT NO. 2 (PROCESS CAPABILITY ANALYSIS)
OBJECT:
Case study on process capability analysis.
INTRODUCTION:
Process capability compares the output of an in-control process to the specification limits by
using capability indices. The comparison is made by forming the ratio of the spread between
the process specifications (the specification "width") to the spread of the process values, as
measured by 6 process standard deviation units (the process "width").
Process Capability Indices
A process capability index uses both the process variability and the process specifications to
determine whether the process is "capable".
We are often required to compare the output of a stable process with the process specifications
and make a statement about how well the process meets specification. To do this we compare
the natural variability of a stable process with the process specification limits.
A process where almost all the measurements fall inside the specification limits is a capable
process. This can be represented pictorially by the plot below:
There are several statistics that can be used to measure the capability of a process: Cp, Cpk.
Most capability indices estimates are valid only if the sample size used is "large enough". Large
enough is generally thought to be about 50 independent data values.
The Cp, Cpk statistics assume that the population of data values is normally distributed.
Assuming a two-sided specification, if μ and σ are the mean and standard deviation, respectively,
of the normal data and USL, LSL, and T are the upper and lower specification limits and the
target value, respectively, then the population capability indices are defined as follows.
Cp = (USL − LSL) / (6σ)

Cpk = min[ (USL − μ) / (3σ), (μ − LSL) / (3σ) ]
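The two indices can be sketched in Python; the process mean, standard deviation, and specification limits below are hypothetical, chosen only to illustrate the formulas.

```python
def cp_cpk(mu, sigma, lsl, usl):
    """Population capability indices for a two-sided specification."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))
    return cp, cpk

# Hypothetical process: mean 10.1, sigma 0.1, specification 9.7 to 10.3.
# The process spread equals the specification width (Cp = 1), but the
# off-centre mean drags Cpk below 1.
cp, cpk = cp_cpk(10.1, 0.1, 9.7, 10.3)
print(round(cp, 2), round(cpk, 2))  # 1.0 0.67
```

Note that Cpk = Cp only when the process is exactly centred between the specification limits.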
As seen from the earlier discussions, there are three components of process capability: the
process variability, the process centering, and the specification limits.
A minimum of four possible outcomes can arise when the natural process variability is
compared with the specification width:
This process will produce conforming products as long as it remains in statistical control. The
process owner can claim that the customer should experience the least difficulty and greater
reliability with this product. This should translate into higher profits.
This process will produce greater than 64 ppm but less than 2700
non-conforming ppm. This process has a spread just about equal to specification width. It should
be noted that if the process mean moves to the left or the right, a significant portion of product
will start falling outside one of the specification limits. This process must be closely monitored.
It is impossible for the current process to meet specifications even when it is in statistical control.
If the specifications are realistic, an effort must be immediately made to improve the process (i.e.
reduce variation) to the point where it is capable of producing consistently within specifications.
This process will also produce more than 2700 non-conforming ppm.
The variability (s) and specification width is assumed to be the same as in case 2, but the process
average is off-center. In such cases, adjustment is required to move the process mean back to
target. If no action is taken, a substantial portion of the output will fall outside the specification
limit even though the process might be in statistical control.
CONCLUSION
In the real world, very few processes completely satisfy all the conditions and assumptions
required for estimating Cpk. Also, statistical debates in research communities are still raging on
the strengths and weaknesses of various capability and performance indices. Many new
complicated capability indices have also been invented and cited in literature. However, the key
to effectual use of process capability measures continues to be the level of user understanding of
what these measures really represent. Finally, in order to achieve continuous improvement, one
must always attempt to refine the "Voice of the Process" to match and then to surpass the
"Expectations of the Customer".
QUESTIONS:
EXPERIMENT NO. 3 (BINOMIAL DISTRIBUTION)
OBJECT:
To verify the Binomial distribution of the number of defective balls by treating red-coloured
balls as defective.
INTRODUCTION:
A successful outcome doesn't mean that it's a favorable outcome, but just the outcome being
counted. Let's say a discrete random event was the number of persons shot by firearms last year.
We'd be looking for the probability of obtaining some number of victims out of the pool of
shootings. Being shot is neither a favorable nor a successful outcome for the victim, yet it is the
outcome we are counting for this discrete variable.
The binomial distribution is used to model the probabilities of occurrences when specific rules
are met.
Rule 1: There are only two mutually exclusive outcomes for a discrete random variable
(i.e., success or failure).
Rule 2: There is a fixed number of repeated trials (i.e., successive tests with no outcome
excluded).
Rule 3: Each trial is an independent event (meaning the result of one trial doesn't affect
the results of subsequent trials).
Rule 4: The probability of success for each trial is fixed (i.e., the probability of obtaining
a successful outcome is the same for all trials).
The probability that a random variable X with binomial distribution B(n, p) is equal to the value k,
where k = 0, 1, ..., n, is given by
P(X = k) = C(n, k) p^k (1 − p)^(n − k), where C(n, k) = n! / (k!(n − k)!) is the binomial coefficient.
The binomial distribution for a random variable X with parameters n and p represents the sum of
n independent variables Z which may assume the values 0 or 1. If the probability that each Z
variable assumes the value 1 is equal to p, then the mean of each variable is equal to
1·p + 0·(1 − p) = p, and the variance is equal to p(1 − p). By the addition properties for
independent random variables, the mean and variance of the binomial distribution are equal to
the sum of the means and variances of the n independent Z variables, so
E(X) = np and Var(X) = np(1 − p).
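These moment formulas can be checked numerically. In the sketch below, the values n = 20 draws with a 10% chance of a red (defective) ball are assumed for illustration; they are not taken from the experiment's data.

```python
from math import comb

n, p = 20, 0.1  # hypothetical: 20 balls drawn, 10% red (defective)

# Binomial pmf: P(X = k) = C(n, k) p^k (1-p)^(n-k)
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

mean = sum(k * q for k, q in enumerate(pmf))
var = sum((k - mean)**2 * q for k, q in enumerate(pmf))

# The computed moments match n*p and n*p*(1-p), as stated above.
print(round(mean, 6), round(var, 6))  # 2.0 1.8
```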
This exercise set contains some solved exercises on the Binomial distribution. The theory needed
to solve these exercises is introduced in the lecture entitled Binomial distribution.
Exercise 1:
Suppose you independently flip a coin times and the outcome of each toss can be either head
(with probability ) or tails (also with probability ). What is the probability of obtaining exactly
tails?
Solution
Denote by X the number of times the outcome is tails. X has a binomial distribution, and the
required probability can be computed from the binomial formula above.
EXPERIMENT NO. 4 (p-CHART)
OBJECT:
To plot a p-chart by taking a sample of n = 20 and establish control limits.
INTRODUCTION:
The day’s production of any manufactured article or part can be thought of as a sample
from a larger quantity with some unknown fraction defective. This unknown universe fraction
defective depends upon a complete set of causes influencing the production and inspection
operations.
P-charts are used to measure the proportion defective in a sample. This chart is called a
control chart for attributes. The computation of the center line as well as the upper and lower
control limits is similar to the computation for the other kinds of control charts. The center line is
computed as the average proportion defective in the population. This is obtained by taking a
number of samples of observations at random and computing the average value of p across all
samples. This chart is based on the binomial distribution.
1) To discover the average proportion of defective articles submitted for inspection, over a
period of time.
2) To bring to attention of the management, any changes in average quality level.
3) To discover, identify and correct causes of bad quality.
4) To discover, identify and correct the erratic causes of quality improvement.
In the case of a p-chart, the sample size must be fairly large so that the
normal approximation to the binomial distribution holds to a sufficient degree. If p is small, then n
should be correspondingly large. Generally the sample size should not be less than 100.
1) Record the data for each subgroup (sample) on number inspected and number of
defectives.
2) Compute p (fraction defective) for each subgroup (sample):
p = number of defectives in sample / number inspected in subgroup (= np/n)
To construct the upper and lower control limits for a p-chart, we use the following formulas:
UCL = p-bar + 3·√(p-bar(1 − p-bar)/n)
LCL = p-bar − 3·√(p-bar(1 − p-bar)/n)
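The p-chart limits can be sketched from inspection data; the subgroup counts below are invented for illustration only.

```python
import math

# Hypothetical inspection data: defectives in 10 subgroups of n = 100 each
n = 100
defectives = [4, 2, 5, 3, 6, 2, 4, 3, 5, 6]

p_bar = sum(defectives) / (n * len(defectives))  # centre line (average fraction defective)
half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
ucl = p_bar + half_width
lcl = max(0.0, p_bar - half_width)  # a negative lower limit is truncated to zero

print(round(p_bar, 3), round(ucl, 3), round(lcl, 3))  # 0.04 0.099 0.0
```

A subgroup fraction defective above the UCL would indicate an assignable cause of bad quality.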
EXPERIMENT NO. 5 (c-CHART)
OBJECT:
To plot a c-chart using the given experimental setup.
INTRODUCTION
A nonconforming item is a unit of product that does not satisfy one or more of the specifications
for that product. Each specific point at which a specification is not satisfied results in a defect or
non-conformity. A nonconforming item will contain at least one nonconformity. It is quite
possible for a unit to contain several nonconformities. It is possible to develop control charts for
either the total number of nonconformities in a unit or the average number of
nonconformities per unit. These control charts usually assume that the occurrence of
nonconformities in samples of constant size is well modeled by the Poisson distribution.
EX: include the number of defective welds in 100 m of oil pipeline, the number of broken rivets
in an aircraft wing, the number of functional defects in an electronic logic device, the number of
errors on a document
1. The first condition specifies that the areas of opportunity for occurrence of defects
should be fairly constant from period to period.
2. The second condition specifies that opportunities for defects are large, while the chances of
a defect occurring in any one spot are small.
Defects or nonconformities occur in this inspection unit according to the Poisson distribution;
that is,
p(x) = e^(−c) c^x / x!, x = 0, 1, 2, ...
where x is the number of nonconformities and c > 0 is the parameter of the Poisson distribution.
A control chart for nonconformities, or c-chart, with three-sigma limits would be defined as
follows:
UCL = c-bar + 3·√(c-bar)
CL = c-bar
LCL = c-bar − 3·√(c-bar)
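A minimal sketch of the c-chart limits; the nonconformity counts below are invented for illustration.

```python
import math

# Hypothetical counts of nonconformities in 10 inspection units
counts = [3, 5, 2, 6, 4, 3, 7, 2, 4, 4]

c_bar = sum(counts) / len(counts)            # centre line
ucl = c_bar + 3 * math.sqrt(c_bar)           # c-bar + 3*sqrt(c-bar)
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar)) # truncated at zero

print(c_bar, round(ucl, 2), round(lcl, 2))  # 4.0 10.0 0.0
```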
AREA OF APPLICATION
Example:
Answer:
The control chart is shown in Fig. The number of observed nonconformities from the preliminary
samples is plotted on this chart. Two points plot outside the control limits, samples 6 and 20.
Investigation of sample 6 revealed that a new inspector had examined the boards in this sample
and that he did not recognize several of the types of nonconformities that could have been
present. Furthermore, the unusually large number of nonconformities in sample 20 resulted from
a temperature control problem in the wave soldering machine, which was subsequently repaired.
Therefore, it seems reasonable to exclude these two samples and revise the trial control limits.
The estimate of c-bar is now computed excluding these two samples.
QUESTIONS
1. Which chart is used for number of imperfections observed in a cloth per unit area?
2. What is difference between p-chart and c-chart?
3. What is difference between chart for defective and chart for defects?
4. What are the areas of applications of c-chart?
5. State the conditions for use C-chart.
6. What type of sample size should be considered for use of C-chart?
EXPERIMENT NO. 6 (OC Curve)
OBJECT:
(a) Plot the operating characteristic curve for a single sampling attribute plan for n = 20; c = 1, 2, 3.
Designate red balls as defective.
(b) Compare the actual O.C. curve with theoretical O.C. curve using approximation for the
nature of distribution.
INTRODUCTION:
The specified sampling plan may be single, double, or sequential, and may use a
particular sample size depending upon the demands of the project; it yields a result
of acceptance or rejection based on a specified criterion. The α and β risks referred to earlier are
shown on the OC curve below.
Let a lot of size N be submitted for inspection, with sample size n.
The operating characteristic (OC) curve plots the probability of accepting the lot (Pa) against the
lot fraction defective (p).
Calculations:
The probability of acceptance for different values of p, fraction defective are shown in the
following table.
The ideal OC curve which has perfect discriminatory power is as shown below.
The OC curve shows the discriminatory power of the sampling plan. In the plan n = 100, c = 2, if the
lots are 2% defective, Pa is approximately 0.68. This means that out of 100 lots, about 68 are
expected to be accepted and 32 to be rejected.
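The acceptance probability is the cumulative binomial probability P(X ≤ c), which can be checked directly; a sketch:

```python
from math import comb

def pa(n, c, p):
    """Probability of accepting a lot: P(X <= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Plan n = 100, c = 2, evaluated at a lot fraction defective of 2%
print(round(pa(100, 2, 0.02), 3))  # 0.677
```

Evaluating pa over a range of p values (say 0.0 to 0.10) traces out the whole OC curve for the plan.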
Comparison of two different OC curves with regard to their discriminatory power is done in the
following.
The following figure shows the effect of sample size on the OC curves. It is noted that the
discriminatory power of the sampling plan increases with sample size.
Effect of c on the OC curve: This is shown in the following figure
The Operating Characteristic (OC) curve describes the probability of accepting a lot as a function
of the lot per cent nonconforming.
The first thing to notice about the OC curve in Figure 1 is the shape; the curve is not a straight
line. Notice the roughly “S” shape. As the lot per cent nonconforming increases, the probability
of acceptance decreases, just as you would expect. Historically, acceptance sampling is part of
the process between a part’s producer and consumer.
To help determine the quality of a process (or lot) the producer or consumer can take a sample
instead of inspecting the full lot. Sampling reduces costs, because one needs to inspect or test
fewer items than looking at the whole lot. Sampling is based on the idea that the lots come from
a process that has a certain non-conformance rate (but there is another view described below).
The concept is that the consumer will accept all the producer’s lots as long as the process percent
nonconforming is below a prescribed level. This produces the, so called, ideal OC curve shown
in Figure 2. When the process per-cent nonconforming is below the prescribed level, 4.0% in this
example, the probability of acceptance is 100%. For quality worse than this level, higher than
4%, the probability of acceptance immediately drops to 0%. The dividing line between 100% and
0% acceptance is called the Acceptable Quality Level (AQL).
The original idea for sampling takes a simple random sample, of n units, from the lot. If the
number of nonconforming items is below a prescribed number, called the acceptance number and
denoted c, we accept the lot. If the sample contains more nonconforming items, we reject the lot.
The only way to realize the ideal OC curve is 100% inspection. With sampling, we can come
close. In general, as the sample size increases, keeping the acceptance number proportional, the
OC curve approaches the ideal.
CONCLUSIONS:
QUESTIONS:
EXPERIMENT NO. 7 (DISTRIBUTION VERIFICATION)
OBJECT:
(a) Verification of Normal Distribution.
(b) To find the distribution of numbered cardboard chips by randomly drawing one at a time with
replacement. Make 25 subgroups of sizes 5 and 10, and find the type of distribution of the sample
average in each case. Comment on your observations.
INTRODUCTION:
The normal distribution is the most important and most widely used distribution in statistics.
It is sometimes called the "bell curve," although the tonal qualities of such a bell would be
less than pleasing. It is also called the "Gaussian curve" after the mathematician Carl
Friedrich Gauss.
A random variable X is said to be normally distributed with mean µ and variance σ² if its
probability density function (pdf) is
f(x; µ, σ²) = (1 / (σ√(2π))) e^(−(x − µ)² / (2σ²)),  −∞ < x < +∞
Example: We are given the following information: µ = 450, σ = 25 Find the following: P(X >
475) and P(460 < X < 470) by using normal distribution.
Answer:
First, we compute the Z value for X = 475 as follows:
Z = (X − µ)/σ = (475 − 450)/25 = 1
A Z value of 1 means that X is located exactly one standard deviation to the right of the mean.
We need to find the area of the normal curve that corresponds to this Z value. Consult the
Normal Distribution Table to find an area of 0.84134 that corresponds to Z = 1. We want to find
P(X > 475), so we need the area to the right of X:
P(X > 475) = 1 − 0.84134 = 0.15866
This is a 2-step procedure where we find P(X < 470) and P(X < 460) and then compute the
difference.
For reference, the formula for the Z value is Z = (X − µ)/σ. First, we apply that
formula to find the Z value for X = 470 as follows:
Z = (470 − 450)/25 = 0.8
We consult the Normal Distribution Table to find the area of 0.78814 that corresponds to Z =
0.8.
Second, we apply that formula to find the Z value for X = 460 as follows:
Z = (460 − 450)/25 = 0.4
We consult the Normal Distribution Table to find the area of 0.65542 that corresponds to Z =
0.4.
Finally, we compute the difference between the 2 areas as follows: P(460 < X < 470) = P(X <
470) − P(X < 460) = 0.78814 − 0.65542 = 0.13272.
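The table look-ups above can be verified numerically via the standard normal CDF, expressed with the error function:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma = 450, 25

p_right = 1 - phi((475 - mu) / sigma)                          # P(X > 475)
p_between = phi((470 - mu) / sigma) - phi((460 - mu) / sigma)  # P(460 < X < 470)

print(round(p_right, 5), round(p_between, 5))  # 0.15866 0.13272
```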
QUESTIONS:
EXPERIMENT NO. 8 (POISSON DISTRIBUTION)
OBJECT:
To verify the Poisson distribution.
INTRODUCTION:
The Poisson distribution can be used to calculate the probabilities of various numbers of
"successes" based on the mean number of successes. In order to apply the Poisson
distribution, the various events must be independent. Keep in mind that the term "success"
does not really mean success in the traditional positive sense. It just means that the outcome in
question occurs.
The Poisson distribution is an appropriate model if the following assumptions are true.
X is the number of times an event occurs in an interval and X can take values 0, 1, 2, …
The occurrence of one event does not affect the probability that a second event will
occur. That is, events occur independently.
The rate at which events occur is constant. The rate cannot be higher in some intervals
and lower in other intervals.
Two events cannot occur at exactly the same instant.
The probability of an event in an interval is proportional to the length of the interval.
If these conditions are true, then X is a Poisson random variable, and the distribution of X is a
Poisson distribution.
If μ is the average number of successes occurring in a given time interval or region in the
Poisson distribution, then the mean and the variance of the Poisson distribution are both equal to
μ.
E(X) = μ
and
V(X) = σ2 = μ
APPLICATIONS
the number of deaths by horse kicking in the Prussian army (first application)
birth defects and genetic mutations
rare diseases (like Leukemia, but not AIDS because it is infectious and so not
independent) - especially in legal cases
car accidents
traffic flow and ideal gap distance
number of typing errors on a page
hairs found in McDonald's hamburgers
spread of an endangered animal in Africa
failure of a machine in one month
EXAMPLE:
A life insurance salesman sells on the average 3 life insurance policies per week. Use Poisson's
law to calculate the probability that in a given week he will sell
a. Some policies
b. 2 or more policies but less than 5 policies.
c. Assuming that there are 5 working days per week, what is the probability that in a given
day he will sell one policy?
ANSWER:
(a) "Some policies" means "1 or more policies". We can work this out by finding 1 minus the
"zero policies" probability:
P(X ≥ 1) = 1 − P(X = 0) = 1 − e^(−3) ≈ 0.95
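All three parts of the example can be computed from the Poisson pmf; a sketch:

```python
from math import exp, factorial

def pois(k, lam):
    """Poisson pmf: P(X = k) for a mean of lam events per interval."""
    return exp(-lam) * lam**k / factorial(k)

lam = 3  # average policies sold per week

p_some = 1 - pois(0, lam)                        # (a) one or more policies
p_2_to_4 = sum(pois(k, lam) for k in (2, 3, 4))  # (b) 2 or more but less than 5
p_one_daily = pois(1, lam / 5)                   # (c) one policy in a day (lam = 0.6/day)

print(round(p_some, 4), round(p_2_to_4, 4), round(p_one_daily, 4))  # 0.9502 0.6161 0.3293
```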
QUESTIONS:
1. Under what conditions do the binomial and Poisson distributions give approximately the
same results?
2. Which probability distribution applies when the probability of an outcome is very small over a
very small period of time?
3. If the outcomes of a discrete random variable follow a Poisson distribution, then what is
the relation between mean and variance?
4. For what type of data is the Poisson distribution used?
EXPERIMENT NO. 9 (CLT)
OBJECT:
(a) To show that the sample means for a normal universe follow a normal distribution
(b) To show that the sample means for a non-normal universe also follow a normal Distribution.
INTRODUCTION:
The central limit theorem (CLT) is a statistical theory that states that given a sufficiently
large sample size from a population with a finite level of variance, the mean of all samples from
the same population will be approximately equal to the mean of the population. Furthermore, all
of the samples will follow an approximate normal distribution pattern, with all variances being
approximately equal to the variance of the population divided by each sample's size.
The Central Limit Theorem tells us that with a large enough sample (50 or more), most of the
sample means will be close to the population mean. You also know that the sampling distribution
of the mean is approximately normally distributed and you know a lot about the characteristics of
normal curves.
Sample mean
The sample mean x-bar from a group of observations is an estimate of the population mean µ.
Given a sample of size n, consider n independent random variables X1, X2, ..., Xn, each
corresponding to one randomly selected observation. Each of these variables has the distribution
of the population, with mean µ and standard deviation σ. The sample mean is defined to be
x-bar = (X1 + X2 + ... + Xn)/n.
By the properties of means and variances of random variables, the mean and standard deviation
of the sample mean are the following:
µx-bar = µ and σx-bar = σ/√n, if n/N ≤ 0.05
If this condition is not satisfied, we use the following formula to calculate σx-bar:
σx-bar = (σ/√n)·√((N − n)/(N − 1))
The sampling distribution of the sample mean is normal, with mean µ and standard deviation σ/√n.
If x-bar is the mean of a random sample X1, X2, ..., Xn of size n from a distribution with a finite
mean µ and a finite positive variance σ², then the distribution of
W = (x-bar − µ)/(σ/√n)
is N(0, 1) in the limit as n approaches infinity. This means that x-bar is approximately
distributed N(µ, σ²/n).
Example:
In a recent SAT test, the mean score for all examinees was 1020. Assume that the distribution
of SAT scores of all examinees is normal with a mean of 1020 and a standard deviation of 153.
Let x-bar be the mean SAT score of a random sample of examinees. Calculate the mean and
standard deviation of x-bar and describe the shape of its sampling distribution when the sample
size is
(a) 16 (b) 50
Answer:
Let µ and σ be the mean and standard deviation of SAT scores of all examinees. Then µ = 1020
and σ = 153.
(a) For n = 16, the mean and standard deviation of x-bar are
µx-bar = µ = 1020 and σx-bar = σ/√n = 153/√16 = 38.25
Because the SAT scores of all examinees are assumed to be normally distributed, the sampling
distribution of x-bar for samples of 16 examinees is also normal. Figure 1 shows the population
distribution and the sampling distribution of x-bar. Note that because σ is greater than σx-bar, the
population distribution has a wider spread and less height than the sampling distribution of x-bar
in Figure 1.
Figure 1
(b) For n = 50, the mean and standard deviation of x-bar are
µx-bar = µ = 1020 and σx-bar = σ/√n = 153/√50 ≈ 21.64
Again, because the SAT scores of all examinees are assumed to be normally distributed, the
sampling distribution of x-bar for samples of 50 examinees is also normal. The population
distribution and the sampling distribution of x-bar are shown in Figure 2.
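The standard errors above follow directly from σx-bar = σ/√n and can be verified numerically:

```python
import math

mu, sigma = 1020, 153  # population mean and standard deviation of SAT scores

for n in (16, 50):
    se = sigma / math.sqrt(n)  # standard deviation of x-bar
    print(n, mu, round(se, 2))
# 16 1020 38.25
# 50 1020 21.64
```

The mean of x-bar stays at µ while the spread shrinks as the sample size grows.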
(b) SAMPLING FROM A POPULATION THAT IS NOT NORMALLY DISTRIBUTED
Most of the time the population from which the samples are selected is not normally distributed.
In such cases, the shape of the sampling distribution of x-bar is inferred from a very important
theorem called the central limit theorem.
Note that when the population does not have a normal distribution, the shape of the sampling
distribution is not exactly normal but is approximately normal for a large sample size. The
approximation becomes more accurate as the sample size increases. Another point to remember
is that the central limit theorem applies to large samples only. Usually, if the sample size is 30 or
more, it is considered sufficiently large to apply the central limit theorem to the sampling
distribution of x-bar. Thus, according to the central limit theorem,
1. When n ≥30, the shape of the sampling distribution of x-bar is approximately normal
irrespective of the shape of the population distribution.
2. The mean of x-bar, µx-bar is equal to the mean of the population, µ.
3. The standard deviation of x-bar, σx-bar, is equal to σ/√n.
Again, remember that for σx-bar = σ/√n to apply, n/N must be less than or equal to 0.05.
Figure 3a shows the probability distribution curve for a population. The distribution curves in
Figure 3b through Figure 3e show the sampling distributions of x-bar for different sample sizes
taken from the population of Figure 3a. As we can observe, the population is not normally
distributed. The sampling distributions of x-bar shown in parts b and c, when n < 30, are not
normal. However, the sampling distributions of x-bar shown in parts d and e, when n ≥30, are
(approximately) normal. Also notice that the spread of the sampling distribution of x-bar decreases
as the sample size increases.
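The behaviour described above can be illustrated by simulation: drawing repeated samples from a clearly skewed (exponential) population and examining their means. This sketch uses only Python's standard library; the seed and the number of samples are arbitrary choices.

```python
import random
import statistics

random.seed(1)

# Non-normal population: exponential with mean 1 (and standard deviation 1)
pop_mean = 1.0
n = 36  # sample size (>= 30, so the CLT applies)

# 2000 sample means, each from an independent sample of size n
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(2000)]

# The sampling distribution centres on the population mean, with spread
# close to the predicted sigma/sqrt(n) = 1/6, despite the skewed population.
print(round(statistics.fmean(means), 2), round(statistics.stdev(means), 2))
```

Repeating the simulation with n = 5 instead of 36 shows a visibly more skewed distribution of means, matching the discussion of parts b and c of Figure 3.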
Example:
The mean rent paid by all tenants in a large city is $1550 with a standard deviation of $225.
However, the population distribution of rents for all tenants in this city is skewed to the right.
Calculate the mean and standard deviation of x-bar and describe the shape of its sampling
distribution when the sample size is (a) 30 (b) 100
Answer:
Although the population distribution of rents paid by all tenants is not normal, in each case the
sample size is large (n ≥30). Hence, the central limit theorem can be applied to infer the shape of
the sampling distribution of x-bar.
(a) Let x-bar be the mean rent paid by a sample of 30 tenants. Then, the sampling
distribution of x-bar is approximately normal, with mean and standard deviation
µx-bar = $1550 and σx-bar = 225/√30 ≈ $41.08
Figure 4 shows the population distribution and the sampling distribution of x-bar.
(b) Let x-bar be the mean rent paid by a sample of 100 tenants. Then, the sampling
distribution of x-bar is approximately normal, with mean and standard deviation
µx-bar = $1550 and σx-bar = 225/√100 = $22.50
Figure 5 shows the population distribution and the sampling distribution of x-bar.
QUESTIONS:
1. What condition or conditions must hold true for the sampling distribution of the sample
mean to be normal when the sample size is less than 30?
2. Explain the central limit theorem.
3. What is the standard deviation of the sampling distribution of x-bar equal to? Assume n/N
≤ 0.05.
4. How does the value of σx-bar change as the sample size increases?
5. Why is the central limit theorem important in statistics?
EXPERIMENT NO. 10 (Software)
OBJECT:
To solve problems using the available Statistical Process Control software in the lab.
INTRODUCTION:
Statistical Process Control (SPC) is an industry-standard methodology for
measuring and controlling quality during the manufacturing process. Quality data in the form of
Product or Process measurements are obtained in real-time during manufacturing. This data is
then plotted on a graph with pre-determined control limits. Control limits are determined by the
capability of the process, whereas specification limits are determined by the client's needs.
Data that falls within the control limits indicates that everything is operating as expected. Any
variation within the control limits is likely due to a common cause—the natural variation that is
expected as part of the process. If data falls outside of the control limits, this indicates that an
assignable cause is likely the source of the product variation, and something within the process
should be changed to fix the issue before defects occur.
With real-time SPC you can:
Dramatically reduce variability and scrap
Scientifically improve productivity
Reduce costs
Uncover hidden process personalities
Instantly react to process changes
Make real-time decisions on the shop floor
1. SQCpack:
SQCpack is a proven statistical process control solution that helps
organizations utilize the power of data analysis to drive strategic quality outcomes. Combining
powerful SPC techniques with flexibility, SQCpack is an easy and scalable application that
includes all the tools needed to optimize process performance, comply with critical quality
standards, reduce variability, and improve profitability.
2. Minitab 17:
Analyze your data and improve your products and services with the leading
statistical software used for quality improvement worldwide. The car you drive. The medicine
you take. The bank you use. The device or computer you're looking at right now. Chances are
that all of them have been developed or improved using Minitab. Minitab is the leading statistical
software used for quality improvement and statistics education worldwide.
3. SPSS:
SPSS has scores of statistical and mathematical functions, scores of statistical procedures, and
very flexible data handling capability. It can read data in almost any format (e.g., numeric,
alphanumeric, binary, dollar, date and time formats), and version 6 onwards can read files created
using spreadsheet/database software. It also has excellent data manipulation utilities. The
following is a brief overview of some of the functionality of SPSS:
Data transformations
Data Examination
Descriptive Statistics
Contingency tables
Reliability tests
Correlation
T-tests
ANOVA
In SPSS we can use the np chart (number of non-conforming units from all produced) and the c
chart (number of non-conformities) when each sample has an equal size, and the p chart
(proportion of non-conforming units from all produced) and the u chart (number of
non-conformities per unit) for unequal sample sizes.
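As a hedged sketch of why the p chart suits unequal sample sizes, each sample size n gets its own 3-sigma limits around the overall fraction non-conforming. The defect counts and sample sizes below are made up for illustration:

```python
import math

# Hypothetical attribute data: defectives found in samples of unequal size.
defectives = [3, 5, 2, 6, 4]
sizes = [50, 60, 45, 70, 55]

p_bar = sum(defectives) / sum(sizes)  # overall fraction non-conforming
limits = []
for n in sizes:
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    lcl = max(0.0, p_bar - 3 * sigma)  # a proportion cannot be negative
    ucl = p_bar + 3 * sigma
    limits.append((lcl, ucl))
    print(f"n={n}: LCL={lcl:.4f}, UCL={ucl:.4f}")
```

Note that the limits widen for smaller samples and narrow for larger ones, which is exactly why a single fixed pair of limits (as on an np chart) is inappropriate here.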
Example of making a control chart in Excel:
Click on the Insert tab, click on Line Chart, and then click on Line.
You have created your chart. Resize it. Remove the small black lines by double-clicking on them
and pressing Delete. That's it; you're done.
QUESTIONS:
EXPERIMENT NO. 11 (STUDY OF SPC TECHNIQUES)
OBJECT:
INTRODUCTION:
Statistical process control (SPC) is a method of quality control that uses statistical
methods to monitor and control a process. Monitoring and controlling
the process ensures that it operates at its full potential.
Important SPC tools:
Check Sheet
Cause-and-Effect Diagram
Pareto Chart
Scatter Diagram
Probability Plot
Histogram
Control Charts
1. Check Sheet:
The check sheet is a form (document) used to collect data in real time at the location where the
data is generated. The data it captures can be quantitative or qualitative. When the information is
quantitative, the check sheet is sometimes called a tally sheet. Check sheets are simply charts for
gathering data. When check sheets are designed clearly and cleanly, they assist in gathering
accurate and pertinent data, and allow the data to be easily read and used.
The design should make use of input from those who will actually be using the check sheets.
This input can help make sure accurate data is collected and invites positive involvement from
those who will be recording the data. Check sheets can be kept electronically, simplifying the
eventual input of the data into SQC.
SQC can use data from all major spreadsheets, including Excel and Lotus 123, all major database
programs and some other SPC software programs. Since most people have a spreadsheet
program on their desktop PC, it might be easiest to design a check sheet in a spreadsheet format.
Check sheets should be easy to understand. The requirements for getting the data into an
electronic format from paper should be clear and easy to implement.
4. Scatter Diagram
The Scatter plot is another problem analysis tool. Scatter plots are also called correlation charts.
A Scatter plot is used to uncover possible cause-and-effect relationships. It is constructed by
plotting two variables against one another on a pair of axes. A Scatter plot cannot prove that one
variable causes another, but it does show how a pair of variables is related and the strength of
that relationship. Statistical tests quantify the degree of correlation between the variables.
5. Probability Plot
In order to use Control Charts, the data needs to approximate a normal distribution, to generally
form the familiar bell-shaped curve. The probability plot is a graph of the cumulative relative
frequencies of the data, plotted on a normal probability scale. If the data is normal it forms a line
that is fairly straight. The purpose of this plot is to show whether the data approximates a normal
distribution. Although a probability plot is useful in analyzing data for normality, it is
particularly useful for determining how capable a process is when the data is not normally
distributed. That is, we are interested in finding the limits within which most of the data fall.
Since the probability plot shows the percent of the data that falls below a given value, we can
sketch the curve that best fits the data. We can then read the value that corresponds to 0.001
(0.1%) of the data. This is generally considered the lower natural limit. The value corresponding
to 0.999 (99.9%) is generally considered the upper natural limit.
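The 0.1% and 99.9% natural limits described above can be estimated directly from the data's quantiles. A minimal sketch on simulated (not measured) normal data:

```python
import random
import statistics

random.seed(1)  # reproducible simulated data: mean 100, sigma 5
data = sorted(random.gauss(100, 5) for _ in range(10_000))

# quantiles(..., n=1000) returns 999 cut points: the first is the 0.1%
# point and the last is the 99.9% point, i.e., the natural limits.
cuts = statistics.quantiles(data, n=1000)
lower_natural, upper_natural = cuts[0], cuts[-1]
print(f"lower natural limit = {lower_natural:.2f}")
print(f"upper natural limit = {upper_natural:.2f}")
```

For normal data these estimates land near the mean plus or minus about 3.09 standard deviations, matching the 0.1%/99.9% tail areas.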
6. Histogram
A histogram is a snapshot of the variation of a product or the results of a process. It often forms
the bell shaped curve which is characteristic of a normal process. The histogram helps you
analyze what is going on in the process and helps show the capability of a process, whether the
data is falling inside the bell-shaped curve and within specifications.
A histogram displays a frequency distribution of the occurrence of the various measurements.
The variable being measured is along the horizontal x-axis, and is grouped into a range of
measurements. The frequency of occurrence of each measurement is charted along the vertical y-
axis. Histograms depict the central tendency or mean of the data, and its variation or spread. A
histogram also shows the range of measurements, which defines the process capability.
A histogram can show characteristics of the process being measured, such as:
Do the results show a normal distribution, a bell curve? If not, why not?
Does the range of the data indicate that the process is capable of producing what is
required by the customer or the specifications?
How much improvement is necessary to meet specifications? Is this level of improvement
possible in the current process?
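The grouping behind a histogram can be sketched as follows; the ten measurements and the bin layout are invented for illustration:

```python
# Group measurements into equal-width bins and tally frequencies.
measurements = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 10.1, 9.9, 10.0, 10.3]
lo, width = 9.7, 0.2
counts = [0, 0, 0, 0]  # bins [9.7,9.9), [9.9,10.1), [10.1,10.3), [10.3,10.5)

for x in measurements:
    idx = min(int((x - lo) / width), len(counts) - 1)
    counts[idx] += 1

# Crude text histogram: the tallest bar marks the central tendency,
# and the occupied bins show the spread (range) of the data.
for i, c in enumerate(counts):
    print(f"[{lo + i * width:.1f}, {lo + (i + 1) * width:.1f}): {'#' * c}")
```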
7. Control Charts
Control charts are used to routinely monitor quality. Depending on the number of process
characteristics to be monitored, there are two basic types of control charts. The first, referred to
as a uni-variate control chart, is a graphical display (chart) of one quality characteristic. The
second, referred to as a multivariate control chart, is a graphical display of a statistic that
summarizes or represents more than one quality characteristic.
Characteristics of control charts
If a single quality characteristic has been measured or computed from a sample, the control chart
shows the value of the quality characteristic versus the sample number or versus time. In general,
the chart contains a center line that represents the mean value for the in-control process. Two
other horizontal lines, called the upper control limit (UCL) and the lower control limit (LCL), are
also shown on the chart. These control limits are chosen so that almost all of the data points will
fall within these limits as long as the process remains in-control. The figure below illustrates this.
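For an X-bar chart, a common way to set these limits uses the average subgroup range and the tabulated constant A2 (0.577 for subgroups of size 5). The three subgroups below are invented for illustration:

```python
# Hypothetical subgroups of 5 measurements each (e.g., shaft diameters).
subgroups = [
    [5.02, 4.98, 5.01, 5.00, 4.99],
    [5.03, 5.00, 4.97, 5.01, 5.02],
    [4.99, 5.01, 5.00, 4.98, 5.02],
]
A2 = 0.577  # standard control-chart constant for subgroup size n = 5

xbars = [sum(g) / len(g) for g in subgroups]
ranges = [max(g) - min(g) for g in subgroups]
x_double_bar = sum(xbars) / len(xbars)   # grand mean -> center line
r_bar = sum(ranges) / len(ranges)        # average range

ucl = x_double_bar + A2 * r_bar          # upper control limit
lcl = x_double_bar - A2 * r_bar          # lower control limit
print(f"CL={x_double_bar:.4f}  UCL={ucl:.4f}  LCL={lcl:.4f}")
```

Subgroup means are then plotted against these limits; a mean outside [LCL, UCL] signals a possible assignable cause.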
QUESTIONS:
1. What is SPC?
2. What is the use of SPC techniques?
3. What is the meaning of control limits in a control chart?
4. How does a histogram show the variation in a process?
5. How does a cause-and-effect diagram show causes and effects in SPC?
EXPERIMENT NO. 12 (QFD)
OBJECT:
INTRODUCTION:
Quality Function Deployment (QFD) uses a matrix format to capture a number of issues
that are vital to the planning process. The House of Quality Matrix is the most recognized and
widely used form of this method. It translates customer requirements, based on marketing
research and benchmarking data, into an appropriate number of engineering targets to be met by
a new product design. Basically, it is the nerve center and the engine that drives the entire QFD
process. According to Hauser and Clausing, it is “a kind of conceptual map that provides the
means for interfunctional planning and communication.”
There are many different forms of the House of Quality, but its ability to be adapted to the
requirements of a particular problem make it a very strong and reliable system to use. Its general
format is made up of six major components. These include customer requirements, technical
requirements, a planning matrix, an interrelationship matrix, a technical correlation matrix, and a
technical priorities/benchmarks and targets section.
1. CUSTOMER REQUIREMENTS
Customers buy benefits and producers offer features. This seems like a relatively simple notion;
however, unless customers and producers are perfectly in tune with one another, it may be very
difficult to anticipate these features, or each underlying benefit, from each producer.
After determining what items are most important to the customer, organizations must translate
them into particular specifications. Nothing can be produced, serviced or maintained without
detailed specifications or some set of given standards. Each aspect of the desired item must be
clearly defined: measurements must be defined, heights specified, torques stated, and weights
targeted.
These values can be derived from several locations. Organizations can use known data from
market research, or conduct new studies to gather necessary information. In any event, the needs,
which were clarified and then explicitly stated, should be satisfied to the best of that
organization’s ability.
2. TECHNICAL REQUIREMENTS
The next step of the QFD process is identifying what the customer wants and what must be
achieved to satisfy these wants. In addition, regulatory standards and requirements dictated by
management must be identified. Once all requirements are identified it is important to answer
what must be done to the product design to fulfill the necessary requirements.
Requirements: a list of requirements from customers, management and regulatory standards.
What: an expanded list of what needs to be done to the product to fulfill the requirements.
Figure 2 explains how to use a requirement chart to help the design process.
3. INTERRELATIONSHIP MATRIX
The main function of the interrelationship matrix is to establish a connection between the
customer’s product requirements and the performance measures designed to improve the
product. The first step in constructing this matrix involves obtaining the opinions of the
consumers as far as what they need and require from a specific product. These views are drawn
from the planning matrix and placed on the left side of the interrelationship matrix.
With this customer overview, the company can begin to formulate a strategy to improve their
product. In doing this, the strengths and weaknesses of the company are weighed against the
customer priorities to determine what aspects need to be changed to surpass the competition,
what aspects need to change to equal the competition, and what aspects will be left unchanged.
The optimal combination is desired.
Knowing what improvements need to be made allows the list of performance measures to be
generated and displayed across the top of the interrelationship matrix. By definition, a
performance measure is a technical measure evaluating the product’s performance of a
demanded quality (Terninko). In other words, the company must take the voice of the customer
and translate it into engineering terms. The matrix will have at least one performance measure
for each demanded quality.
After setting up the basic matrix, it is necessary to assign relationships between the customer
requirements and the performance measures. These relationships are portrayed by symbols
indicating a strong relationship, a medium relationship, or a weak relationship. The symbols in
turn are assigned respective indexes such as 9-3-1, 4-2-1, or 5-3-1. When no relationship is
evident between a pair, a zero value is assigned. The interrelationship matrix should
follow the Pareto principle, keeping in mind that designing to the critical 20% will satisfy 80% of
the customer desires (Terninko). Therefore, there should not be a significant number of strong
relationships between pairs.
4. TECHNICAL CORRELATION MATRIX (THE ROOF)
++ - Strong positive
+ - Positive
x - Negative
xx - Strong negative
These symbols are then entered into the cells where a correlation has been identified. The
objective is to highlight any requirements that might be in conflict with each other.
Any cell identified with a high correlation is a strong signal to the team, and especially to the
engineers, that significant communication and coordination are a must if any changes are going
to be made. If there is a negative or strongly negative impact between requirements, the design
must be compromised unless the negative impact can be designed out. Some conflicts can’t be
resolved because they are an issue of physics. Others can be design-related, which leaves it up to
the team to decide how to resolve them. Negative impacts can also represent constraints, which
may be bi-directional. As a result, improving one of them may actually cause a negative impact
to the other. Sometimes an identified change impairs so many others that it is just simply better
to leave it alone. According to Step-By-Step QFD by John Terninko, asking the following
question when working with this part of the House of Quality helps to clarify the relationships
among requirements: “If technical requirement X is improved, will it help or hinder technical
requirement Z?”
Many technical requirements are related to each other so working to improve one may help a
related requirement and a positive or beneficial effect can result. On the other hand, working to
improve one requirement may negatively affect a related requirement as mentioned above. One
of the principal benefits of the Roof is that it flags these negative relationships so they can be
resolved. If these issues aren’t settled satisfactorily, some aspects of the final product will
dissatisfy the customer.
The customer requirements are distributed across the relationships to the quality characteristics.
This gives an organization prioritized quality characteristics. High priority quality characteristics
usually indicate that working on this technical issue will deliver great value to the customer. A
high quality characteristic weight indicates strong relationships with high priority demanded
quality items.
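This weighting step can be sketched numerically. The pen requirements, priorities, and 9-3-1 relationship strengths below are hypothetical:

```python
# Customer requirements with priorities from the planning matrix (assumed).
priorities = {"easy to grip": 5, "writes smoothly": 4, "low cost": 3}

# Interrelationship matrix: strength of each requirement's relationship to
# each performance measure (9 strong, 3 medium, 1 weak, 0 none).
matrix = {
    "easy to grip":    {"barrel diameter": 9, "ink viscosity": 0, "unit cost": 1},
    "writes smoothly": {"barrel diameter": 1, "ink viscosity": 9, "unit cost": 0},
    "low cost":        {"barrel diameter": 3, "ink viscosity": 3, "unit cost": 9},
}

# Distribute each customer priority across its relationships to obtain
# a weight for every quality characteristic (performance measure).
weights = {}
for req, row in matrix.items():
    for measure, strength in row.items():
        weights[measure] = weights.get(measure, 0) + priorities[req] * strength

for measure, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{measure}: {w}")
```

The measure with the largest weight is the technical issue whose improvement delivers the greatest value to the customer.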
QUESTIONS:
1. What is QFD?
2. Draw the House of Quality matrix.
3. List the customer requirements for a pen.
4. What is the use of the interrelationship matrix?