Quality Engineering
INTRODUCTION
Quality engineering is a discipline that focuses on the principles and practices of
ensuring that products and services meet or exceed customer needs. It involves analyzing,
developing, and implementing systems to achieve this goal. Quality engineering can be
applied to any type of product development.
Quality Engineering consists of analysis methods and the development of systems to
ensure that products or services are designed, developed, and manufactured to meet customer requirements.
Quality Dimensions:
Garvin proposes eight critical dimensions or categories of quality that can serve as a
framework for strategic analysis: Performance, features, reliability, conformance,
durability, serviceability, aesthetics, and perceived quality.
1. Performance
Performance refers to the expected operating characteristics of a product or service:
does it do what it is supposed to do? The primary operating characteristics involve
measurable elements, which makes performance relatively easy to evaluate objectively.
Some performance requirements reflect subjective preferences, but when a preference is
shared by almost every consumer, it becomes as powerful as an objective requirement.
2. Features
Where performance covers a product's primary operating characteristics, features are the
characteristics that determine how appealing a product or service is to the consumer.
Such features are the extras of a product or service and complement its basic functioning.
This means that the ones designing a product or service should be familiar with the end-
users and should be updated on developments in consumer preferences.
Often it’s difficult to see a clear line between primary performance attributes and
additional features.
3. Reliability
Reliability is usually closely related to performance. The focus of the dimension
reliability is more on how long a product will perform consistently according to the
specifications of that product. This is important to customers who need the product to
work without any errors and contributes to a brand or company’s image.
The reliability dimension reflects the probability of a product showing signs of error
within a specific period of time. To measure reliability, you can measure the time to the
first failure, the mean time between failures, and the failure rate per specific period of
time.
These measures are usually applied to products that are expected to last for a longer time
and not so much for products that are meant to be used directly and for a shorter time
period.
Usually when the costs for maintenance or downtime increase, reliability as a dimension
of quality becomes more important to consumers.
Example
For example, for parents with children who depend on a car, the reliability of the car
becomes an important element. Also for most farmers, reliability is a key attribute.
This group of consumers is sensitive to downtime, especially during the shorter harvest
seasons. For a farmer, reliable equipment can be crucial in preventing spoiled crops.
Also, the reliability of computers is key for many consumers.
4. Conformance
This dimension is closely related to the performance and features dimensions.
Conformance is about the extent to which a product or service meets its specifications:
does it function, and does it have all the features, as specified? Every product and
service comes with some sort of specification.
Example
For example, the materials used or the dimensions of a product can be specified and set
as a target specification for the product.
The specification can also define a tolerance, which states how much a product is
allowed to deviate from the target. A problem with this approach is that producers may
pay less attention to whether the target specifications have been met, as long as they
stay within the tolerance limits.
5. Durability
Out of the eight dimensions of quality, the dimension durability is about how long a
product will last or perform and under what conditions it will perform. Estimating the
length of a product’s life becomes complicated when it’s possible to repair the product.
For such products, the durability will be counted until it is no longer economically
beneficial to use it. This is when the repairs and the costs of repairing increase.
Customers then must weigh the costs for future repairs against the costs of investing in a
new one together with its operating expenses. In other cases, durability is measured by
the amount of use someone gets from a product before it stops working and repair is
impossible. This is the case, for example, when a light bulb burns out and must be
replaced by a new one; repairing it is impossible.
6. Serviceability
Serviceability is one of the eight dimensions of quality that reflects on if the product is
relatively easy to maintain and repair. This becomes important for consumers who are
more focused on the total cost of ownership as criteria for selecting a product.
Serviceability reflects on how easy it is for the consumer to obtain repair service, how
responsive the service personnel is, and how reliable the service is. It also focuses on the
speed with which a product can be repaired and also the competence and behaviour of the
personnel.
Customers' concerns are not only about the product developing defects, but also about
how long it takes for the product to be repaired. It is not only important whether a product
can be fixed, but also how satisfied the customer is with the company's complaint
handling procedures.
This can affect how the customer evaluates the service quality and eventually the
company’s reputation. Each company has a different way of dealing with complaint
handling and not every company attaches the same level of importance to serviceability.
Example
For example, there are companies that do their best to resolve the complaints they
receive, while others don’t offer any service when it comes to complaints. An example of
improving a company’s serviceability is by installing a cost-free phone number to reach
the helplines.
7. Aesthetics
The aesthetics dimension is all about the way a product looks and contributes to the
company’s identity or a brand. Aesthetics is not only about how a product looks but also
about how it feels, tastes, smells or sounds.
Not all people prefer the same taste or smell, which makes it impossible to please every
single customer. For this reason, companies end up searching for a niche.
8. Perceived Quality
Reputation plays a significant role when it comes to perceived quality. It's easier for
a customer to trust the quality of a company’s new product when the established products
received positive reviews.
The north star of sustainable business speed requires making quality a first-class citizen
in the entire software production system. It consequently requires setting the boundaries
of Quality Engineering.
With a systemic view, Quality Engineering acts on the software production system, taking
digital business ideas as input and producing as output a software increment that is
actually in operation and, ideally, valuable.
In the broader picture, the left and right boundaries of Quality Engineering start once
there is a minimal software production system, usually at a start-up stage with 2-3 people,
and extend through the remaining business maturity stages, from series A through series E,
with about 500 FTEs dedicated to software production.
At the bottom, Quality Engineering identifies the necessary software production elements but
doesn’t detail their complete implementation, left to the existing body of knowledge in each
area. On the upper boundary, Quality Engineering does not cover what the software
production system is used for, that is the features.
Quality Pains are more subjective but speak more easily to multiple stakeholders who are
not necessarily familiar with the internals of software production. They can take different
forms, from dependency on a single person to lack of usability.
Speed Pains are more easily supported by metrics and numbers, being more factual. The
challenge in that case lies more in consolidating the data and framing its value correctly,
in perspective and compared to others.
Quality Inspection
In quality engineering, inspection is a process that involves measuring, testing, or
examining a product's characteristics and comparing the results to specified
requirements. The goal of inspection is to determine if the product meets pre-established
quality standards. Inspection is a preventative measure that can help detect potential
defects in products or services before they reach customers. It's an important part of
manufacturing and delivering products to customers with consistent quality.
Quality Inspection is an activity of checking, measuring, or testing one or more product
or service characteristics and comparing the results with the specified requirements to
confirm compliance. An efficient inspection process standardizes quality, eliminates
paper documents, and increases efficiency on the floor.
The Pre-Production Inspection (PPI) is conducted before the production process begins
and helps to assess the quantity and quality of the raw materials and components and
whether they conform to the relevant product specifications.
A PPI is beneficial when you work with a new supplier, especially if your project is a
large contract with critical delivery dates. This inspection can help to reduce or
eliminate miscommunication between you and your supplier on issues regarding production
timelines, shipping dates, and quality expectations.
During Production Inspections (DPI) take place when only 10-15% of units are completed, so
that any deviations can be identified, feedback given, and any defects re-checked to
confirm they have been corrected. It enables you to confirm that quality, as well as
compliance with specifications, is being maintained throughout the production process. It
also provides early detection of any issues requiring correction, thereby reducing delays
and rework.
Pre-shipment inspections (PSI) are an important step in the quality control process and
the method for checking the quality of goods before they are shipped. PSI ensures that
production complies with the specifications of the buyer. This inspection process is
conducted on finished products when at least 80% of the order has been packed for
shipping. Random samples are selected and inspected for defects against the relevant
standards and procedures.
Container loading and unloading inspections ensure your products are loaded and
unloaded correctly. Inspectors will supervise throughout the whole process and ensure
your products are handled professionally to guarantee their safe arrival to their final
destination.
This inspection will usually take place at your chosen factory while the cargo is being
loaded into the shipping container and at the destination once the products have arrived
and are being unloaded. This process includes evaluating the condition of the shipping
container and verifying all the product information, quantities, and packaging
compliance.
Quality Control
Quality control (QC) is a set of procedures in quality engineering that ensures a product
or service meets quality standards by monitoring results after development and
production:
• Planning: Carefully plan to ensure quality
• Equipment: Use the right equipment
• Inspection: Continuously inspect for defects and issues
• Corrective action: Take action to fix problems, such as repairing or eliminating defective
units, or purchasing better quality materials
QC is a reactive process that focuses on fixing existing defects, while quality engineering
(QE) is proactive and embeds quality considerations throughout the SDLC. QE focuses
on introducing new quality methodologies and how they impact the company, including
cultural and procedural changes.
QC is a subset of quality assurance (QA), which is more related to how a process is
performed or a product is made. QC is the inspection aspect of quality management,
while QA is the confirmation that requirements have been met.
QC can help engineering projects be successful by reducing rework, costs, and issues,
which can save time and money. It can also help maintain team morale, which can make
teams more motivated and productive.
Key Components of Quality Control
Key components of Quality Control may include:
• Training and Education: Providing employees with the necessary skills and knowledge
to maintain quality standards effectively.
• Continuous Improvement: Constantly analyzing data and feedback to identify areas for
improvement and enhancing the overall quality management system.
Quality Control is closely related to another quality management concept called Quality
Assurance (QA). While QC focuses on detecting and correcting defects, QA concentrates
on preventing them from occurring in the first place by setting up robust processes and
procedures.
Together, QC and QA form the backbone of an organization's quality management
system, helping to ensure that products and services consistently meet or exceed
customer expectations and regulatory requirements.
Customer service reviews, questionnaires, surveys, inspections, and audits are a few
examples of quality testing procedures that can be used in non-manufacturing businesses.
A company can use any procedure or technique to ensure that the final product or service
is safe, compliant, and meets consumer demands.
Just as quality is a relative word with many interpretations, quality control itself doesn’t
have a uniform, universal process. Some methods depend on the industry. Take food and
drug products, for instance, where errors can put people at risk and create significant
liability. These industries may rely more heavily on scientific measures, whereas others
(such as education or coaching) may require a more holistic, qualitative method.
At its core, quality control requires attention to detail and research methodology.
So, what is quality control? There are a wide range of quality control methods,
including:
Control Charts:
A graph or chart is used to study how processes are changing over time. Using statistics,
the business and manufacturing processes are analyzed for being “in control.”
Process Control:
Processes are monitored and adjusted to ensure quality and improve performance. This is
typically a technical process using feedback loops, industrial-level controls, and chemical
processes to achieve consistency.
Acceptance Sampling:
A random sample is drawn from a lot or batch and inspected, and the entire lot is accepted
or rejected based on how many defective items the sample contains. A sketch of this idea
follows below.
There are other quality control factors to consider when selecting a method in addition to
types of processes.
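The single sampling plan described above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed plan: the plan parameters (a sample of n = 50 with acceptance number c = 2) are assumed for the example, and the binomial formula assumes the lot is large relative to the sample.

```python
from math import comb

def acceptance_probability(lot_fraction_defective: float, n: int, c: int) -> float:
    """Probability of accepting a lot under a single sampling plan:
    inspect n randomly drawn items and accept the lot if at most c
    of them are defective (binomial approximation for large lots)."""
    p = lot_fraction_defective
    return sum(comb(n, d) * p**d * (1 - p) ** (n - d) for d in range(c + 1))

# Hypothetical plan: sample 50 items, accept the lot if 2 or fewer fail.
for p in (0.01, 0.05, 0.10):
    print(f"lot {p:.0%} defective -> P(accept) = {acceptance_probability(p, 50, 2):.3f}")
```

Plotting this acceptance probability against the lot fraction defective gives the plan's operating characteristic (OC) curve.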
Some companies establish internal quality control divisions to monitor products and
services, while others rely on external bodies to track products and performance. These
controls may be largely dependent on
the industry of the business. Due to the strict nature of food inspections, for example, it
may be in a company’s best interest to sample products internally and verify these results
in a third-party lab.
Quality Control (QC) is essential for various reasons, and its importance lies in the
numerous benefits it brings to both businesses and consumers. Here are some key reasons
why QC is crucial:
• Compliance and Regulations: QC ensures that products and services adhere to industry
standards and regulatory requirements, avoiding legal issues and penalties.
• Risk Mitigation: Through rigorous testing and inspections, QC helps identify potential
risks and hazards, enabling businesses to address them proactively.
• Customer Retention and Loyalty: Satisfied customers are more likely to remain loyal
and recommend the brand to others, contributing to long-term business success.
Quality Assurance
Quality assurance helps a company create products and services that meet the needs,
expectations and requirements of customers. It yields high-quality product offerings that
build trust and loyalty with customers. The standards and procedures defined by a quality
assurance program help prevent product defects before they arise.
Failure testing
Failure testing continually tests a product to determine if it breaks or fails. For physical
products that need to withstand stress, this could involve testing the product under heat,
pressure or vibration. For software products, failure testing might involve placing the
software under high usage or load conditions.
Statistical process control (SPC)
SPC is a methodology based on objective data and analysis, developed by Walter
Shewhart at Western Electric Company and Bell Telephone Laboratories in the 1920s
and 1930s. This methodology uses statistical methods to manage and control the
production of products.
Total quality management (TQM)
TQM applies quantitative methods as the basis for continuous improvement. It relies on
facts, data and analysis to support product planning and performance reviews.
Food production, for example, uses X-ray systems, among other techniques, to detect
physical contaminants in the food production process. The X-ray systems ensure that
contaminants are removed before products leave the factory.
Quality planning
Quality planning in quality engineering is a structured process for defining quality
standards, practices, and specifications for a product, service, project, or contract. It's an
essential step before starting a project and involves establishing the project, determining
the steps to take, and coordinating quality-related activities. The goal is to ensure that the
result meets customer needs.
• Benefit/Cost Analysis:
It is the process of estimating costs and benefits of various project quality
management activities. The main benefit of meeting quality is less rework and
therefore resulting in higher productivity, lower costs, and increased customer
satisfaction. The main cost of meeting quality requirements is the cost associated with
quality management activities. The cost of quality is generally of two types:
o Cost of Conformance to Requirements: Cost of completing the project
work to satisfy the expected level of quality and project scope. Example:
Prevention costs and appraisal.
o Cost of Non-Conformance: It includes costs due to some kind of failure.
Internal failure costs are incurred before the customer receives the
product; external failure costs arise after the customer receives it.
These costs are generally very high. Examples: start-up cost, project-
related cost, continuous cost.
• Benchmarking: It involves comparing project practices with those of other projects
within the same organization or with other companies. It is done to generate ideas for
improvement and provide a basis for measurement.
• Creating a Flowchart: A flowchart is any diagram that shows how various
components of a system are interrelated. There are two types of flowcharts used in
quality management. These are as follows:
1. System or Process Flow Charts: These show how various elements of a
system interrelate and depict the flow of the process through the system.
2. Cause-And-Effect Diagrams: It is also known as Ishikawa or Fishbone
diagram. It shows how variables within a process relate and how those relations
create potential problems.
Quality cost
Cost of quality (COQ) is defined as a methodology that allows an organization to
determine the extent to which its resources are used for activities that prevent poor
quality, that appraise the quality of the organization's products or services, and that result
from internal and external failures.
The Cost of Quality can be divided into four categories. They include Prevention,
Appraisal, Internal Failure and External Failure. Within each of the four categories there
are numerous possible sources of cost related to good or poor quality.
CoQ isn't just a neat concept; it's a critical tool that can transform your business strategy.
It can help you spot areas where a bit of investment now can save you a lot of trouble (and
money).
But here's the thing: the goal isn't to cut your CoQ down to zero. It's about finding the
sweet spot. Spending too little on prevention and appraisal could cost you more in failure
costs later.
How To Calculate The Cost of Quality
Measuring the Cost of Quality (CoQ) is straightforward: CoQ is the sum of two major
components, the Cost of Good Quality (CoGQ) and the Cost of Poor Quality (CoPQ).
• Prevention Costs: Investments made to keep defects from occurring in the first
place, such as quality training and process planning. The formula below assumes
these total $7,000.
• Appraisal Costs: Time spent evaluating your product to ensure it meets quality
standards. Suppose you spend $3,000 on beta product testing and $1,500 on
supplier assessments. Your total appraisal costs would be $4,500.
• Internal Failure Costs: Costs related to defects identified before reaching the
customer. If reworking defective units cost you $6,000 and scrapping useless
materials costs $2,000, your total internal failure costs would be $8,000.
• External Failure Costs: Costs arising from defects identified after the customer
receives the product. If your warranty claims total $7,000 and product returns cost
you $1,000, your total external failure costs would be $8,000.
So, if we want to put this into a formula, it would look like this:
CoQ = CoGQ + CoPQ = (Prevention Costs + Appraisal Costs) + (Internal Failure Costs + External Failure Costs)
Using the numbers from our examples, you'd calculate your CoQ as follows:
CoQ = ($7,000 + $4,500) + ($8,000 + $8,000) = $27,500
Remember, the goal isn't to drive your CoQ to zero—it's to find the right balance. You
want to invest enough in good quality costs to reduce poor quality costs. Regularly
calculating and analyzing your CoQ can help you find this balance and increase customer
satisfaction and profitability.
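As a quick illustration, the calculation above can be scripted. This is a minimal sketch using the example figures from the text (with the $7,000 prevention total taken from the formula above); the function name is illustrative, not a standard API.

```python
def cost_of_quality(prevention, appraisal, internal_failure, external_failure):
    """CoQ = Cost of Good Quality (prevention + appraisal)
           + Cost of Poor Quality (internal + external failure)."""
    cogq = prevention + appraisal
    copq = internal_failure + external_failure
    return cogq + copq

# Figures from the example above; prevention = $7,000 as used in the formula.
coq = cost_of_quality(prevention=7_000, appraisal=4_500,
                      internal_failure=8_000, external_failure=8_000)
print(f"CoQ = ${coq:,}")  # CoQ = $27,500
```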
Economics of quality
The economics of quality is a methodology that helps organizations improve customer
satisfaction while reducing costs. It involves tracking the costs and benefits of prevention,
appraisal, and failure (PAF) to determine the most cost-effective solution. The cost of
quality (COQ) can be further broken down into the cost of good quality (conformance)
and poor quality (non-conformance).
Here are some examples of costs that fall under the economics of quality:
• Appraisal costs
These include costs associated with testing raw materials, parts, and components, as
well as other goods purchased from outside sources. They also include inspection and
testing during production.
• Internal failure costs
These costs arise when a product or service fails to meet quality requirements before
delivery. For example, software delays can cause delays in reaching the operational
phase of a system's life, which can lead to significant losses.
The economics of quality is supported by the ISO/TR 10014 international standard:
Quality management guideline. Some courses, such as Quality Engineering Management
(QEMT) at Lambton College, teach students how to analyze and report the cost of
quality, and how to establish a cost of quality program.
Beyond the basic quality management system, the economics of quality adds explicit
economic objectives. It guides management toward its entrepreneurial intent while
continuously increasing productivity, and it tracks both short-term and long-term
objectives, continuously evaluating how well they are being fulfilled.
The formula for the quality loss function is L(y) = k * (y - T)^2, where:
• L(y): The loss
• y: The measured value of the quality characteristic
• T: The target value
• k: A constant that depends on the unit of measurement and the tolerance limits
The function can take several forms depending on the nature of the quality
characteristics, such as "lower-the-better" or "higher-the-better". Genichi Taguchi
developed a quality loss function that considers three cases: nominal-the-best, smaller-
the-better, and larger-the-better. The methodology for the larger-the-better case is slightly
different from the other two.
In Taguchi's model, quality is defined as the loss caused to society by the shipment of a
product. Loss includes costs of operation, failure, maintenance, and customer
dissatisfaction. The goal is to have zero defects, or to hit the target exactly.
The loss function is quadratic: as the deviation from the target (y - T) increases, the
loss grows with the square of the deviation.
Taguchi's quality loss function offers a way to compute quality costs. The goal is zero
defects, i.e., hitting the target exactly. With L = loss (cost) in dollars, D = deviation
from target, and T = the Taguchi cost parameter (the constant called k above), the loss is
L = T*D^2, i.e., L = T*(actual value - target)^2.
Example: the specifications for the diameter of a gear are 25.00 ± 0.25 mm.
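To make the loss function concrete, here is a minimal sketch for the gear example. The source does not give the cost constant, so the sketch assumes, hypothetically, that a gear at the tolerance limit (25.25 mm) costs $10 to rework; fixing the loss at the tolerance limit is a common way to set k.

```python
def taguchi_loss(y: float, target: float, k: float) -> float:
    """Quadratic quality loss L(y) = k * (y - target)**2."""
    return k * (y - target) ** 2

# Assumed: $10 loss at the 0.25 mm tolerance limit -> k = 10 / 0.25**2 = 160.
k = 10 / 0.25**2
for diameter in (25.00, 25.10, 25.25):
    print(f"d = {diameter:.2f} mm -> loss = ${taguchi_loss(diameter, 25.00, k):.2f}")
```

Note how a part at 25.10 mm, well inside the tolerance, still carries a loss ($1.60); this is the key difference between Taguchi's view of quality and simple conformance to tolerance limits.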
OIE354 - QUALITY ENGINEERING
UNIT-II
CONTROL CHART
A control chart—sometimes called a Shewhart chart, a statistical process control chart, or an
SPC chart—is one of several graphical tools typically used in quality control analysis to
understand how a process changes over time.
Chart details
A control chart consists of:
• Points representing a statistic (e.g., a mean, range, proportion) of measurements of a quality
characteristic in samples taken from the process at different times (i.e., the data)
• The mean of this statistic using all the samples is calculated (e.g., the mean of the means,
mean of the ranges, mean of the proportions) - or for a reference period against which
change can be assessed. Similarly a median can be used instead.
• A centre line is drawn at the value of the mean or median of the statistic
• The standard deviation (e.g., sqrt(variance) of the mean) of the statistic is calculated using all
the samples - or again for a reference period against which change can be assessed. (In the
case of XmR charts, strictly this is an approximation of the standard deviation; it does not
make the assumption of homogeneity of the process over time that the standard deviation makes.)
• Upper and lower control limits (sometimes called "natural process limits") that indicate the
threshold at which the process output is considered statistically 'unlikely' and are drawn
typically at 3 standard deviations from the center line
The chart may have other optional features, including:
• More restrictive upper and lower warning or control limits, drawn as separate lines, typically
two standard deviations above and below the center line. This is regularly used when a
process needs tighter controls on variability.
• Division into zones, with the addition of rules governing frequencies of observations in each
zone
• Annotation with events of interest, as determined by the Quality Engineer in charge of the
process' quality
• Action on special causes
Control charts omit specification limits or targets because of the tendency of those involved with
the process (e.g., machine operators) to focus on performing to specification when in fact the
least-cost course of action is to keep process variation as low as possible. Attempting to make a
process whose natural centre is not the same as the target perform to target specification
increases process variability and increases costs significantly and is the cause of much inefficiency
in operations. Process capability studies do examine the relationship between the natural
process limits (the control limits) and specifications, however.
The purpose of control charts is to allow simple detection of events that are indicative of an
increase in process variability. [12] This simple decision can be difficult where the process
characteristic is continuously varying; the control chart provides statistically objective criteria of
change. When a change is detected and considered good, its cause should be identified and
possibly made the new way of working; where the change is bad, its cause should be identified
and eliminated.
The purpose in adding warning limits or subdividing the control chart into zones is to provide
early notification if something is amiss. Instead of immediately launching a process improvement
effort to determine whether special causes are present, the Quality Engineer may temporarily
increase the rate at which samples are taken from the process output until it is clear that the
process is truly in control. Note that with three-sigma limits, common-cause variations result in
signals less than once out of every twenty-two points for skewed processes and about once out
of every three hundred seventy (1/370.4) points for normally distributed processes. [13] The two-
sigma warning levels will be reached about once for every twenty-two (1/21.98) plotted points
in normally distributed data. (For example, the means of sufficiently large samples drawn from
practically any underlying distribution whose variance exists are normally distributed, according
to the Central Limit Theorem.)
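The run-length figures quoted above follow directly from the normal distribution. Here is a small sketch reproducing them, using only the standard library (erfc gives the two-sided normal tail probability):

```python
from math import erfc, sqrt

def two_sided_tail(k_sigma: float) -> float:
    """P(|Z| > k) for a standard normal variable Z."""
    return erfc(k_sigma / sqrt(2))

for k in (2, 3):
    p = two_sided_tail(k)
    print(f"{k}-sigma limits: P(point outside) = {p:.5f}, "
          f"i.e. about 1 in {1 / p:.1f} in-control points")
# 3-sigma: ~1 false signal per 370.4 points; 2-sigma: ~1 warning per 22.0 points
```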
Out-of-Control Situations
• If at least one point plots beyond the control limits, the process is out of control
• If the points behave in a systematic or nonrandom manner, then the process could be out
of control.
Statistical process control (SPC) or statistical quality control (SQC) is the application of statistical
methods to monitor and control the quality of a production process. This helps to ensure that
the process operates efficiently, producing more specification-conforming products with less
waste (rework or scrap). SPC can be applied to any process where the "conforming product" (product
meeting specifications) output can be measured. Key tools used in SPC include run charts, control
charts, a focus on continuous improvement, and the design of experiments. An example of a
process where SPC is applied is manufacturing lines.
SPC must be practiced in two phases: The first phase is the initial establishment of the
process, and the second phase is the regular production use of the process. In the second phase,
a decision of the period to be examined must be made, depending upon the change in 5M&E
conditions (Man, Machine, Material, Method, Movement, Environment) and wear rate of parts
used in the manufacturing process (machine parts, jigs, and fixtures).
An advantage of SPC over other methods of quality control, such as "inspection," is that
it emphasizes early detection and prevention of problems, rather than the correction of problems
after they have occurred.
In addition to reducing waste, SPC can lead to a reduction in the time required to produce
the product. SPC makes it less likely the finished product will need to be reworked or scrapped.
SPC uses statistical tools to observe the performance of the production process in order to detect
significant variations before they result in the production of a sub-standard article. Any source of
variation at any point of time in a process will fall into one of two classes.
(1) Common causes
'Common' causes are sometimes referred to as 'non-assignable', or 'normal' sources of
variation. It refers to any source of variation that consistently acts on the process, of which
there are typically many. These causes collectively produce a statistically stable and
repeatable distribution over time.
(2) Special causes
'Special' causes are sometimes referred to as 'assignable' sources of variation. The term
refers to any factor causing variation that affects only some of the process output. They
are often intermittent and unpredictable.
Application
The application of SPC involves three main phases of activity:
1. Understanding the process and the specification limits.
2. Eliminating assignable (special) sources of variation, so that the process is stable.
3. Monitoring the ongoing production process, assisted by the use of control charts, to
detect significant changes of mean or variation.
The proper implementation of SPC has been limited, in part due to a lack of statistical expertise
at many organizations.[13]
Control Charts for Variables
Control charts for variables can be classified based on the subgroup summary statistic
plotted on the chart. The X-bar chart describes the subgroup averages or means, the R chart
displays the subgroup ranges, and the S chart shows the subgroup standard deviations.
An X-bar chart is a frequently used type of quality control chart, where the y-axis tracks the degree
to which the deviation of the tested attribute is acceptable.
The X-bar chart is used to monitor the mean of successive samples of constant size (n).
The x-axis on the X-bar chart tracks the samples tested. Analyzing the pattern of variance depicted
by a quality control chart can help determine if defects are occurring randomly or systematically.
This type of control chart is used for characteristics that can be measured on a continuous
scale, such as weight, temperature, thickness, etc. For example, one might take seven samples of
a particular device component from production every hour, measure the width of each, and then
plot the average of the seven width values on the chart for each sample.
Now, while the X-bar chart is essential since it helps to monitor the average or the mean of the
process and how this changes over time, it is never used alone and is most often used in
combination with the R-chart.
• Range or R chart
An R-chart is a statistical quality assurance graph for determining the stability and predictability
of a process. The R-chart reflects within-sample variation, since the R values are the differences
between the highest and lowest measurements in each sample. In other words, the R-chart shows the
sample range and monitors process variability at regular intervals from a process.
While the X-bar chart shows the overall mean or process mean, the R-chart shows how the spread of
each sample varies around the R-chart's own statistical center line.
Together, X-bar and R-charts are quality control charts used in conjunction to keep track of the
mean and variation of a process using samples gathered over a period of time. Both charts’ control
limits are used to monitor the mean and variation of the process going forward.
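As a sketch of how the two charts are computed together: the usual approach estimates the limits from the average range R-bar using tabulated constants (A2, D3, D4, which depend on the subgroup size n). The measurements below are hypothetical, and only three subgroups are shown for brevity, though about 20 are recommended before trusting the limits.

```python
import statistics

# Standard control-chart constants for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

# Hypothetical width measurements: subgroups of n = 5 parts each.
subgroups = [
    [10.2, 10.1, 9.9, 10.0, 10.3],
    [10.0, 9.8, 10.1, 10.2, 10.0],
    [9.9, 10.0, 10.2, 10.1, 9.8],
]

xbars = [statistics.mean(s) for s in subgroups]     # subgroup means
ranges = [max(s) - min(s) for s in subgroups]       # subgroup ranges
xbarbar = statistics.mean(xbars)                    # grand mean
rbar = statistics.mean(ranges)                      # average range

print(f"X-bar chart: CL={xbarbar:.3f}, "
      f"UCL={xbarbar + A2 * rbar:.3f}, LCL={xbarbar - A2 * rbar:.3f}")
print(f"R chart:     CL={rbar:.3f}, UCL={D4 * rbar:.3f}, LCL={D3 * rbar:.3f}")
```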
1. Standard Deviation or σ Chart
In quality control processes, standard deviation measures the variation or dispersion of a
set of values from their mean. It plays a crucial role by quantifying the consistency and
predictability of manufacturing processes.
Standard deviation is a fundamental statistical tool in quality control that measures how much
variation exists from the average or mean. In a manufacturing context, you want parts to be as
close to the design specifications as possible. A small standard deviation means that most of the
produced items are very similar to the mean value, which is typically the target dimension. This
consistency is vital for product reliability and functionality. If the standard deviation is large,
however, it indicates that there's a wide spread in the product dimensions, which can lead to
assembly problems, parts that don't fit, or even product failures.
2. Process Monitoring
In quality control, standard deviation is used for process monitoring. It's a way to keep an
eye on the consistency of production over time. By plotting the standard deviation of production
measurements, you can quickly see if something changes or starts to go wrong. This could be due
to machine wear, operator error, or material inconsistencies. A stable process has a consistent
standard deviation, while a process that's out of control will show significant variation in its
standard deviation over time, prompting immediate investigation and corrective actions.
3. Setting Tolerances
Setting tolerances is another area where standard deviation is invaluable. Tolerances are
the acceptable limits within which a product can vary from its specified dimensions. By
understanding the standard deviation of your process, you can set realistic tolerances that ensure
functionality while not being overly restrictive. If you set tolerances without considering standard
deviation, you might end up with an unachievable quality standard or, conversely, a product that
doesn't meet customer needs. It's all about finding the right balance.
4. Continuous Improvement
Continuous improvement in quality control is about making incremental changes to
enhance product quality and process efficiency. Standard deviation plays a pivotal role here by
helping you measure the impact of these changes. If an adjustment leads to a lower standard
deviation, it means the process variation has decreased, which is usually a good sign. You can
also compare standard deviations before and after changes to objectively assess their
effectiveness. This data-driven approach ensures that efforts to improve quality are grounded in
reality.
5. Risk Management
Risk management in quality control involves anticipating and mitigating potential issues
that could affect product quality. Standard deviation helps identify areas of high variability that
might pose a risk to consistent production. For example, if certain measurements have a high
standard deviation, there's a greater chance that some products will fall outside of the acceptable
range. By focusing on reducing the standard deviation in these areas, you can proactively manage
risks and prevent defects before they occur.
Red bead experiment data: number of red beads (defectives) found in each person's four samples of 50 beads.

Person   Sample 1   Sample 2   Sample 3   Sample 4
Tom         12          8          6          9
David        5          8          6         13
Paul        12          9          8          9
Sally        9         12         10          6
Fred        10         10         11         10
Sue         10         16          9         11
The np control chart plots the number of defects (red beads) in each subgroup (sample number)
of 50. The center line is the average. The upper dotted line is the upper control. The lower dotted
line is the lower control limit. As long as all the points are inside the control limits and there are
no patterns to the points, the process is in statistical control. We know what it will produce in the
future. While we don’t know the exact number of red beads a person will draw the next time, we
know it will be between about 2 and 17 (the control limits) and average about 10.
Steps in Constructing an np Control Chart
The steps in constructing the np chart are given below. The data from above is used to
demonstrate the calculations.
1. Gather the data.
a. Select the subgroup size (n). Attributes data often require large subgroup sizes (50 –
200). The subgroup size should be large enough to have several defective items. The
subgroup size must be constant.
In the red bead example, the subgroup size is 50.
b. Select the frequency with which the data will be collected. Data should be collected in
the order in which it is generated.
c. Select the number of subgroups (k) to be collected before control limits are calculated.
You can start a control chart with as few as five to six points but you should recalculate
the average and control limits until you have about 20 subgroups.
d. Inspect each item in the subgroup and record the item as either defective or non-
defective. If an item has several defects, it is still counted as one defective item.
e. Determine np for each subgroup.
np = number of defective items found
f. Record the data.
2. Plot the data
a. Select the scales for the control chart.
b. Plot the values of np for each subgroup on the control chart.
c. Connect consecutive points with straight lines.
3. Calculate the process average and control limits.
a. Calculate the process average number defective:
np-bar = (np1 + np2 + ... + npk) / k, where k is the number of subgroups.
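The source breaks off here, so for completeness the following is a minimal sketch of the remaining calculation under the standard np-chart formulas (center line np-bar, limits np-bar ± 3*sqrt(np-bar*(1 - p-bar))), applied to the red bead data above read row by row.

```python
from math import sqrt

n = 50  # constant subgroup size (red bead example)
# Number of defectives (red beads) per subgroup: the table above, row by row.
np_counts = [12, 8, 6, 9, 5, 8, 6, 13, 12, 9, 8, 9,
             9, 12, 10, 6, 10, 10, 11, 10, 10, 16, 9, 11]

np_bar = sum(np_counts) / len(np_counts)   # average number defective
p_bar = np_bar / n                         # average fraction defective
sigma = sqrt(np_bar * (1 - p_bar))
ucl = np_bar + 3 * sigma
lcl = max(0.0, np_bar - 3 * sigma)
print(f"CL = {np_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
# CL ~ 9.54, UCL ~ 17.88, LCL ~ 1.21, consistent with "between about 2 and 17"
```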
On occasion, there is a customer complaint. Sometimes someone gets injured on the job.
Sometimes hospital patients get infections. These situations involve counting type
attributes data: each count (customer complaint, injury, or infection) is considered a
defect. c and u control charts are two types of attribute control charts that can be used
to monitor and improve these types of processes. Both charts track the variation in
counting type attributes data.

Purpose
The purpose of this module is to introduce c and u control charts - what they are, when
they can be used, how to construct them, and how to interpret both charts. In addition,
the small sample case for c and u control charts is introduced. The c and u control
charts tell you if the process is in statistical control or if there are special causes present.

Attributes Data and Control Charts
We sometimes collect data that involves counts; for example, the number of injuries in a
plant, the number of mistakes on an invoice, whether a delivery is on time or not, or
whether a product is in specification or not. These types of data are called attributes
data. There are two types of attributes data: yes/no and counting type. With yes/no data,
you are examining distinct items (such as invoices, deliveries, or phone calls). With
counting type data, you are usually examining an area where a defect has an opportunity
to occur. Both types of data are explained below.

Yes/No Data
For each item, there are only two possible outcomes: either it passes or it fails some
preset specification. Each item inspected is either defective (i.e., it does not meet the
specifications) or is not defective (i.e., it meets specifications). Examples of yes/no
data are phone answered/not answered, product in spec/not in spec, shipment on time/not
on time, and invoice correct/incorrect. If you have yes/no data, you will use either a p
or np control chart to examine the variation in the fraction of items not meeting (or
meeting) a preset specification in a group of items. You would use a p control chart if
the subgroup size (the number of items examined in a given time period) changes over
time. You would use the np control chart if the subgroup size stays the same.

Counting Data
With counting data, you count the number of defects. A defect occurs when something does
not meet a preset specification. It does not mean that the item itself is defective. For
example, a television set can have a scratched cabinet (a defect) but still work
properly. When looking at counting data, you have whole numbers such as 0, 1, 2, 3; you
can't have half of a count. If you have counting data, you will use a c or u control
chart. The c control chart is used if the area stayed constant from sample to sample; the
u control chart is used if the area did not stay constant.

If you don't have data based on counts, you have variables data. Variables data are taken
from a continuum and are often referred to as continuous. Variables data can,
theoretically, be measured to any precision you like. Examples of variables data include
time, length, width, density, dollars, and height.

Understanding c and u Control Charts
Both the c and u control charts are used to look at variation in counting type attributes
data. They are used to determine the variation in the number of defects in a subgroup.
The subgroup size usually refers to the area being examined. For example, a c control
chart can be used to monitor the number of injuries in a plant. In this case, the plant is
the subgroup. If the subgroup size remains constant, the c control chart is used. If the
subgroup size varies, the u control chart is used.

You will often use a c or u control chart if the item is complex in nature. It does not
make much sense to characterize something like a television set, a car, or a computer as
being defective or not defective. For example, a television set may have a scratch on the
surface, but that defect hardly makes the television set defective. The real issue is how
many defects there are on the television set. Rating items as defective or not defective
is also not very useful if the item is continuous. For example, suppose you are making a
plastic sheet. The fact that the sheet has a small defect such as a bubble or blemish on
it does not make it defective. However, if there are too many bubbles, the sheet may not
be useful for its intended purpose.

Suppose, for example, that you make plastic sheets that are used for sheet protectors.
Bubbles on the plastic sheet are considered defects. You can monitor the number of
bubbles over time by counting the number of bubbles on one plastic sheet. The plastic
sheet is the area of opportunity for defects to occur. The number of bubbles is the
number of defects (c). When looking at counting data, you end up with whole numbers such
as 0, 1, 2, 3; you can't have half of a defect. Thus, with the plastic sheet example, you
will have 1 bubble, 2 bubbles, etc.

There are two ways to track this counting type data, depending on what you are plotting
and whether or not the area of opportunity for defects to occur is constant. The c
control chart plots the number of defects (c) over time; the area of opportunity must be
the same over time, which means that you use the same size sheet each time you are
counting the bubbles in the sheet. The u control chart plots the number of defects per
inspection unit (c/n) over time; the area of opportunity can vary over time, which means
that you can vary the number of sheets or the area examined for bubbles each time.
c Control Charts
c charts are used to look at variation in counting type attributes data. They are used to
determine the variation in the number of defects in a constant subgroup size. Subgroup size
usually refers to the area being examined. For example, a c chart can be used to monitor the
number of injuries in a plant. In this case, the plant is the subgroup. Since the plant doesn’t
change size very often, it is a subgroup of constant size.
To use the c chart, the opportunities for defects to occur in the subgroup must be very
large, but the number that actually occurs must be small. For example, the opportunity for
injuries to occur in a plant is very large, but the number that actually occurs is small.
Operational definitions (March 2004 publication, available on the website) must be used to
determine what constitutes a defect. A subgroup can contain different types of defects. For
example, a customer complaint can occur for a number of different reasons. If this is the case,
operational definitions must exist for each defect.
The figure above is an example of a c chart. In this example, the variation in the number
of OSHA recordable injuries in a plant is being monitored. The subgroup size in this case is the
plant. The operational definition for an injury is any injury that is OSHA recordable. The
opportunity for defects to occur is very large since there are many opportunities for injuries to
occur. However, the number of defects (injuries) that actually occur is small. A c chart is
appropriate under these conditions. The time frame was selected as one month.
The number of injuries each month is plotted for a two-year period. For example, there were 2
injuries (c = 2) in January 2002 and 4 injuries (c =4) in February 2002. The overall average and
control limits have also been calculated and plotted.
The figure is in statistical control. What does it mean when the c chart is in statistical
control? It means that there are only common causes of variation present (see January 2004
publication on variation, available on the web-site). It also means that the number of injuries will
remain consistent in the near future. The average number per month will be around 2. Some
months it may be as high as 6, others as low as 0.
Since the process is in control, the system must be changed to decrease the number of
injuries. This is management’s responsibility. However, the people closest to the job will usually
have great ideas about what needs to be done to improve the process. They usually do not have
the authority to make the required changes. Attempting to decrease the number of injuries by
encouraging the workforce to be more careful will not work. The process must be changed.
Examine the reasons why injuries occur. A Pareto diagram can be used for this. Look for ways to
prevent injuries from occurring in the first place.
The steps in constructing a c chart are given below, together with the equations you need.
1. Gather the data
a. Select the subgroup size. The subgroup size is the area where defects have the opportunity to
occur. It must be constant from subgroup to subgroup. The opportunity for defects to occur must
be large. The number of defects that actually occur must be small.
b. Select the frequency with which the data will be collected. Data should be collected in the
order in which they are generated.
c. Select the number of subgroups (k) to be collected before control limits will be calculated (at
least twenty).
d. Count the number of defects (c) in each subgroup. Ensure that operational definitions of a
defect are complete.
e. Record the data.
2. Plot the data.
a. Select the scales for the control chart.
b. Plot the values of c for each subgroup on the control chart.
c. Connect consecutive points with straight lines.
3. Calculate the process average.
a. Calculate the process average number of defects (c-bar):
c-bar = (c1 + c2 + ... + ck) / k, where k is the number of subgroups.
b. Draw the process average number of defects on the control chart as a solid line and label.
4. Calculate the control limits.
a. Calculate the control limits for the c chart. The upper control limit is given by
UCLc = c-bar + 3*sqrt(c-bar). The lower control limit is given by LCLc = c-bar - 3*sqrt(c-bar),
set to zero if the result is negative.
b. Draw the control limits on the control chart as dashed lines and label.
5. Interpret the chart for statistical control.
a. The following tests for statistical control, as a minimum, should be used. If any of
these conditions are present, the process is out of statistical control due to the
presence of a special cause of variation.
· Points beyond the control limits
· Seven points in a row trending up or trending down
· Seven points in a row above or below the average
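Putting steps 3 to 5 together, here is a minimal sketch of a c chart calculation. The injury counts are hypothetical (a two-year monthly series in the spirit of the example above); the limit formulas are the standard ones for a c chart.

```python
from math import sqrt

# Hypothetical monthly OSHA-recordable injury counts over two years.
c_values = [2, 4, 1, 3, 2, 0, 5, 2, 1, 3, 2, 4,
            1, 2, 3, 0, 2, 1, 4, 2, 3, 1, 2, 0]

c_bar = sum(c_values) / len(c_values)     # process average (step 3)
ucl = c_bar + 3 * sqrt(c_bar)             # UCLc (step 4)
lcl = max(0.0, c_bar - 3 * sqrt(c_bar))   # LCLc, floored at zero

print(f"c-bar = {c_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
beyond = [i for i, c in enumerate(c_values, 1) if c > ucl or c < lcl]
print("points beyond the limits:", beyond or "none -> in statistical control")
```

A full implementation would also apply the run tests from step 5 (seven points in a row trending, or seven points on one side of the average).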
Warning limits
These are set at +2s and –2s, where s is the standard deviation for the control chart.
Control limits
These are typically set at +/- 3 standard deviations, but can be adjusted to +/- 2 standard
deviations. The width of the control limits is determined by the size of sigma and the number of
standard deviations specified.
Control charts
Control charts are powerful tools used by many industries to monitor the quality of processes and
detect special causes of variation in them. The S² control chart is one of the most widely used
tools to monitor whether the variance of some quality characteristic (X), which is assumed to be
normally distributed, has changed from an in-control (IC) to an out-of-control (OOC) situation.
The main objective of this chart is to detect increases of any magnitude in the process variance
as soon as possible. In this context, if the actual process variance is larger than its in-control
level, the process is considered to be in an out-of-control state.
Multivariate Chart
A multivariate chart is a quality control chart that monitors the variation in multiple product
attributes simultaneously. It's used to detect shifts in the mean or covariance of several related
parameters.
A quality control chart that analyzes a specific attribute of a product is called a univariate chart,
while a chart measuring variances in several product attributes is called a multivariate chart.
Randomly selected products are tested for the given attribute(s) the chart is tracking.
An obvious advantage of using multivariate charts is that they enable you to minimize the total
number of control charts you need to manage, but there are some additional related benefits
involved as well:
• Analyzing process parameters jointly: Many process parameters are related to one
another, for example, for a particular process step we might expect the pressure value to be
large when temperature is high. Considering every process parameter separately is not
necessarily a good option and might even be misleading. Detecting any mismatch between
parameter settings may be very useful.
In the graph below, the Y1 and Y2 parameter values are correlated (high values for Y1 are
associated with high values for Y2) so that the red point in the lower right corner appears
to be out-of-control (beyond the control ellipse) from a multivariate point of view. From a
univariate perspective, this red point remains within the usual fluctuation bounds for both
Y1 and Y2, though. This point clearly represents a mismatch between Y1 and Y2. The
squared generalized multivariate distance from the red point to the scatterplot mean is
unusually large.
Overall rate of false alarms: The probability of a false alarm with three-sigma standard limits in
a control chart is 0.27%. If 100 charts are monitored at the same time, the chance of at least one
false alarm rises to roughly 27% (approximately 0.27% * 100).
However, when numerous variables are monitored simultaneously using a single multivariate
chart, the overall/family rate of false alarms remains close to 0.27%.
3-D measurements: When three-dimensional measurements of a product are taken, the amount of
data needed to ensure that all dimensions (X, Y and Z) remain within specifications can get pretty
big. But if the product gets damaged in a particular area, it will usually affect more than one
dimension, so the three dimensions should not be considered separately from one another. If a
multivariate chart simultaneously monitors deviations from the ideal planned X, Y, Z values, their
combined effects will be taken into account.
• Fewer charts: Multivariate charts can monitor many tool process parameters with fewer charts.
• 3-D measurements: Multivariate charts are useful for monitoring 3-D measurements.
• Process variability: Multivariate charts can monitor process variability when multivariate data is
collected in subgroups.
Some challenges of multivariate charts include:
• Calculating them
Multivariate charts can be more difficult to calculate than univariate charts.
• Identifying out-of-control signals
It can be difficult to pinpoint which variable produced an out-of-control signal.
The ultimate purpose of trend analysis in quality management is to identify, evaluate, and
eliminate any issue that is having a negative effect on product quality. Quality trend analysis is
a particularly useful monitoring mechanism when changes are made to processes, especially those
related to manufacturing. It is the means of determining when a corrective action/preventive action
(CAPA) should be launched in response to audit findings, customer
complaints, deviations, equipment service/maintenance reports, nonconformances, etc.
For most life sciences companies, there are two main quality trend analysis methods:
▪ Performance trending.
▪ Process trending.
Both trending methods are most commonly expressed in various types of charts. The charts used
for trend analysis in quality management usually illustrate data points such as:
▪ Threshold limits.
▪ Alert limits.
▪ Action limits.
The wide variety of quality trend analysis charts life sciences manufacturers use to visualize
quality control activities typically fall under two general classifications:
▪ Attributes charts: Anything that can be quantified or rated with a pass/fail grade can be
expressed in an attributes chart.
▪ Variables charts: Any quality aspect that is measurable (i.e., length, temperature, weight,
etc.) can be expressed in a variables chart.
The metrics you choose to track in your quality trend analysis charts should target trends in data
movement. These metrics and the limits you set as thresholds should all be geared toward meeting
three key criteria:
Most importantly, though, the data points you analyze and the methodologies you use to analyze
them have to be consistent if you expect the trends that emerge from your quality trend analysis to
be genuine, according to American Society of Quality (ASQ) fellow and Quality Systems
Compliance Managing Principal Consultant Mark Durivage.
“There is no one perfect way to analyze data trends. However, for a trending program to be
successful, consistency is important,” Durivage said. “Pick a method and stick with it.”
The moving average/moving range (MA/MR) chart is a control chart that monitors the mean and
variation of a process over time:
Moving average charts are generally used in SPC software for detecting small shifts in the
process mean. They will detect shifts of 0.5 sigma to 2 sigma much faster than Shewhart charts
(e.g., X-bar and Individual-X charts) with the same subgroup size. They are, however, slower in
detecting large shifts in the process mean. In addition, the typical run test rules cannot be used,
because consecutive moving-average points are dependent.
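As a minimal sketch (with hypothetical data, hypothetical in-control parameters, and a span of w = 3), the following shows how moving-average points and their limits could be computed; note that the limits are wider for the first few points, where the average contains fewer observations.

```python
import numpy as np

# Hypothetical individual measurements and known in-control parameters
x = np.array([10.2, 9.9, 10.1, 10.4, 10.0, 10.6, 10.5, 10.7, 10.8, 10.6])
mu, sigma, w = 10.0, 0.3, 3      # in-control mean, in-control sigma, MA span

# Moving average of span w (the first w-1 points average fewer observations)
ma = np.array([x[max(0, i - w + 1): i + 1].mean() for i in range(len(x))])

# Limits narrow as more points enter each average: mu +/- 3*sigma/sqrt(k)
k = np.minimum(np.arange(1, len(x) + 1), w)
ucl = mu + 3 * sigma / np.sqrt(k)
lcl = mu - 3 * sigma / np.sqrt(k)

for i, (m, u, l) in enumerate(zip(ma, ucl, lcl)):
    flag = "OUT" if (m > u or m < l) else ""
    print(f"t={i:2d}  MA={m:.3f}  LCL={l:.3f}  UCL={u:.3f} {flag}")
```

Averaging over a window smooths out noise, which is why a sustained small shift pushes the moving average across a limit sooner than it would push individual points; the same dependence between windows is what invalidates the usual run rules.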
Cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) control charts
are both used to monitor processes and detect small to moderate shifts:
An exponentially weighted moving average (EWMA) chart is a type of control chart used to
monitor small shifts in the process mean. It weights observations in geometrically decreasing order
so that the most recent observations contribute highly while the oldest observations contribute very
little.
The EWMA chart plots the exponentially weighted moving average of individual measurements
or subgroup means.
Given a series of observations x_1, x_2, … and a fixed weight λ (0 < λ ≤ 1), the exponentially
weighted moving average is computed recursively as

EWMA_t = λ · x_t + (1 − λ) · EWMA_(t−1),

with EWMA_0 usually set to the process mean or target value. Repeating this calculation over the
entire series creates the exponentially weighted moving average statistic.
The EWMA requires:
▪ A weight for the most recent observation. Weight must satisfy 0 < weight ≤ 1. Default weight=0.2.
The "best" value is a matter of personal preference and experience. A small weight reduces the
influence of the most recent sample; a large value increases the influence of the most recent
sample. A value of 1 reduces the chart to a Shewhart Xbar chart. Recommendations suggest a
weight between 0.05 and 0.25 (Montgomery 2012).
When designing an EWMA chart it is necessary to consider the average run length and shift to be
detected. Extensive guidance is available on suitable parameters (Montgomery 2012).
It is possible to modify the EWMA, so it responds more quickly to detect a process that is out-of-
control at start-up. This modification is done using a further exponentially decreasing adjustment
to narrow the limits of the first few observations (Montgomery 2012).
Each point on the chart represents the value of the exponentially weighted moving average.
The center line is the process mean. If unspecified, the process mean is the weighted mean of the
subgroup means or the mean of the individual observations.
The control limits are a multiple (L) of sigma above and below the center line. Default L=3. If
unspecified, the process sigma is the pooled standard deviation of the subgroups, or the standard
deviation of the individual observations, unless the chart is combined with an R-, S-, or MR- chart
where it is estimated as described for the respective chart.
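Pulling the pieces above together, here is a minimal sketch of an EWMA chart for individual measurements. The data and in-control parameters are hypothetical; λ = 0.2 and L = 3 are the defaults mentioned above, and the time-varying limit formula is the standard one (Montgomery 2012).

```python
import numpy as np

# Hypothetical individual measurements; mu0/sigma are assumed in-control values
x = np.array([9.9, 10.1, 10.0, 10.3, 10.2, 10.5, 10.4, 10.6])
lam, L, mu0, sigma = 0.2, 3.0, 10.0, 0.25

z = mu0                                  # EWMA starts at the process mean
for t, xt in enumerate(x, start=1):
    z = lam * xt + (1 - lam) * z         # z_t = lam*x_t + (1-lam)*z_(t-1)
    # Time-varying limits; the bracketed term converges to lam/(2-lam)
    half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    ucl, lcl = mu0 + half, mu0 - half
    flag = "OUT" if not (lcl <= z <= ucl) else ""
    print(f"t={t}  EWMA={z:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f} {flag}")
```

The limits widen over the first few samples and then stabilise, which is the behaviour the fast-initial-response modification described above adjusts further.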
Because the EWMA is a weighted average of all past and the current observations, it is very
insensitive to the assumption of normality. It is, therefore, an ideal replacement for a Shewhart I-
chart when normality cannot be assumed.
Like the CUSUM chart, the EWMA chart is sensitive to small shifts in the process mean but does
not match the ability of a Shewhart chart to detect larger shifts. For this reason, it is sometimes
used together with a Shewhart chart.
2 Marks
1. Define control chart?
Control charts are powerful tools used by many industries to monitor the quality of processes and
detect special causes of variation in them. For example, the S² control chart is one of the most
widely used tools to monitor whether the variance of a quality characteristic (X), assumed to be
normally distributed, has changed from an in-control (IC) to an out-of-control (OOC) situation.
The main objective of this chart is to detect increases of any magnitude in the process variance as
soon as possible: if the actual process variance is larger than its in-control level, the process is
considered to be in an out-of-control state.
2. Define EWMA?
An exponentially weighted moving average (EWMA) chart is a type of control chart used to
monitor small shifts in the process mean. It weights observations in geometrically decreasing order
so that the most recent observations contribute highly while the oldest observations contribute very
little.
3. Define a multivariate chart?
A multivariate chart is a quality control chart that monitors the variation in multiple product
attributes simultaneously. It's used to detect shifts in the mean or covariance of several related
parameters.
4. What Is the Primary Objective of Trend Analysis in Quality Management?
The ultimate purpose of trend analysis in quality management is to identify, evaluate, and
eliminate any issue that is having a negative effect on product quality. Quality trend analysis is
a particularly useful monitoring mechanism when changes are made to processes, especially those
related to manufacturing.
8. Define warning limits?
These are set at +2s and –2s, where s is the standard deviation for the control chart.
Unit-IV
Process stability
Process stability is a key concept in statistical process control (SPC) and refers to a process's ability
to maintain consistency and predictability over time:
• Definition
A process is considered stable when it consistently operates within defined control limits and
produces outputs that fall within the process width.
• Variations
A stable process only exhibits common cause variations, which are small, random fluctuations
that are expected and inherent to the system.
• Causes of instability
Processes can become unstable over time due to variations caused by external disturbances or
inherent process variations.
• Control limits
The control limits for a process are defined as the Upper Control Limit (UCL) and the Lower
Control Limit (LCL).
• Histograms
Histograms can be used to visualize data and identify outliers in a process.
• Control charts
Control charts can be created by plotting data after calculating the mean, standard deviation, and
control limits, as sketched below.
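A minimal sketch of that last point, using hypothetical data: sigma is estimated from the average moving range (divided by the constant d2 = 1.128 for moving ranges of span two), and the 3-sigma control limits are computed from it.

```python
import numpy as np

# Hypothetical process measurements
x = np.array([5.1, 4.9, 5.0, 5.2, 4.8, 5.3, 5.1, 5.0, 4.7, 5.2])

xbar = x.mean()
mr = np.abs(np.diff(x))               # moving ranges between consecutive points
sigma_hat = mr.mean() / 1.128         # d2 = 1.128 for moving ranges of size 2

ucl = xbar + 3 * sigma_hat            # Upper Control Limit
lcl = xbar - 3 * sigma_hat            # Lower Control Limit

out = x[(x > ucl) | (x < lcl)]        # points signalling a special cause
print(f"mean={xbar:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}  out-of-control: {out}")
```

Points inside the limits reflect common cause variation only; any point outside them is treated as a special cause signal in the sense discussed below.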
Process stability refers to the ability of the process to stay within its control limits. If the
process distribution remains consistent over time, i.e. the outputs fall within the range (the
process width), then the process is said to be stable or in control.
Process
A process is a set of sequential activities that convert the input to the desired output.
Stability
Process stability is the ability of the process to perform within predictable limits; in other
words, the stability of a process refers to its predictability.
Processes tend to become unstable over time due to variations. The variations occur either because
of the inherent nature of the process or because of external or forced changes, called disturbances.
When the process is operating in a stable state, we can expect it to produce comparable results
over a period of time. Thus, even though the process has variations, it will repeat similar
variations in the future while it remains stable.
That is how the process behaves predictably.
Types of Variations
1. Common Cause Variations: We call the inherent process variations common cause variations.
2. Assignable Cause Variations: When the process suffers from an external disturbance or a
significant drift, we call these disturbances assignable cause variations or special cause
variations.
When a process exhibits an assignable cause, meaning some disturbance, it becomes unpredictable,
because disturbances are unpredictable or non-repeatable to some extent.
We categorise a process without any special causes as a stable process and a process with one or
more special causes as an unstable process.
Usually, continuous processes consist of zones of common and special causes. The zones without
any special cause are stable zones; unstable zones are the periods in which the process exhibits a
special cause.
1. We cannot predict the process performance when it has a special cause variation.
2. The presence of a special cause indicates a significant drift or disturbance in the process.
Hence, it is important to address special causes and to monitor stability.
3. Its presence also distorts the sample data properties and, in turn, leads to inaccurate
estimation of the population behaviour.
4. We cannot consider an unstable process for improvement under Six Sigma DMAIC, SPC, or any
other initiative.
If you are the process owner, you can start plotting the observations on a control chart. If the
control limits are comfortably within the specification limits, then a special cause indicates the
need for an adjustment in the process.
What if you are considering the process for improvement and the sample data shows a special
cause? It may indicate that the sample data is not a true representative of the process behaviour.
Several graphical tools support process capability analysis (PCA):
• Histogram
A histogram can be used to illustrate the output of a process capability measurement.
• Probability plot
A probability plot compares a variable's ordered values to the percentiles of a theoretical
distribution, such as the normal distribution. If the data distribution matches the theoretical
distribution, the points on the plot will form a linear pattern.
• Control chart
Control charts can be used to monitor and evaluate process quality and capability differences.
Other tools used in PCA include design of experiments, the rate of nonconformities, and
capability indices.
PCA is used in product and process design, supply chain management, production planning, and
maintenance.
The histogram, along with the mean x̄ and standard deviation s, enables us to assess process
capability by looking first at the shape of the histogram. If it reasonably approximates a normal
distribution, then x̄ ± 3s can be used when assessing process capability; a sketch of this
calculation follows.
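A minimal sketch of this assessment, using hypothetical measurements and hypothetical specification limits: it reports the natural tolerance x̄ ± 3s, the Cp and Cpk indices, and the estimated fraction nonconforming under the normal assumption.

```python
import numpy as np
from scipy import stats

# Hypothetical measurements and specification limits
x = np.array([74.8, 75.2, 75.0, 74.9, 75.3, 75.1, 74.7, 75.4, 75.0, 74.9])
lsl, usl = 72.0, 78.0

xbar, s = x.mean(), x.std(ddof=1)

# Natural tolerance: if the histogram looks roughly normal, about 99.73%
# of output falls within xbar +/- 3s
print("natural tolerance:", xbar - 3 * s, "to", xbar + 3 * s)

cp = (usl - lsl) / (6 * s)                          # potential capability
cpk = min(usl - xbar, xbar - lsl) / (3 * s)         # capability allowing for centring
print(f"Cp={cp:.2f}  Cpk={cpk:.2f}")

# Estimated fraction nonconforming under the normal assumption
p_out = stats.norm.cdf(lsl, xbar, s) + stats.norm.sf(usl, xbar, s)
print(f"estimated nonconformance: {100 * p_out:.3f}%")
```

If the histogram is clearly non-normal, this shortcut breaks down and one of the fitted distributions described below (Weibull, lognormal, gamma) or a data transformation is the safer route.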
Process capability analysis using a histogram, probability plot and control chart
Capability Histogram: A histogram of the data is displayed to help visualise its distribution. LCL,
target and UCL values are indicated, as well as the mean, median, mode and quartiles.
Up to six distributions can also be fitted and displayed on the histogram. The first three of these
are reserved by the program to display normal curves with overall and pooled (if the data has
subgroups) standard deviations and with deviation around the target (if a target has been
specified; see the definition above). The remaining three distributions are set by default to
Weibull, lognormal and gamma distributions, but these can be changed by
selecting the Edit → Distributions dialogue from the graphics menu, after clicking on the [Opt]
button situated to the left of the Capability Histogram check box on the Output Options
dialogue.
Normal Probability Plot: Original Data: A Normal Probability Plot of the original data is
displayed together with Anderson-Darling Test results in the legend. You can compare this
graph with the next one to visualise the improvement provided by the transformation – if there
is one.
Normal Probability Plot: Transformed Data: This option is enabled if a transformation has been
selected in the previous dialogue. A Normal Probability Plot of the transformed data is
displayed together with Anderson-Darling Test results in the legend. You can compare this
graph with the previous one to visualise the improvement provided by the transformation.
This probability plot provides a process performance statement relative to specification limits of
72 and 78. There is no reason to believe that these data do not follow a normal distribution, since
the P-value is 0.522 (upper-right table of the plot), which is greater than 0.05, a commonly used
value in hypothesis testing for determining statistical significance. From this plot, a process
capability/performance metric estimate would be 17.4% nonconformance [(100 − 96.793)
+ 14.192 = 17.399], which is consistent with a visual estimate of the area under the curve beyond
the specification limits in Figure 1.
An overall summary about this process and its performance can be netted out using a 30,000-
foot-level charting format, as shown in Figure 3. The bottom of a 30,000-foot-level report-out
includes a quantification of whether a process has a recent region of stability (predictable
process) or not and a prediction statement, if the process is predictable.
Gauge capability studies are a vital part of Statistical Process Control (SPC) because they help
determine the extent of variability in a measurement system. The goal is to understand and
quantify the sources of variability in the measurement process so that the process and product
quality can be improved.
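As a deliberately simplified illustration (not a full ANOVA Gage R&R), the sketch below decomposes hypothetical gauge study data, in which two operators each measure three parts three times, into rough repeatability, reproducibility, and part-to-part components.

```python
import numpy as np

# Hypothetical gauge study: shape = (operators, parts, repeat trials)
data = np.array([[[10.1, 10.2, 10.1], [10.5, 10.6, 10.5], [9.9, 10.0, 9.9]],
                 [[10.2, 10.3, 10.2], [10.6, 10.7, 10.6], [10.0, 10.1, 10.0]]])

# Repeatability (equipment variation): pooled variance of repeat trials
repeatability_var = data.var(axis=2, ddof=1).mean()

# Reproducibility (operator variation): variance of operator means
# (rough estimate; a full study would subtract the repeatability share)
reproducibility_var = data.mean(axis=(1, 2)).var(ddof=1)

# Part-to-part variation: variance of part means
part_var = data.mean(axis=(0, 2)).var(ddof=1)

grr_var = repeatability_var + reproducibility_var
total_var = grr_var + part_var
print(f"%GRR (of total variation): {100 * np.sqrt(grr_var / total_var):.1f}%")
```

The point of the study is the ratio: if gauge variation is a large share of total variation, the measurement system, not the process, should be improved first.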
Here are some things to consider when conducting a gauge capability study:
• JMP
You can set specification limits in JMP by:
1. Right-clicking the column you want to set limits for
2. Selecting Column Properties > Spec Limits
3. Entering values for the lower and upper specification limits, or a target value
4. Selecting the Show as Graph Reference Lines check box
5. Clicking OK
• Database Manager
You can set specification limits in Database Manager by:
1. Double-clicking the Specification Limits table
2. Clicking Options > Add in the Database Manager menu bar
3. Right-clicking in the table and clicking Add
4. Configuring the generation information for the specification limit
Specification limits are boundaries that indicate where a product must perform. They are based
on customer requirements, process capability data, and relevant industry standards or
regulations.
Specification limits can be one-sided or two-sided. A one-sided specification sets a limit at only
one point for the product to be accepted or rejected; for example, for defects in a product there
can be only an upper specification limit.
Specification limits are the targets set for a product or process by the customer or by market
performance; often, the Voice of the Customer is the input for these criteria. In other words, they
are the intended result of the metric that we measure.
Customers set the limits (upper and lower) on the product characteristics that define where the
product works and where it does not. If the product falls outside these limits, assume that the
customer will reject it. Specification limits are related to product design; hence, set these limits
in the design phase of the product life cycle.
Example: A customer would expect no less than 100 M&Ms to be in a packet before being
dissatisfied.
The customer defines the limits at which the losses due to variation equal the product's benefit.
Generally, these values are drawn on the histogram. If the product falls between the USL and
LSL, then it meets the customer's requirement. Together with the process spread, these limits
determine the process capability and the sigma value.
2 marks
Process stability is a key concept in statistical process control (SPC) and refers to a process's ability
to maintain consistency and predictability over time:
Example: A customer would wait in line at a drive-through for 7 minutes before being
dissatisfied.
Example: A customer would expect no less than 100 M&Ms to be in a packet before being
dissatisfied.
Common Cause Variations: We call the inherent process variations common cause variations.
Assignable Cause Variations: When the process suffers from an external disturbance or a
significant drift, we call these disturbances assignable cause variations or special cause
variations.
8. Define process?
A process is a set of sequential activities that convert the input to the desired output.
9. Define stability?
Process stability is the ability of the process to perform within predictable limits; in other
words, the stability of a process refers to its predictability.
We cannot consider an unstable process for improvement under Six Sigma DMAIC, SPC, or any
other initiative.
Unit-V
Acceptance sampling
• How it works
A random sample of products is selected and tested to determine the quality of the entire batch.
• When it's used
It's often used in industries with mass production and set procedures, when testing every item
would be too costly, time-consuming, or could damage the products.
• How it's used
It's usually done as products leave the factory, or sometimes even within the factory.
• When it was developed
It was developed during World War II by Harold Dodge, a veteran of the Bell Laboratories
quality assurance department, and was originally used by the U.S. military to test bullets.
• Types of acceptance sampling
There are two main types of acceptance sampling: sampling by attributes and sampling by
variables. Sampling by attributes is more common, while sampling by variables is more
complicated.
• Risks involved
There are two types of risk that must be considered: the probability of rejecting a good lot
(producer's risk) and the probability of accepting a bad lot (consumer's risk).
• Visual representation
An Operating Characteristics (OC) Curve can be used to visually evaluate a sampling plan and
understand the probability of accepting a lot of varying quality levels.
Acceptance sampling is a quality control technique that uses statistical methods to determine
whether a batch of items should be accepted or rejected.
Definition:
Acceptance sampling is a technique which deals with the acceptance or rejection of a lot or
process based upon the results obtained from a random sample or samples taken from the lot. If
the items are judged good or bad by inspection for the presence or absence of some attribute
characteristic, the inspection is called attribute inspection.
In this case the quality of a lot is defined by the sample fraction nonconforming (fraction
defective). Acceptance sampling is preferred to 100% inspection in the following cases:
1. When the cost of inspection is high and the loss arising from passing defective items is not
too great.
2. When the inspection units are costly or the testing is destructive.
3. To maintain good quality: if lots are rejected often, the producer is forced to improve the
production process. Hence acceptance sampling indirectly improves the quality of the products.
4. To protect the consumer against the acceptance of bad lots, and the producer against the
rejection of good lots. The consumer is given long-run protection against poor product. It
minimizes the cost of inspection and administration, and it provides a basis for action with
regard to future production.
Comparison between 100% inspection and sampling inspection:
100% inspection:
1. 100% inspection is not efficient.
2. It is subject to human errors arising from fatigue and monotony; these errors cannot be
quantified.
3. It is not practicable for mass-production components.
4. It is not feasible where destructive testing is involved.
5. It is costly, time-consuming and laborious.
6. It does not develop an attitude of quality or pressure for quality.
Sampling inspection:
1. Because of the low quantities involved, inspection will be efficient.
2. It is subject to sampling errors, which can be quantified and controlled.
3. It is practicable for mass-production components.
4. It is the only alternative where destructive testing is involved.
5. It is cheap, quick and easy.
6. It develops pressure for quality improvement by rejection of entire lots on the basis of
findings in samples.
OC curves are used in many industries, including manufacturing and food safety. Quality teams
can examine a sampling method's ability to differentiate between good and poor batches by
studying the OC curve.
This insight is important when deciding whether to accept or reject shipments, as well as when
tuning sampling plans to achieve desired standards and balanced risks.
OC curves simply map the probability of passing inspection on the y-axis against the percentage
of defects on the x-axis. Their position and shape deliver valuable clues about performance in
various scenarios.
The operating characteristic (OC) curve is a graphical representation that depicts the probability
of accepting or rejecting a lot based on the sampling plan and the quality level of the product or
process. It is a fundamental tool in acceptance sampling and quality control.
The OC curve plots the probability of acceptance on the y-axis against the quality level or percent
defective on the x-axis.
The quality level represents the fraction or percentage of defective items in the lot or process
output.
The shape of the OC curve provides valuable insights into the performance of the sampling plan.
An ideal OC curve has a sharp, vertical drop from a high probability of acceptance to a low
probability near the acceptable quality level (AQL). This indicates that the sampling plan can
effectively discriminate between good and bad quality lots.
1. Producer’s Risk (α): This is the probability of rejecting a lot when the quality level is equal
to or better than the AQL. It is represented by the point where the OC curve intersects the
acceptable quality level line.
2. Consumer’s Risk (β): This is the probability of accepting a lot when the quality level is
equal to or worse than the rejectable quality level (RQL). It is represented by the point
where the OC curve intersects the rejectable quality level line.
The OC curve helps balance the risks between the producer (supplier) and the consumer
(customer). A good sampling plan aims to minimize both the producer’s and consumer’s risks
while keeping the sample size and inspection costs reasonable.
The operating characteristic (OC) curve is a graphical representation that displays the probability
of accepting a lot as the percentage of defective or nonconforming units increases.
Constructing an OC Curve
The first step is determining the appropriate sample size and acceptance number for the sampling
plan being used (single, double, multiple, etc.).
The sample size (n) is the number of units selected from the lot for inspection. The acceptance
number (c) is the maximum allowable number of defective units in the sample. These are chosen
based on the desired producer’s risk (α) and consumer’s risk (β) levels.
Calculating Probabilities
Once n and c are set, the probability of acceptance Pa(p) can be calculated for various levels of
percent defective (p) using the appropriate probability distribution model. For large lots this is
typically the binomial distribution,

Pa(p) = Σ (d = 0 to c) C(n, d) · p^d · (1 − p)^(n − d),

although the Poisson approximation or the hypergeometric distribution (for sampling from small
lots) may be used instead. This gives a set of coordinates (p, Pa(p)) that can be plotted to form
the OC curve.
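A minimal sketch, assuming a hypothetical single sampling plan with n = 50 and c = 2 and the binomial model above:

```python
import numpy as np
from scipy import stats

# Hypothetical single sampling plan: sample n units, accept if defects <= c
n, c = 50, 2

# Pa(p) = P(d <= c) where d ~ Binomial(n, p)
p = np.linspace(0.0, 0.15, 16)          # fraction defective on the x-axis
pa = stats.binom.cdf(c, n, p)           # probability of acceptance (y-axis)

for pi, pai in zip(p, pa):
    print(f"p={pi:.2f}  Pa={pai:.3f}")
```

Plotting these (p, Pa) pairs gives the OC curve directly; changing n or c and regenerating the points shows how the curve, and therefore the producer's and consumer's risks, move.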
The x-axis represents the percent defective or nonconforming units (p). The y-axis represents the
probability of acceptance Pa(p).
By plotting the calculated probability points, a smooth OC curve can be drawn connecting the
points.
The curve has a reverse S-shape: Pa(p) is close to 1 when p is near 0, passes through 1 − α (one
minus the producer's risk) at the AQL, and falls to the consumer's risk β at the RQL as p increases.
The shape and position of the OC curve convey important information about the discriminatory
power of the sampling plan between good and bad lots.
Ideally, the curve should have a relatively steep slope transitioning from the producer’s risk to the
consumer’s risk over a narrow range of percent defective values.
This allows reasonably good discrimination between acceptable and unacceptable quality levels.
The operating characteristic (OC) curve is closely tied to the sampling plan used for inspection.
Different sampling plans will produce different OC curves. The three main types of sampling plans
are:
Single Sampling Plan
In a single sampling plan, a single random sample of n units is taken from the lot and inspected.
The number of defective units (d) in the sample is counted.
If d is less than or equal to an acceptance number c, the entire lot is accepted. Otherwise, it is
rejected.
The single sampling plan is defined by the sample size n and acceptance number c. The OC
curve shows the probability of acceptance for different levels of percent defective in the lot.
Double Sampling Plan
A double sampling plan involves two potential samples. An initial sample of n1 units is taken. If
the number of defects d1 is less than or equal to an acceptance number c1, the lot is accepted. If
d1 is greater than a rejection number r1, the lot is rejected. Otherwise, a second sample of n2 units
is taken.
The total number of defects d1 + d2 is then compared to an acceptance number c2. If it is less than
or equal to c2, the lot is accepted, otherwise it is rejected.
The double sampling plan requires more parameters: n1, c1, r1, n2, c2. The OC curve shows the
probability of acceptance for different quality levels.
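A minimal sketch of the acceptance probability for a double sampling plan, using the binomial model and hypothetical plan parameters:

```python
from scipy import stats

def pa_double(p, n1, c1, r1, n2, c2):
    """Probability of accepting the lot under a double sampling plan,
    assuming defect counts are binomial with fraction defective p."""
    pa = stats.binom.cdf(c1, n1, p)      # accepted outright on the first sample
    # A second sample is taken when c1 < d1 < r1; accept if d1 + d2 <= c2
    for d1 in range(c1 + 1, r1):
        pa += stats.binom.pmf(d1, n1, p) * stats.binom.cdf(c2 - d1, n2, p)
    return pa

# Hypothetical plan: n1=32, c1=1, r1=4, n2=32, c2=4
print(f"Pa at p=2%: {pa_double(0.02, 32, 1, 4, 32, 4):.3f}")
```

The first term covers lots accepted on the first sample; the loop covers the borderline cases where a second sample settles the decision, which is exactly the extra discrimination the text describes.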
A multiple-sampling plan extends the double-sampling concept to allow for more than two samples
before deciding on lot acceptance or rejection. It provides additional discrimination for borderline
cases.
The OC curve is used to evaluate the discriminatory power and other characteristics of these
different sampling plans for a given situation. Appropriate sampling plans can be selected by
studying their OC curves.
Although a sampling strategy may satisfy the criteria used by quality assurance managers to
determine whether a particular batch meets the recommended quality specifications, it is often
difficult to assess how well the strategy distinguishes between acceptable and defective units at
intermediate quality levels. As a result, sampling strategies are typically represented graphically
using OC curves.
The operating characteristic (OC) curve is a graphical illustration that indicates the probability
of acceptance of the production batch versus the percentage of defective units: the y-axis plots
the probability of acceptance, while the x-axis represents the percentage of defective units.
Variables sampling plans use the actual measurements of sample products for decision making,
rather than classifying products as conforming or nonconforming as in attributes sampling plans.
Variables sampling plans are more complex to administer than attributes plans and thus require
more skill.
MIL-STD-105
MIL-STD-105 was a United States defense standard that provided procedures and tables for
sampling by attributes, based on the sampling inspection theories and mathematical formulas of
Walter A. Shewhart, Harry Romig, and Harold F. Dodge. It was widely adopted outside of
military procurement applications.
The last revision was MIL-STD-105E; it has been carried over in ASTM E2234.
Sampling plans are typically set up with reference to an acceptable quality level, or AQL. The
AQL is the baseline requirement for the quality of the producer's product. The producer would
like to design a sampling plan such that the OC curve yields a high probability of acceptance at
the AQL. On the other side of the OC curve, the consumer wishes to be protected from accepting
poor quality from the producer, so the consumer establishes a criterion, the lot tolerance percent
defective or LTPD. Here the idea is to accept poor-quality product only with a very low
probability. Mil. Std. plans have been used for over 50 years to achieve these goals.
Standard military sampling procedures for inspection by attributes were developed during World
War II. Army Ordnance tables and procedures were generated in the early 1940s, and these grew
into the Army Service Forces tables. At the end of the war, the Navy also worked on a set of tables.
In the meantime, the Statistical Research Group at Columbia University performed research and
produced many outstanding results on attribute sampling plans.
These three streams combined in 1950 into a standard called Mil. Std. 105A. It has since been
modified from time to time and issued as 105B, 105C and 105D. Mil. Std. 105D was issued by the
U.S. government in 1963. It was adopted in 1971 by the American National Standards Institute as
ANSI Standard Z1.4 and in 1974 it was adopted (with minor changes) by the International
Organization for Standardization as ISO Std. 2859. The latest revision is Mil. Std. 105E, issued
in 1989.
These three similar standards are continuously being updated and revised, but the basic tables
remain the same. Thus the discussion that follows of the germane aspects of Mil. Std. 105E also
applies to the other two standards
Mil. Std. 105E offers three types of sampling plans: single, double and multiple plans.
The steps in the use of the standard can be summarized as follows:
1. Decide on the AQL.
2. Decide on the inspection level.
3. Determine the lot size.
4. Enter the table to find the sample size code letter.
5. Decide on the type of sampling to be used (single, double or multiple).
6. Enter the proper table to find the plan to be used.
7. Begin with normal inspection, switching to tightened or reduced inspection as dictated by the
switching rules.
MIL-STD-414
MIL-STD-414 provides sampling plans for inspection by variables (similar to the ANSI/ASQ
Z1.9, BS 6002 and ISO 3951 tables) for a given lot size and AQL. Plans can be designed for both
known and unknown variability and for all three types of inspection, and the estimated percent
defective in a lot can be calculated from the known or estimated variability.
Military Standard 414 has sampling plans for a set of pre-determined AQLs, ranging from 0.04%
to 15.0%. If the AQL you are using does not match any of the standard pre-determined AQLs, it
must first be converted to one of the standard values before designing a sampling plan.
MIL-STD-414, Military Standard: Sampling Procedures and Tables for Inspection by Variables
for Percent Defective (11 June 1957). This standard establishes sampling plans and procedures
for inspection by variables for use in Government procurement, supply and storage, and
maintenance inspection operations. When applicable, this standard shall be referenced in the
specification, contract, or inspection instructions, and the provisions set forth herein shall govern.
IS 2500 STANDARD
• IS 2500-1 (2000): This standard includes fractional acceptance number plans, reduced plans, and
multiple sampling plans. It also recommends using the standard in conjunction with ISO 2859-0,
which includes illustrative examples.
• IS 2500-2 (1965): This standard discusses the types of inspection and the risk of rejecting lots of
a given quality.
• IS 2500-3 (1995): This standard identifies plans by the lot size and limiting quality (LQ). It also
provides the sample size (n) and acceptance number (Ac) in Table A.
A variables sampling plan uses actual measurements of sample products to make decisions, rather
than classifying products as conforming or nonconforming. This type of plan is more complex than
an attributes plan, but it can provide equal protection with a smaller sample size.
2 Marks
1. Define acceptance sampling?
Acceptance sampling is a quality control technique that uses statistical sampling to determine
whether to accept or reject a batch of products.
2. Define OC curve?
The operating characteristic (OC) curve is a graphical illustration that indicates the probability
of acceptance of the production batch versus the percentage of defective units: the y-axis plots
the probability of acceptance, while the x-axis represents the percentage of defective units.
3. Define sequential sampling plan?
A sequential sampling plan extends the multiple-sampling concept: units are inspected one at a
time, and after each unit the cumulative results are used to decide whether to accept the lot,
reject it, or continue sampling. It provides additional discrimination for borderline cases.
Limitations of OC curves:
▪ The OC curve is specific to the sampling plan used; a new curve must be generated for different
sampling plans.
▪ For very small sample sizes, the discrete nature of the OC curve may be difficult to interpret
visually.
The management sets an AQL, which is the acceptable proportion of a lot that can be
defective. For example, an AQL could be 2 defects in a lot of 200, or 1%.
9. Define average total inspection (ATI)?
The ATI is the average number of units that will be inspected for a given incoming quality level
and probability of acceptance. For a single sampling plan in which rejected lots are 100%
inspected, ATI = n + (1 − Pa)(N − n), where N is the lot size.
10. Define probability?
Probability is a key factor in acceptance sampling, but it is not the only factor. For example, if a
company tests 10 units out of a million and finds one defect, it might assume that 100,000 of the
million are defective, but this could be inaccurate.