
OIE354 - QUALITY ENGINEERING
UNIT-I
INTRODUCTION
Quality engineering is a discipline that focuses on the principles and practices of
ensuring that products and services meet or exceed customer needs. It involves analyzing,
developing, and implementing systems to achieve this goal. Quality engineering can be
applied to any type of product development.
Quality Engineering consists of analysis methods and the development of systems to ensure that products or services are designed, developed, and manufactured to meet or exceed customer requirements.

Quality Dimensions:
Garvin proposes eight critical dimensions or categories of quality that can serve as a
framework for strategic analysis: Performance, features, reliability, conformance,
durability, serviceability, aesthetics, and perceived quality.

1. Performance
Performance has to do with the expected operating characteristics of a product or service.
Does a service or product do what it's supposed to do? The primary operating characteristics involve measurable elements, which makes it easier to measure performance objectively.
Some of the performance requirements are related to subjective preferences, but when
they are the preference of almost every consumer they become as powerful as an
objective requirement.

2. Features
Where the 'performance' dimension covers a product's primary operating characteristics, features are the characteristics that determine how appealing a product or service is to the consumer.

Such features are the extras of a product or service and complement its basic functioning.
This means that those designing a product or service should be familiar with the end users and stay up to date on developments in consumer preferences.

Often it’s difficult to see a clear line between primary performance attributes and
additional features.

An example of features in a service is offering free drinks on a plane. An example of features in a product is adding a drink cooler in a car.

3. Reliability
Reliability is usually closely related to performance. The focus of the dimension
reliability is more on how long a product will perform consistently according to the
specifications of that product. This is important to customers who need the product to
work without any errors and contributes to a brand or company’s image.

The reliability dimension reflects the probability of a product failing within a specific period of time. To measure reliability, you measure the time to the first failure, the time between failures, and the failure rate per unit of time.
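As a rough illustration (not from the notes), these three measures can be computed directly. The sketch below assumes a single repairable unit, hypothetical failure times in hours, and a constant-failure-rate simplification.

```python
# Minimal sketch: basic reliability metrics from hypothetical failure data.
# Assumes one repairable unit observed for `observation_hours`, with a
# constant-failure-rate simplification (an assumption, not from the notes).

failure_times = [120.0, 410.0, 655.0, 980.0]  # hours at which failures occurred
observation_hours = 1200.0

ttff = failure_times[0]                                 # time to first failure
mtbf = observation_hours / len(failure_times)           # mean time between failures
failure_rate = len(failure_times) / observation_hours   # failures per hour

print(f"Time to first failure: {ttff:.0f} h")
print(f"MTBF: {mtbf:.0f} h")
print(f"Failure rate: {failure_rate:.4f} failures/h")
```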

These measures are usually applied to products that are expected to last a long time, and less so to products that are used immediately and for a short period.

Usually when the costs for maintenance or downtime increase, reliability as a dimension
of quality becomes more important to consumers.

Example
For example, for parents with children who depend on a car, the reliability of the car
becomes an important element. Also for most farmers, reliability is a key attribute.
This group of consumers is sensitive to downtime, especially during the shorter harvest
seasons. For a farmer, reliable equipment can be crucial in preventing spoiled crops.
Also, the reliability of computers is key for many consumers.

4. Conformance
This dimension is closely related to the performance and features dimensions. Conformance is about the extent to which the product or service conforms to its specifications.

Does it function and have all the features as specified? Every product and service comes with some form of specification.

Example
For example, the materials used or the dimensions of a product can be specified and set
as a target specification for the product.

Something that can also be defined in the specification is the tolerance, which states how much a product is allowed to deviate from the target. A drawback of this approach is that producers may pay less attention to hitting the target exactly, as long as they stay within the tolerance limits.
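As a small illustration (not part of the original notes), the target-and-tolerance idea can be expressed in a few lines of Python; the target, tolerance, and measurement values below are hypothetical.

```python
# Minimal sketch: conformance check against a target with a symmetric tolerance.
# Target, tolerance, and measurements are hypothetical example values.

target = 25.00     # target dimension (e.g., mm)
tolerance = 0.25   # allowed deviation from the target

measurements = [24.81, 25.10, 25.27, 24.99, 25.24]

for m in measurements:
    status = "within" if abs(m - target) <= tolerance else "outside"
    print(f"{m:.2f} mm -> {status} tolerance limits")
```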

When it comes to service businesses, conformance is measured by focusing on accuracy, the number of processing errors, unexpected delays, and other common mistakes.

5. Durability
Out of the eight dimensions of quality, the dimension durability is about how long a
product will last or perform and under what conditions it will perform. Estimating the
length of a product’s life becomes complicated when it’s possible to repair the product.

For such products, durability is counted until it is no longer economically worthwhile to keep using them, that is, when repairs become too frequent and the cost of repairing rises.

Customers must then weigh the cost of future repairs against the cost of investing in a new product together with its operating expenses. In other cases, durability is measured by the amount of use someone gets from a product before it stops working and repair is impossible.

This is the case, for example, when a light bulb burns out and must be replaced by a new one; repairing it is impossible.
6. Serviceability
Serviceability is the dimension of quality that reflects whether the product is relatively easy to maintain and repair. It becomes important for consumers who focus on the total cost of ownership as a criterion for selecting a product.

Serviceability reflects on how easy it is for the consumer to obtain repair service, how
responsive the service personnel is, and how reliable the service is. It also focuses on the
speed with which a product can be repaired and also the competence and behaviour of the
personnel.

Customers' concerns are mainly about the product developing defects, but also about how long it takes for the product to be repaired. What matters is not only whether a product can be fixed, but also how satisfied the customer is with the company's complaint-handling procedures.

This can affect how the customer evaluates the service quality and eventually the
company’s reputation. Each company has a different way of dealing with complaint
handling and not every company attaches the same level of importance to serviceability.

Example
For example, there are companies that do their best to resolve the complaints they
receive, while others offer no service at all when it comes to complaints. One way to improve a company's serviceability is to set up a toll-free phone number for its helplines.

7. Aesthetics
The aesthetics dimension is about the way a product looks and how this contributes to the company's or brand's identity. Aesthetics is not only about how a product looks but also about how it feels, tastes, smells, or sounds.

This is clearly determined by individual preference and personal judgement; however, there are ways to measure this dimension, because clear patterns can be found in the way consumers rank products based on personal taste. Still, the aesthetics of a product is not as universal as the 'performance' dimension.

Not all people prefer the same taste or smell, which makes it impossible to please every
single customer. For this reason, companies end up searching for a niche.

8. Perceived Quality

The perception of something is not always reality: a product or service can score highly on each of the other seven dimensions of quality and still receive a bad rating as a result of negative perceptions among customers or the public.
Customers sometimes lack information about a service or product and, when comparing brands, rely on indirect measures such as reviews. This is usually the case for a product's durability, because in most cases it cannot be observed directly.

Also, reputation plays a significant role when it comes to perceived quality. It is easier for a customer to trust the quality of a company's new product when its established products have received positive reviews.

Definition of Quality Engineering

The definition of Quality Engineering is:

"Quality Engineering applies total quality management to software production through a systemic approach that builds Quality at Speed capabilities for sustainable business speed."

Quality engineering philosophy


The underlying logic of Quality Engineering is summarized in the mantra "Build Better, Build Faster," which represents the need to focus on building better in every aspect of software production to unlock sustainable speed.

The paradigm shift of Quality Engineering relies on the following principles:

1. Business depends on software Quality and Speed
2. Quality embraces the entire software production system
3. Speed is sustainable only through built-in quality.

The north star of sustainable business speed requires focus on the second point to make
quality a first-class citizen in the entire software production system. It consequently requires
setting the boundaries of Quality Engineering.
With a systemic view, Quality Engineering acts on the software production system, taking digital business ideas as input and producing as output a software increment that is actually in operation and, ideally, valuable.

In the bigger picture, the left and right boundaries of Quality Engineering start once there is a minimal software production system, usually at a start-up stage with 2-3 people, and extend through the remaining business maturity stages, from Series A to Series E, with about 500 FTEs dedicated to software production.

At the bottom, Quality Engineering identifies the necessary software production elements but
doesn’t detail their complete implementation, left to the existing body of knowledge in each
area. On the upper boundary, Quality Engineering does not cover what the software
production system is used for, that is the features.

Quality Engineering body of knowledge

From an implementation perspective, Quality Engineering requires an understanding of the big picture to discover systemic issues and architect focused solutions that effectively solve key causes and limiting factors.
The key areas of knowledge of Quality Engineering are:

• Lean, Agile, DevOps for consolidating the software production areas
• Systemic thinking and architecture as core to the systemic approach
• Organizational and human behavior for the actors' collaboration.

Quality Engineering Building Blocks


Building blocks are essential elements of Quality Engineering that collectively pave the way for effective software production.

Quality Engineering structures these 3 core building blocks:

1. Quality at Speed Pains
2. MAMOS System Areas
3. Outputs and Outcomes.
Quality at Speed Pains
The first two critical dimensions are Quality Pains and Speed Pains. Quality Pains encompass
challenges related to the different facets of software production quality, while Speed Pains
address the hurdles of achieving rapid and efficient software delivery at different levels of the
organization.

Quality Pains are more subjective but speak more easily to stakeholders who are not necessarily familiar with the internals of software production. They can take different forms, from dependency on a single person to a lack of usability.

Speed Pains, being more factual, are more easily supported by metrics and numbers. The challenge in that case lies more in consolidating the data and framing its value correctly, in perspective and in comparison with others.

MAMOS System Areas


At the heart of Quality Engineering is the MAMOS software production system that
encapsulates the key principles and practices for building Quality at Speed capabilities,
ensuring alignment and coherence across various aspects of software development through
three levels.
MAMOS areas
MAMOS structures the software production system into key areas that together represent the whole, while allowing focus within each context with cohesion.

The 5 areas of MAMOS are:

1. Methods: streamline collaboration for lean value delivery
2. Architecture: structure the quality-driven technology platform
3. Management: enable actors and teams to deliver value
4. Organization: align teams structure and boundaries for quality
5. Skills: ensure competencies availability to support capabilities.

Quality Inspection
In quality engineering, inspection is a process that involves measuring, testing, or
examining a product's characteristics and comparing the results to specified
requirements. The goal of inspection is to determine if the product meets pre-established
quality standards. Inspection is a preventative measure that can help detect potential
defects in products or services before they reach customers. It's an important part of
manufacturing and delivering products to customers with consistent quality.
Quality Inspection is an activity of checking, measuring, or testing one or more product
or service characteristics and comparing the results with the specific requirements to
confirm compliance. An efficient inspection process standardizes quality, eliminates
paper documents, and increases efficiency on the floor.

4 Types of Quality Inspection

There are, in total, 4 types of inspection in quality control: pre-production inspection, during-production inspection, pre-shipment inspection, and container loading/unloading
inspections. As the name implies, each of the quality control methods is carried out at a
different stage – and each of them has its own purpose in quality control and supply chain
management. Depending on the product, experience with your supplier and other factors,
one or all of these steps may apply to your business needs.

Pre-Production Inspection (PPI)

The Pre-Production Inspection (PPI) is conducted before the production process begins
and helps to assess the quantity and quality of the raw materials and components and
whether they conform to the relevant product specifications.

A PPI is beneficial when you work with a new supplier, especially if your project is a
large contract with critical delivery dates. This inspection can help to reduce or eliminate communication issues between you and your supplier regarding production timelines, shipping dates, and quality expectations.

During Production Inspection (DPI)

During-production inspection (DPI), also known as DUPRO, is a quality control inspection conducted while production is underway. This step is particularly useful for products that are in continuous production and have strict requirements, and/or when quality issues have been found during an earlier PPI, prior to manufacturing.

DPI inspections take place when only 10-15% of units are completed so that any
deviations can be identified, feedback given, and any defects can be re-checked to
confirm they have been corrected. It enables you to confirm that quality, as well as
compliance with specifications, is being maintained throughout the production process. It
also provides early detection of any issues requiring correction, thereby reducing delays
and rework.

Pre-Shipment Inspection (PSI)

Pre-shipment inspections (PSI) are an important step in the quality control process and
the method for checking the quality of goods before they are shipped. PSI ensures that
production complies with the specifications of the buyer. This inspection process is
conducted on finished products when at least 80% of the order has been packed for
shipping. Random samples are selected and inspected for defects against the relevant
standards and procedures.

Container Loading/Unloading Supervision (LS)

Container loading and unloading inspections ensure your products are loaded and
unloaded correctly. Inspectors will supervise throughout the whole process and ensure
your products are handled professionally to guarantee their safe arrival to their final
destination.

This inspection will usually take place at your chosen factory while the cargo is being
loaded into the shipping container and at the destination once the products have arrived
and are being unloaded. This process includes evaluating the condition of the shipping
container and verifying all the product information, quantities, and packaging
compliance.

Quality Control
Quality control (QC) is a set of procedures in quality engineering that ensures a product
or service meets quality standards by monitoring results after development and
production:
• Planning: Carefully plan to ensure quality
• Equipment: Use the right equipment
• Inspection: Continuously inspect for defects and issues
• Corrective action: Take action to fix problems, such as repairing or eliminating defective
units, or purchasing better quality materials

QC is a reactive process that focuses on fixing existing defects, while quality engineering
(QE) is proactive and embeds quality considerations throughout the SDLC. QE focuses
on introducing new quality methodologies and how they impact the company, including
cultural and procedural changes.
QC is a subset of quality assurance (QA), which is more related to how a process is
performed or a product is made. QC is the inspection aspect of quality management,
while QA is the confirmation that requirements have been met.
QC can help engineering projects be successful by reducing rework, costs, and issues,
which can save time and money. It can also help maintain team morale, which can make
teams more motivated and productive.
Key Components of Quality Control
Key components of Quality Control may include:

1. Inspection: Regularly examining products, materials, or services to identify defects, non-compliance, or deviations from quality standards.

2. Testing: Conducting various tests and measurements to assess the performance, functionality, or characteristics of products or services.

3. Statistical Process Control (SPC): Employing statistical techniques to monitor and control the production processes, ensuring that they remain within acceptable quality limits.

4. Documentation and Records: Keeping detailed records of inspections, tests, and corrective actions taken to maintain traceability and accountability.

5. Corrective Action: Implementing appropriate measures to address any identified quality issues and prevent their recurrence.

6. Training and Education: Providing employees with the necessary skills and knowledge to maintain quality standards effectively.

7. Continuous Improvement: Constantly analyzing data and feedback to identify areas for improvement and enhancing the overall quality management system.

Quality Control is closely related to another quality management concept called Quality
Assurance (QA). While QC focuses on detecting and correcting defects, QA concentrates
on preventing them from occurring in the first place by setting up robust processes and
procedures.
Together, QC and QA form the backbone of an organization's quality management
system, helping to ensure that products and services consistently meet or exceed
customer expectations and regulatory requirements.

Quality Control Process

Normally, quality testing is part of every stage of a manufacturing or business process. Employees frequently begin testing using samples collected from the production line, finished products, and raw materials. Testing during various production phases can help identify the cause of a production problem and the necessary corrective actions to prevent it from happening again.

Customer service reviews, questionnaires, surveys, inspections, and audits are a few
examples of quality testing procedures that can be used in non-manufacturing businesses.
A company can use any procedure or technique to ensure that the final product or service
is safe, compliant, and meets consumer demands.

Types of Quality Control

Just as quality is a relative word with many interpretations, quality control itself doesn’t
have a uniform, universal process. Some methods depend on the industry. Take food and
drug products, for instance, where errors can put people at risk and create significant
liability. These industries may rely more heavily on scientific measures, whereas others
(such as education or coaching) may require a more holistic, qualitative method.

At its core, quality control requires attention to detail and research methodology.

So, what is quality control? There is a wide range of quality control methods, including:
Control Charts:

A graph or chart is used to study how processes are changing over time. Using statistics,
the business and manufacturing processes are analyzed for being “in control.”

Process Control:

Processes are monitored and adjusted to ensure quality and improve performance. This is
typically a technical process using feedback loops, industrial-level controls, and chemical
processes to achieve consistency.

Acceptance Sampling:

A statistical measure is used to determine whether a batch or sample of products meets the overall manufacturing standard.

Process Protocol:

A mapping methodology that improves the design and implementation processes by creating evaluative indicators for each step.

There are other quality control factors to consider when selecting a method in addition to
types of processes.

Some companies establish internal quality control divisions to monitor products and services, while others rely on external bodies to track products and performance. The choice of controls may depend largely on
the industry of the business. Due to the strict nature of food inspections, for example, it
may be in a company’s best interest to sample products internally and verify these results
in a third-party lab.

Why Is Quality Control Important? What Are the Benefits?

Quality Control (QC) is essential for various reasons, and its importance lies in the
numerous benefits it brings to both businesses and consumers. Here are some key reasons
why QC is crucial:

1. Customer Satisfaction: QC ensures that products and services meet or exceed customer expectations, leading to higher satisfaction levels and increased customer loyalty.

2. Defect Prevention: By identifying and correcting issues early in the production or service delivery process, QC helps prevent defects, reducing the likelihood of expensive recalls or rework.

3. Cost Reduction: Implementing QC measures can lead to reduced waste, lower production costs, and improved operational efficiency, contributing to overall cost savings.

4. Compliance and Regulations: QC ensures that products and services adhere to industry standards and regulatory requirements, avoiding legal issues and penalties.

5. Brand Reputation: Consistent high-quality products or services build a positive brand image, enhancing the company's reputation and competitiveness in the market.

6. Increased Efficiency: QC optimizes processes and identifies areas for improvement, leading to increased productivity and streamlined operations.

7. Risk Mitigation: Through rigorous testing and inspections, QC helps identify potential risks and hazards, enabling businesses to address them proactively.

8. Continuous Improvement: QC encourages a culture of continuous improvement, where organizations strive to enhance their products, services, and processes constantly.

9. International Competitiveness: High-quality products can open doors to global markets, increasing a company's competitiveness on an international scale.

10. Customer Retention and Loyalty: Satisfied customers are more likely to remain loyal
and recommend the brand to others, contributing to long-term business success.

Quality control engineers have many responsibilities, including:


• Developing and implementing standards
Create quality standards and processes for inspection and evaluation. They also
establish detailed guidelines for what to check.
• Testing
Inspect and evaluate new suppliers' materials, test products, and plan, execute, and
oversee inspection and testing of products. They also lead and direct quality technicians
in product testing and recording of test results.
• Analyzing
Analyze quality data, summarize findings, and conduct root cause analysis during
issues. They also investigate product complaints and reported quality issues to ensure
closure in accordance with company guidelines and external regulatory requirements.
• Improving
Review systems and processes to develop continued improvements and
efficiencies. They also analyze problems reported and develop improvements to
overcome them.
• Training
Train company staff on quality standards and practices. They also check that employees
are trained to the proper quality standard.
• Collaborating
Collaborate with operations managers to identify opportunities for improvements to
workflow and controls. They also work closely with external partners such as suppliers
and customers.
• Reporting
Generate reports on products or results and create reports for senior staff members
based on quality documentation.

Quality Assurance

Quality assurance (QA) is any systematic process of determining whether a product or service meets specified requirements.

QA establishes and maintains set requirements for developing or manufacturing reliable products. A quality assurance system is meant to increase customer confidence and a company's credibility, while also improving work processes and efficiency, and it enables a company to better compete with others.

The ISO (International Organization for Standardization) is a driving force behind QA practices and mapping the processes used to implement QA. QA is often paired with the ISO 9000 international standard. Many companies use ISO 9000 to ensure that their quality assurance system is in place and effective.

The concept of QA as a formalized practice started in the manufacturing industry, and it has since spread to most industries, including software development.
Importance of quality assurance

Quality assurance helps a company create products and services that meet the needs,
expectations and requirements of customers. It yields high-quality product offerings that
build trust and loyalty with customers. The standards and procedures defined by a quality
assurance program help prevent product defects before they arise.

Quality assurance methods

Quality assurance utilizes one of three methods:

Failure testing

Failure testing continually tests a product to determine if it breaks or fails. For physical
products that need to withstand stress, this could involve testing the product under heat,
pressure or vibration. For software products, failure testing might involve placing the
software under high usage or load conditions.
Statistical process control (SPC)

SPC is a methodology based on objective data and analysis, developed by Walter Shewhart at Western Electric Company and Bell Telephone Laboratories in the 1920s and 1930s. It uses statistical methods to manage and control the production of products.

Total quality management

Total quality management (TQM) applies quantitative methods as the basis for continuous improvement. TQM relies on facts, data, and analysis to support product planning and performance reviews.

Examples of quality assurance

The following are a few examples of quality assurance in use by industries:

• Manufacturing, the industry that formalized the quality assurance discipline. Manufacturers need to ensure that assembled products are created without defects and meet the defined product specifications and requirements.

• Food production, which uses X-ray systems, among other techniques, to detect physical
contaminants in the food production process. The X-ray systems ensure that contaminants
are removed and eliminated before products leave the factory.

• Pharmaceutical, which employs different quality assurance approaches during each stage of a drug's development. Across the different stages, the QA processes include reviewing documents, approving equipment calibration, reviewing training records, reviewing manufacturing records, and investigating market returns.

Quality planning
Quality planning in quality engineering is a structured process for defining quality
standards, practices, and specifications for a product, service, project, or contract. It's an
essential step before starting a project and involves establishing the project, determining
the steps to take, and coordinating quality-related activities. The goal is to ensure that the
result meets customer needs.
• Benefit/Cost Analysis:
It is the process of estimating costs and benefits of various project quality
management activities. The main benefit of meeting quality requirements is less rework, resulting in higher productivity, lower costs, and increased customer satisfaction. The main cost of meeting quality requirements is the cost associated with
quality management activities. The cost of quality is generally of two types:
o Cost of Conformance to Requirements: Cost of completing the project
work to satisfy the expected level of quality and project scope. Example:
Prevention costs and appraisal.
o Cost of Non-Conformance: It includes costs due to some kind of failure. Internal failure costs are incurred before the customer receives the product; external failure costs arise after the customer receives the product. These costs are typically very high. Example: start-up costs, project-related costs, continuous costs.
• Benchmarking: It involves comparing project practices with those of other projects
within the same organization or with other companies. It is done to generate ideas for
improvement and provide a basis for measurement.
• Creating a Flowchart: A flowchart is any diagram that shows how various
components of a system are interrelated. There are two types of flowcharts used in
quality management. These are as follows:
1. System or Process Flow Charts: These show how various elements of a system interrelate and depict the flow of the process through a system.
2. Cause-And-Effect Diagrams: It is also known as Ishikawa or Fishbone
diagram. It shows how variables within a process relate and how those relations
create potential problems.

Quality cost
Cost of quality (COQ) is defined as a methodology that allows an organization to
determine the extent to which its resources are used for activities that prevent poor
quality, that appraise the quality of the organization's products or services, and that result
from internal and external failures.
The Cost of Quality can be divided into four categories. They include Prevention,
Appraisal, Internal Failure and External Failure. Within each of the four categories there
are numerous possible sources of cost related to good or poor quality.

Why Is Cost of Quality Important?

CoQ isn't just a neat concept—it's a critical tool that can transform your business strategy.
It can help you spot areas where a bit of investment now can save you a lot of trouble (and
money).

But here's the thing: the goal isn't to cut your CoQ down to zero. It's about finding the
sweet spot. Spending too little on prevention and appraisal could cost you more in failure
costs later.
How To Calculate The Cost of Quality

Measuring the Cost of Quality (CoQ) is straightforward: CoQ is the sum of two major components, the Cost of Good Quality (CoGQ) and the Cost of Poor Quality (CoPQ).

CoGQ is the sum of the following:

• Prevention Costs: Money spent on activities to prevent defects. For example, if you spend $5,000 on employee training and $2,000 on a quality planning system, your total prevention costs would be $7,000.

• Appraisal Costs: Time spent evaluating your product to ensure it meets quality
standards. Suppose you spend $3,000 on beta product testing and $1,500 on
supplier assessments. Your total appraisal costs would be $4,500.

CoPQ is the sum of the following:

• Internal Failure Costs: Costs related to defects identified before reaching the
customer. If reworking defective units cost you $6,000 and scrapping useless
materials costs $2,000, your total internal failure costs would be $8,000.

• External Failure Costs: Costs arising from defects identified after the customer
receives the product. If your warranty claims total $7,000 and product returns cost
you $1,000, your total external failure costs would be $8,000.

So, if we want to put this into a formula, it would look like this:

CoQ = (Prevention Costs + Appraisal Costs) + (Internal Failure Costs + External Failure Costs)

Using the numbers from our examples, you'd calculate your CoQ as follows:
CoQ = ($7,000 + $4,500) + ($8,000 + $8,000) = $27,500
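The same calculation can be written as a short Python sketch; the figures are the example amounts used above.

```python
# Minimal sketch of the CoQ formula, using the example figures from the text.

prevention = 5_000 + 2_000         # employee training + quality planning system
appraisal = 3_000 + 1_500          # beta product testing + supplier assessments
internal_failure = 6_000 + 2_000   # rework + scrap
external_failure = 7_000 + 1_000   # warranty claims + product returns

cogq = prevention + appraisal                 # Cost of Good Quality
copq = internal_failure + external_failure    # Cost of Poor Quality
coq = cogq + copq

print(f"CoQ = ${coq:,}")  # prints: CoQ = $27,500
```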

Remember, the goal isn't to drive your CoQ to zero—it's to find the right balance. You
want to invest enough in good quality costs to reduce poor quality costs. Regularly
calculating and analyzing your CoQ can help you find this balance and increase customer
satisfaction and profitability.

Economics of quality
The economics of quality is a methodology that helps organizations improve customer
satisfaction while reducing costs. It involves tracking the costs and benefits of prevention,
appraisal, and failure (PAF) to determine the most cost-effective solution. The cost of
quality (COQ) can be further broken down into the cost of good quality (conformance)
and poor quality (non-conformance).

Here are some examples of costs that fall under the economics of quality:
• Appraisal costs
These include costs associated with testing raw materials, parts, and components, as
well as other goods purchased from outside sources. They also include inspection and
testing during production.
• Internal failure costs
These costs arise when a product or service fails to meet quality requirements before
delivery. For example, software delays can cause delays in reaching the operational
phase of a system's life, which can lead to significant losses.
The economics of quality is supported by the ISO/TR 10014 international standard:
Quality management guideline. Some courses, such as Quality Engineering Management
(QEMT) at Lambton College, teach students how to analyze and report the cost of
quality, and how to establish a cost of quality program.
The ISO/TR 10014 guideline directs management toward achieving its entrepreneurial intent while continuously increasing productivity.

The economics-of-quality methodology focuses on finding the most appropriate way to control costs while working toward the fulfillment of the company's entrepreneurial intent. It lays great stress on:

• satisfaction and loyalty of the customer
• trust in the company
• reputation of the product
• company image

It extends the basic quality management system with economic objectives. It pursues both short-term and long-term objectives, while continuously evaluating how they are being fulfilled.

The methodology components include:

• performance of cost analysis
• costs of compliance and non-compliance
• defining benefits to the customer
• factors leading to satisfaction and pleasure
• critical financial impacts
• opportunities identified
• management review
• strong orientation towards performance
• cost monitoring
• and so on.
Quality loss function
In quality engineering, a quality loss function is a graphical representation of the losses
that can occur to a product from design to shipment. It can be used to estimate costs and
improve the performance of a product, process, design, and system.

The formula for the quality loss function is L(y) = k * (y - T)^2, where:
• L(y): The loss
• y: The measured value of the quality characteristic
• T: The target value
• k: A constant that depends on the unit of measurement and the tolerance limits
The function can take several forms depending on the nature of the quality
characteristics, such as "lower-the-better" or "higher-the-better". Genichi Taguchi
developed a quality loss function that considers three cases: nominal-the-best, smaller-
the-better, and larger-the-better. The methodology for the larger-the-better case is slightly
different from the other two.
In Taguchi's model, quality is defined as the loss caused to society by the shipment of a
product. Loss includes costs of operation, failure, maintenance, and customer
dissatisfaction. The goal is to have zero defects, or to hit the target exactly.

The Taguchi Loss Function: A Brief Overview


The Taguchi Loss Function is a statistical tool used to quantify the economic loss
incurred by a product or process as it deviates from its target or optimal performance. It is
based on the premise that even small variations in quality can lead to increased costs and
reduced customer satisfaction. By understanding and minimizing these losses,
manufacturers can enhance their competitiveness and profitability.
The formula for the Taguchi Loss Function is as follows:
Loss (L) = k * (Y - T)^2
Where:

• L represents the loss due to deviation from the target.

• k is a constant representing the cost of poor quality.

• Y is the actual or observed value of a product or process parameter.

• T is the target value or desired performance level.

The loss function is quadratic, meaning that as the deviation from the target (Y - T) increases, the loss grows with the square of the deviation.

Taguchi's quality loss function is a way to compute quality costs. The goal is zero defects, hitting the target exactly. In the notation used here:

L = loss (cost) in dollars
D = deviation from target
T = Taguchi parameter (a constant)

L = T * D^2, i.e., L = T * (Actual Value - Target)^2

Example: the specifications for the diameter of a gear are 25.00 ± 0.25 mm.
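To make the loss function concrete, here is a minimal Python sketch using the gear specification above. The $50 loss at the tolerance limit is a hypothetical assumption used only to fix the constant k; the notes do not give this value.

```python
# Minimal sketch of Taguchi's nominal-the-best loss function L(y) = k * (y - T)^2.
# T = 25.00 mm and the +/-0.25 mm tolerance come from the gear example above;
# the $50 loss at the tolerance limit is a hypothetical assumption to fix k.

target = 25.00        # T: target diameter (mm)
tolerance = 0.25      # deviation at the tolerance limit (mm)
cost_at_limit = 50.0  # hypothetical loss ($) when deviation equals the tolerance

k = cost_at_limit / tolerance ** 2  # 50 / 0.0625 = 800 $/mm^2

def taguchi_loss(y: float) -> float:
    """Loss in dollars for an observed diameter y."""
    return k * (y - target) ** 2

for y in (25.00, 25.10, 25.25):
    print(f"y = {y:.2f} mm -> loss = ${taguchi_loss(y):.2f}")
# 25.00 -> $0.00, 25.10 -> $8.00, 25.25 -> $50.00
```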
OIE354 - QUALITY ENGINEERING
UNIT-II
CONTROL CHART
A control chart—sometimes called a Shewhart chart, a statistical process control chart, or an
SPC chart—is one of several graphical tools typically used in quality control analysis to
understand how a process changes over time.

Chart details
A control chart consists of:
• Points representing a statistic (e.g., a mean, range, proportion) of measurements of a quality
characteristic in samples taken from the process at different times (i.e., the data)
• The mean of this statistic is calculated using all the samples (e.g., the mean of the means, the mean of the ranges, the mean of the proportions), or for a reference period against which change can be assessed. Similarly, a median can be used instead.
• A center line is drawn at the value of the mean or median of the statistic.
• The standard deviation of the statistic (e.g., the square root of the variance of the mean) is calculated using all the samples, or again for a reference period against which change can be assessed. In the case of XmR charts, strictly speaking this is an approximation of the standard deviation; it does not make the assumption of homogeneity of the process over time that the standard deviation makes.
• Upper and lower control limits (sometimes called "natural process limits"), which indicate the threshold at which the process output is considered statistically 'unlikely'; these are typically drawn at 3 standard deviations from the center line.
The chart may have other optional features, including:
• More restrictive upper and lower warning or control limits, drawn as separate lines, typically
two standard deviations above and below the center line. This is regularly used when a
process needs tighter controls on variability.
• Division into zones, with the addition of rules governing frequencies of observations in each
zone
• Annotation with events of interest, as determined by the Quality Engineer in charge of the
process' quality
• Action on special causes.
Control charts omit specification limits or targets because of the tendency of those involved with the process (e.g., machine operators) to focus on performing to specification, when in fact the least-cost course of action is to keep process variation as low as possible. Attempting to make a process whose natural center is not the same as the target perform to target specification increases process variability and increases costs significantly, and is the cause of much inefficiency in operations. Process capability studies do examine the relationship between the natural process limits (the control limits) and specifications, however.
The purpose of control charts is to allow simple detection of events that are indicative of an
increase in process variability. [12] This simple decision can be difficult where the process
characteristic is continuously varying; the control chart provides statistically objective criteria of
change. When a change is detected and considered good, its cause should be identified and possibly made the new way of working; where the change is bad, its cause should be identified and eliminated.
The purpose in adding warning limits or subdividing the control chart into zones is to provide
early notification if something is amiss. Instead of immediately launching a process improvement
effort to determine whether special causes are present, the Quality Engineer may temporarily
increase the rate at which samples are taken from the process output until it is clear that the
process is truly in control. Note that with three-sigma limits, common-cause variations result in
signals less than once out of every twenty-two points for skewed processes and about once out
of every three hundred seventy (1/370.4) points for normally distributed processes. [13] The two-
sigma warning levels will be reached about once for every twenty-two (1/21.98) plotted points
in normally distributed data. (For example, the means of sufficiently large samples drawn from
practically any underlying distribution whose variance exists are normally distributed, according
to the Central Limit Theorem.)
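The 1/370.4 and 1/21.98 figures for normally distributed data follow directly from the normal tail probabilities. A minimal sketch (assuming SciPy is available) verifies them:

```python
# Minimal sketch verifying the quoted false-alarm rates for a normally
# distributed statistic (requires scipy).

from scipy.stats import norm

p3 = 2 * norm.sf(3)  # P(point outside +/-3 sigma by chance) ~ 0.0027
p2 = 2 * norm.sf(2)  # P(point outside +/-2 sigma by chance) ~ 0.0455

print(f"3-sigma limits: about 1 in {1 / p3:.1f} points")    # ~1 in 370.4
print(f"2-sigma warnings: about 1 in {1 / p2:.2f} points")  # ~1 in 21.98
```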

Chance and Assignable Causes of Quality Variation


• A process that is operating with only chance causes of variation present is said to be in
statistical control.
• A process that is operating in the presence of assignable causes is said to be out of control.
• The eventual goal of SPC is reduction or elimination of variability in the process by
identification of assignable causes.
Statistical Basis of the Control Chart
Basic Principles
A typical control chart has control limits set at values such that if the process is in control, nearly
all points will lie between the upper control limit (UCL) and the lower control limit (LCL).

Out-of-Control Situations
• If at least one point plots beyond the control limits, the process is out of control
• If the points behave in a systematic or nonrandom manner, then the process could be out
of control.

(Figure: relationship between the process and the control chart.)

Important uses of the control chart
• Most processes do not operate in a state of statistical control.
• Consequently, the routine and attentive use of control charts will identify assignable
causes. If these causes can be eliminated from the process, variability will be reduced
and the process will be improved.
• The control chart only detects assignable causes. Management, operator, and
engineering action will be necessary to eliminate the assignable causes.
• Out-of-control action plans (OCAPs) are an important aspect of successful control chart
usage.

Types of control charts

• Variables Control Charts
– These charts are applied to data that follow a continuous distribution (measurement data).
• Attributes Control Charts
– These charts are applied to data that follow a discrete distribution (count data).
Popularity of control charts
1) Control charts are a proven technique for improving productivity.
2) Control charts are effective in defect prevention.
3) Control charts prevent unnecessary process adjustment.
4) Control charts provide diagnostic information.
5) Control charts provide information about process capability.

Choice of Control Limits

General model of a control chart:

UCL = μw + L·σw
Center line = μw
LCL = μw − L·σw

where L is the distance of the control limits from the center line, μw is the mean of the sample statistic w, and σw is the standard deviation of the statistic w.
Sample Size and Sampling Frequency
• In designing a control chart, both the sample size to be selected and the frequency of
selection must be specified.
• Larger samples make it easier to detect small shifts in the process.
• Current practice tends to favor smaller, more frequent samples.
Analysis of Patterns on Control Charts
Nonrandom patterns can indicate out-of-control conditions
• Patterns such as cycles and trends are often of considerable diagnostic value (more about this in Chapter 5)
• Look for “runs” - this is a sequence of observations of the same type (all above the
center line, or all below the center line)
• Runs of say 8 observations or more could indicate an out-of-control situation.
– Run up: a series of observations are increasing
– Run down: a series of observations are decreasing
Western Electric Handbook Rules (Should be used carefully because of the increased risk of
false alarms)
A process is considered out of control if any of the
following occur:
1) One point plots outside the 3-sigma control limits.
2) Two out of three consecutive points plot beyond the 2-sigma warning limits.
3) Four out of five consecutive points plot at a distance of 1-sigma or beyond from the
center line.
4) Eight consecutive points plot on one side of the center line.
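These four rules are straightforward to automate. The sketch below is not from the notes: the function name and data are hypothetical, points are given as z-scores (distances from the center line in sigma units), and rules 2-4 are checked on each side of the center line separately.

```python
# Minimal sketch of the four Western Electric rules, applied to points
# expressed as z-scores (distance from the center line in sigma units).
# Rules 2-4 require the qualifying points to lie on the same side of the line.

def western_electric(z):
    """Yield (index, rule description) for every point at which a rule fires."""
    for i in range(len(z)):
        if abs(z[i]) > 3:                                     # Rule 1
            yield i, "Rule 1: point beyond the 3-sigma limits"
        for side in (1, -1):                                  # +1: above, -1: below
            w3 = z[max(0, i - 2): i + 1]
            if len(w3) == 3 and sum(side * x > 2 for x in w3) >= 2:
                yield i, "Rule 2: 2 of 3 points beyond 2 sigma"
            w5 = z[max(0, i - 4): i + 1]
            if len(w5) == 5 and sum(side * x > 1 for x in w5) >= 4:
                yield i, "Rule 3: 4 of 5 points beyond 1 sigma"
            w8 = z[max(0, i - 7): i + 1]
            if len(w8) == 8 and all(side * x > 0 for x in w8):
                yield i, "Rule 4: 8 consecutive points on one side"

# A run of eight points above the center line trips Rule 4 (hypothetical data).
data = [0.2, 0.5, 0.1, 0.4, 0.3, 0.6, 0.2, 0.5, 0.1]
for idx, rule in western_electric(data):
    print(idx, rule)
```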
Statistical process control (SPC)

Control charts are completed by measuring or assessing products and recording the results. Control charts use process data and information to calculate process control limits. These control limits are a measure of the variability of the process due to common causes.

Statistical process control (SPC) or statistical quality control (SQC) is the application of statistical
methods to monitor and control the quality of a production process. This helps to ensure that
the process operates efficiently, producing more specification-conforming products with less
waste (scrap). SPC can be applied to any process where the "conforming product" (product
meeting specifications) output can be measured. Key tools used in SPC include run charts, control
charts, a focus on continuous improvement, and the design of experiments. An example of a
process where SPC is applied is manufacturing lines.
SPC must be practiced in two phases: The first phase is the initial establishment of the
process, and the second phase is the regular production use of the process. In the second phase,
a decision of the period to be examined must be made, depending upon the change in 5M&E
conditions (Man, Machine, Material, Method, Movement, Environment) and wear rate of parts
used in the manufacturing process (machine parts, jigs, and fixtures).
An advantage of SPC over other methods of quality control, such as "inspection," is that
it emphasizes early detection and prevention of problems, rather than the correction of problems
after they have occurred.
In addition to reducing waste, SPC can lead to a reduction in the time required to produce
the product. SPC makes it less likely the finished product will need to be reworked or scrapped.
SPC uses statistical tools to observe the performance of the production process in order to detect
significant variations before they result in the production of a sub-standard article. Any source of
variation at any point of time in a process will fall into one of two classes.
(1) Common causes
'Common' causes are sometimes referred to as 'non-assignable' or 'normal' sources of variation. The term refers to any source of variation that consistently acts on the process, of which there are typically many. These causes collectively produce a statistically stable and repeatable distribution over time.
(2) Special causes
'Special' causes are sometimes referred to as 'assignable' sources of variation. The term
refers to any factor causing variation that affects only some of the process output. They
are often intermittent and unpredictable.

Application
The application of SPC involves three main phases of activity:
1. Understanding the process and the specification limits.
2. Eliminating assignable (special) sources of variation, so that the process is stable.
3. Monitoring the ongoing production process, assisted by the use of control charts, to
detect significant changes of mean or variation.
The proper implementation of SPC has been limited, in part due to a lack of statistical expertise
at many organizations.[13]
CONTROL CHARTS FOR VARIABLES

The quality characteristic of a product that can be measured and expressed in specific units is called a variable, e.g., height, weight, density, or diameter. Control charts based on measurable quality characteristics are therefore called control charts for variables. They can be classified according to the subgroup statistic plotted on the chart: the X-bar chart describes the subgroup averages or means, the R chart displays the subgroup ranges, and the S chart shows the subgroup standard deviations.

Control charts for variables are of three types:

• Mean or X-bar chart
• Range or R chart
• Standard deviation or σ chart.

• Mean or X-bar chart
X-bar shows the overall mean or process mean, the R-chart shows the range of the
statistical center line. Together, X-bar and R-charts are quality control charts used in
conjunction to keep track of the mean and variation of a process using samples gathered
over a period of time.

An X-bar chart is a frequently used type of quality control chart, where the y-axis tracks the degree
to which the deviation of the tested attribute is acceptable.

The X-bar chart is used to monitor the mean of successive samples of constant size (n).
The x-axis on the X-bar chart tracks the samples tested. Analyzing the pattern of variance depicted
by a quality control chart can help determine if defects are occurring randomly or systematically.
This type of control chart is used for characteristics that can be measured on a continuous
scale, such as weight, temperature, thickness, etc. For example, one might take seven samples of
a particular device component from production every hour, measure the width of each, and then
plot the average of the seven width values on the chart for each sample.
Now, while the X-bar chart is essential since it helps to monitor the average or the mean of the
process and how this changes over time, it is never used alone and is most often used in
combination with the R-chart
• Range or R chart
An R-Chart is a statistical quality assurance graph for determining the stability and predictability of
a process. The R-chart shows the error of measurement, since the R values are the differences
between successive measurements of the same product. In other words, the R-chart shows the
sample range (which represents the difference between the highest and lowest value in each
sample) and monitors process variability at regular intervals from a process.
While the X-bar shows the overall mean or process mean, the R-chart shows the range of the
statistical center line.
Together, X-bar and R-charts are quality control charts used in conjunction to keep track of the
mean and variation of a process using samples gathered over a period of time. Both charts’ control
limits are used to monitor the mean and variation of the process going forward.
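As an illustration (not from the notes), the control limits for paired X-bar and R charts are usually computed from tabulated constants; for subgroups of size n = 5 these are A2 = 0.577, D3 = 0, and D4 = 2.114. The subgroup data below are hypothetical.

```python
# Minimal sketch: X-bar and R chart limits from the standard table constants
# for subgroups of size n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114).
# The subgroup measurements are hypothetical.

subgroups = [
    [25.1, 24.9, 25.0, 25.2, 24.8],
    [24.9, 25.0, 25.1, 24.9, 25.0],
    [25.2, 25.1, 24.8, 25.0, 25.1],
]
A2, D3, D4 = 0.577, 0.0, 2.114

xbars = [sum(s) / len(s) for s in subgroups]    # subgroup means
ranges = [max(s) - min(s) for s in subgroups]   # subgroup ranges

xbarbar = sum(xbars) / len(xbars)  # grand mean (center line of the X-bar chart)
rbar = sum(ranges) / len(ranges)   # average range (center line of the R chart)

print(f"X-bar chart: CL={xbarbar:.3f}, "
      f"UCL={xbarbar + A2 * rbar:.3f}, LCL={xbarbar - A2 * rbar:.3f}")
print(f"R chart:     CL={rbar:.3f}, UCL={D4 * rbar:.3f}, LCL={D3 * rbar:.3f}")
```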
• Standard deviation or σ chart
In quality control processes, standard deviation measures the variation or dispersion of a
set of values from their mean. It plays a crucial role by quantifying the consistency and
predictability of manufacturing processes.
1. Key Concept
Standard deviation is a fundamental statistical tool in quality control that measures how much
variation exists from the average or mean. In a manufacturing context, you want parts to be as
close to the design specifications as possible. A small standard deviation means that most of the
produced items are very similar to the mean value, which is typically the target dimension. This
consistency is vital for product reliability and functionality. If the standard deviation is large,
however, it indicates that there's a wide spread in the product dimensions, which can lead to
assembly problems, parts that don't fit, or even product failures.
2. Process Monitoring
In quality control, standard deviation is used for process monitoring. It's a way to keep an
eye on the consistency of production over time. By plotting the standard deviation of production
measurements, you can quickly see if something changes or starts to go wrong. This could be due
to machine wear, operator error, or material inconsistencies. A stable process has a consistent
standard deviation, while a process that's out of control will show significant variation in its
standard deviation over time, prompting immediate investigation and corrective actions.
3. Setting Tolerances
Setting tolerances is another area where standard deviation is invaluable. Tolerances are
the acceptable limits within which a product can vary from its specified dimensions. By
understanding the standard deviation of your process, you can set realistic tolerances that ensure
functionality while not being overly restrictive. If you set tolerances without considering standard
deviation, you might end up with an unachievable quality standard or, conversely, a product that
doesn't meet customer needs. It's all about finding the right balance.
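One common way to ground tolerances in process data is to compare them with the natural process limits, mean ± 3 standard deviations. A minimal sketch with hypothetical measurements:

```python
# Minimal sketch: natural process limits (mean +/- 3 standard deviations)
# as a sanity check before setting tolerances. Data are hypothetical.

import statistics

measurements = [9.98, 10.02, 10.01, 9.97, 10.03, 10.00, 9.99, 10.02]

mean = statistics.mean(measurements)
s = statistics.stdev(measurements)  # sample standard deviation

print(f"Natural limits: [{mean - 3 * s:.3f}, {mean + 3 * s:.3f}]")
# A tolerance tighter than this band is unlikely to be achievable without
# first reducing the process variation.
```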

4. Continuous Improvement
Continuous improvement in quality control is about making incremental changes to
enhance product quality and process efficiency. Standard deviation plays a pivotal role here by
helping you measure the impact of these changes. If an adjustment leads to a lower standard
deviation, it means the process variation has decreased, which is usually a good sign. You can
also compare standard deviations before and after changes to objectively assess their
effectiveness. This data-driven approach ensures that efforts to improve quality are grounded in
reality.

5. Risk Management
Risk management in quality control involves anticipating and mitigating potential issues
that could affect product quality. Standard deviation helps identify areas of high variability that
might pose a risk to consistent production. For example, if certain measurements have a high
standard deviation, there's a greater chance that some products will fall outside of the acceptable
range. By focusing on reducing the standard deviation in these areas, you can proactively manage
risks and prevent defects before they occur.

What role does standard deviation play in quality control processes?
Understanding the role of standard deviation in quality control is crucial for ensuring
products meet the required standards. This statistical measure helps you grasp the variability or
spread of a set of data points, such as the dimensions of manufactured parts. By calculating the
standard deviation, you can determine if a production process is consistent and if the products
are being made to specifications. A low standard deviation indicates that the data points are close
to the mean, reflecting high process precision. Conversely, a high standard deviation suggests
greater variability, which could signal potential quality issues. Quality control heavily relies on
these insights to maintain product uniformity and customer satisfaction.

CONTROL CHARTS FOR ATTRIBUTES (p)


Control charts for attributes are used for quality characteristics that are counted rather
than measured. Attributes are discrete in nature and entail simple yes-or-no decisions, for
example, the number of nonfunctioning light bulbs, the proportion of broken eggs in a carton,
the number of rotten apples, the number of scratches on a tile, or the number of complaints
received. Two of the most common types of control charts for attributes are p-charts and c-
charts.
P-charts are used to measure the proportion of items in a sample that are defective.
Examples are the proportion of broken cookies in a batch and the proportion of cars produced
with a misaligned fender. P-charts are appropriate when both the number of defectives
measured and the size of the total sample can be counted. A proportion can then be computed
and used as the statistic of measurement.
C-charts count the actual number of defects. For example, we can count the number of
complaints from customers in a month, the number of bacteria in a petri dish, or the number of
barnacles on the bottom of a boat. However, we cannot compute the proportion of complaints
from customers, the proportion of bacteria in a petri dish, or the proportion of barnacles on the
bottom of a boat. To summarize:
P-charts: Used when observations are placed in either of two groups.
Examples:
• Defective or not defective
• Good or bad
• Broken or not broken
C-charts: Used when defects can be counted per unit of measure.
CONTROL CHARTS FOR ATTRIBUTES (np)
An np control chart is used to look at variation in yes/no type attributes data. There are
only two possible outcomes: either the item is defective or it is not defective. The np control
chart is used to determine if the number of defective items in a group of items is consistent over
time.
A product or service is defective if it fails, in some respect, to conform to specifications or a
standard. For example, customers like invoices to be correct. If you charge them too much, you
will definitely hear about it and it will take longer to get paid. If you charge them too little, you
may never hear about it. As an organization, it is important that your invoices be correct. Suppose
you have decided that an invoice is defective if it has the wrong item or wrong price on it. You
could then take a random sample of invoices (e.g., 100 per week) and check each invoice to see
if it is defective. You could then use an np control chart to monitor the process.
You use an np control chart when you have yes/no type data. This type of chart involves counts.
You are counting items. To use an np control chart, the counts must also satisfy the following two
conditions:
1. You are counting n items. A count is the number of items in those n items that fail to
conform to specification.
2. Suppose p is the probability that an item will fail to conform to the specification. The value
of p must be the same for each of the n items in a single sample.
If these two conditions are met, the binomial distribution can be used to estimate the distribution
of the counts and the np control chart can be used. The control limits equations for the np control
chart are based on the assumption that you have a binomial distribution. Be careful here because
condition 2 does not always hold. For example, some people use the p control chart to monitor
on-time delivery on a monthly basis. A p control chart is the same as the np control chart, but the
subgroup size does not have to be constant. You can’t use the p control chart unless the
probability of each shipment during the month being on time is the same for all the shipments.
Big customers often get priority on their orders, so the probability of their orders being on time
is different from that of other customers and you can’t use the p control chart.
np Control Chart Example: Red Beads
The red bead experiment is a classic example of yes/no data that can be tracked using an np control chart. In this experiment, each worker is given a sampling device that can sample 50 beads from a bowl containing white and red beads. The objective is to get all white beads. In this case, a bead is "in spec" if it is white and "out of spec" if it is red. So we have yes/no data – only two possible outcomes.
Data from one red bead experiment are shown below. The numbers represent the number of red
beads each person received in each sample of 50 beads.
Worker   Day 1   Day 2   Day 3   Day 4
Tom        12       8       6       9
David       5       8       6      13
Paul       12       9       8       9
Sally       9      12      10       6
Fred       10      10      11      10
Sue        10      16       9      11
The np control chart for these data plots the number of defects (red beads) in each subgroup (sample number) of 50. The center line is the average. The upper dotted line is the upper control limit. The lower dotted
line is the lower control limit. As long as all the points are inside the control limits and there are
no patterns to the points, the process is in statistical control. We know what it will produce in the
future. While we don’t know the exact number of red beads a person will draw the next time, we
know it will be between about 2 and 17 (the control limits) and average about 10.
Steps in Constructing an np Control Chart
The steps in constructing the np chart are given below. The data from above is used to
demonstrate the calculations.
1. Gather the data.
a. Select the subgroup size (n). Attributes data often require large subgroup sizes (50 –
200). The subgroup size should be large enough to have several defective items. The
subgroup size must be constant.
In the red bead example, the subgroup size is 50.
b. Select the frequency with which the data will be collected. Data should be collected in
the order in which it is generated.
c. Select the number of subgroups (k) to be collected before control limits are calculated.
You can start a control chart with as few as five to six points but you should recalculate
the average and control limits until you have about 20 subgroups.
d. Inspect each item in the subgroup and record the item as either defective or non-
defective. If an item has several defects, it is still counted as one defective item.
e. Determine np for each subgroup.
np = number of defective items found
f. Record the data.
2. Plot the data
a. Select the scales for the control chart.
b. Plot the values of np for each subgroup on the control chart.
c. Connect consecutive points with straight lines.
3. Calculate the process average and control limits.
a. Calculate the process average number defective: n p̄ = Σnp / k, where k is the number of subgroups (so p̄ = Σnp / (n·k) is the average fraction defective).
b. Calculate the control limits:
UCLnp = n p̄ + 3√(n p̄(1 − p̄))
LCLnp = n p̄ − 3√(n p̄(1 − p̄)) (set to zero if negative)
c. Draw the average and control limits on the chart. The sketch below applies these formulas to the red bead data.

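As a quick check of these steps, here is a minimal Python sketch (variable names are illustrative) that computes the np chart center line and limits from the red bead data above; the results match the "between about 2 and 17, averaging about 10" statement in the text.

```python
import math

# Red bead data: number of red beads per sample of n = 50
counts = [12, 8, 6, 9, 5, 8, 6, 13, 12, 9, 8, 9,
          9, 12, 10, 6, 10, 10, 11, 10, 10, 16, 9, 11]
n, k = 50, len(counts)

np_bar = sum(counts) / k        # average number defective per subgroup
p_bar = np_bar / n              # average fraction defective

sigma = math.sqrt(np_bar * (1 - p_bar))
ucl = np_bar + 3 * sigma
lcl = max(0.0, np_bar - 3 * sigma)   # a negative limit is set to zero

print(f"center line = {np_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
# center line = 9.54, UCL = 17.88, LCL = 1.21 -- consistent with the
# "between about 2 and 17, averaging about 10" statement in the text
```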
CONTROL CHARTS FOR ATTRIBUTES (c & u)


c-Chart. The c-Chart is also known as the Number of Defects or Number of Non-
Conformities Chart. For a sample subgroup, the number of times a defect occurs is measured and
plotted as a simple count.

The c control chart plots the number of defects (c) over time. The area of opportunity
must be the same over time. This means that you use the same size sheet each time you are
counting the bubbles in the sheet. The u control chart plots the number of defects per inspection
unit (c/n) over time.
On occasion, there is a customer complaint. Sometimes someone gets injured on the job. Sometimes hospital patients get infections. These situations involve counting type attributes data: each count (customer complaint, injury, or infection) is considered a defect. c and u control charts are two types of attribute control charts that can be used to monitor and improve these types of processes. Both charts track the variation in counting type attributes data.

Purpose
The purpose of this module is to introduce c and u control charts - what they are, when they can be used, how to construct them and how to interpret both charts. In addition, the small sample case for c and u control charts is introduced. The c and u control charts tell you if the process is in statistical control or if there are special causes present.

Attributes Data and Control Charts
We sometimes collect data that involves counts; for example, the number of injuries in a plant, the number of mistakes on an invoice, whether a delivery is on time or not, or whether a product is in specification or not. These types of data are called attributes data. There are two types of attributes data: yes/no and counting type. With yes/no data, you are examining distinct items (such as invoices, deliveries, or phone calls). With counting type data, you are usually examining an area where a defect has an opportunity to occur.

Yes/No Data
For each item, there are only two possible outcomes: either it passes or it fails some preset specification. Each item inspected is either defective (i.e., it does not meet the specifications) or is not defective (i.e., it meets specifications). Examples of yes/no data are phone answered/not answered, product in spec/not in spec, shipment on time/not on time and invoice correct/incorrect. If you have yes/no data, you will use either a p or np control chart to examine the variation in the fraction of items not meeting (or meeting) a preset specification in a group of items. You would use a p control chart if the subgroup size (the number of items examined in a given time period) changes over time. You would use the np control chart if the subgroup size stays the same.

Counting Data
With counting data, you count the number of defects. A defect occurs when something does not meet a preset specification. It does not mean that the item itself is defective. For example, a television set can have a scratched cabinet (a defect) but still work properly. When looking at counting data, you have whole numbers such as 0, 1, 2, 3; you can't have half of a count. If you have counting data, you will use a c or u control chart. The c control chart is used if the area stays constant from sample to sample; the u control chart is used if the area does not stay constant. If you don't have data based on counts, you have variables data. Variables data are taken from a continuum and are often referred to as continuous. Variables data can, theoretically, be measured to any precision you like. Examples of variables data include time, length, width, density, dollars, and height.

Understanding c and u Control Charts
Both the c and u control charts are used to look at variation in counting type attributes data. They are used to determine the variation in the number of defects in a subgroup. The subgroup size usually refers to the area being examined. For example, a c control chart can be used to monitor the number of injuries in a plant. In this case, the plant is the subgroup. If the subgroup size remains constant, the c control chart is used. If the subgroup size varies, the u control chart is used.

You will often use a c or u control chart if the item is complex in nature. For example, it does not make much sense to characterize something like a television set, a car, or a computer as being defective or not defective. A television set may have a scratch on the surface, but that defect hardly makes the television set defective. The real issue is how many defects there are on the television set. Rating items as defective or not defective is also not very useful if the item is continuous. For example, suppose you are making a plastic sheet. The fact that the sheet has a small defect such as a bubble or blemish on it does not make it defective. However, if there are too many bubbles, the sheet may not be useful for its intended purpose.

For example, suppose you make plastic sheets that are used for sheet protectors. Bubbles on the plastic sheet are considered defects. You can monitor the number of bubbles over time by counting the number of bubbles on one plastic sheet. The plastic sheet is the area of opportunity for defects to occur, and the number of bubbles is the number of defects (c). When looking at counting data, you end up with whole numbers such as 0, 1, 2, 3; you can't have half of a defect. Thus, with the plastic sheet example, you will have 1 bubble, 2 bubbles, etc. There are two ways to track this counting type data, depending on what you are plotting and whether or not the area of opportunity for defects to occur is constant. The c control chart plots the number of defects (c) over time; the area of opportunity must be the same over time, which means you use the same size sheet each time you are counting the bubbles in the sheet. The u control chart plots the number of defects per inspection unit (c/n) over time; the area of opportunity can vary over time, which means you can vary the number of sheets or the area examined for bubbles each time.

c Control Charts

c charts are used to look at variation in counting type attributes data. They are used to
determine the variation in the number of defects in a constant subgroup size. Subgroup size
usually refers to the area being examined. For example, a c chart can be used to monitor the
number of injuries in a plant. In this case, the plant is the subgroup. Since the plant doesn’t
change size very often, it is a subgroup of constant size.
To use the c chart, the opportunities for defects to occur in the subgroup must be very
large, but the number that actually occurs must be small. For example, the opportunity for
injuries to occur in a plant is very large, but the number that actually occurs is small.
Operational definitions must be used to determine what constitutes a defect. A subgroup can contain different types of defects. For example, a customer complaint can occur for a number of different reasons. If this is the case, operational definitions must exist for each defect.
As an example, consider a c chart monitoring the variation in the number of OSHA recordable injuries in a plant. The subgroup size in this case is the plant. The operational definition for an injury is any injury that is OSHA recordable. The opportunity for defects to occur is very large, since there are many opportunities for injuries to occur; however, the number of defects (injuries) that actually occur is small. A c chart is appropriate under these conditions. The time frame was selected as one month.
The number of injuries each month is plotted for a two-year period. For example, there were 2 injuries (c = 2) in January 2002 and 4 injuries (c = 4) in February 2002. The overall average and control limits have also been calculated and plotted.
Suppose the chart is in statistical control. What does it mean when the c chart is in statistical control? It means that there are only common causes of variation present. It also means that the number of injuries will
remain consistent in the near future. The average number per month will be around 2. Some
months it may be as high as 6, others as low as 0.
Since the process is in control, the system must be changed to decrease the number of
injuries. This is management’s responsibility. However, the people closest to the job will usually
have great ideas about what needs to be done to improve the process. They usually do not have
the authority to make the required changes. Attempting to decrease the number of injuries by
encouraging the workforce to be more careful will not work. The process must be changed.
Examine the reasons why injuries occur. A Pareto diagram can be used for this. Look for ways to
prevent injuries from occurring in the first place.

Steps in Constructing a c Control Chart

The steps in constructing a c chart are given below. The equations you need are shown to the
right.
1. Gather the data
a. Select the subgroup size. The subgroup size is the area where defects have the opportunity to
occur. It must be constant from subgroup to subgroup. The opportunity for defects to occur must
be large. The number of defects that actually occur must be small.
b. Select the frequency with which the data will be collected. Data should be collected in the
order in which they are generated.
c. Select the number of subgroups (k) to be collected before control limits will be calculated (at
least twenty).
d. Count the number of defects (c) in each subgroup. Ensure that operational definitions of a
defect are complete.
e. Record the data.
2. Plot the data.
a. Select the scales for the control chart.
b. Plot the values of c for each subgroup on the control chart. c. Connect consecutive points with
straight lines.
3. Calculate the process average.
a. Calculate the process average number of defects ( c):
b. Draw the process average number of defects on the control chart as a solid line and label.
4. Calculate the control limits.
a. Calculate the control limits for the c chart. The upper control limit is UCLc = c̄ + 3√c̄. The lower control limit is LCLc = c̄ − 3√c̄ (set to zero if negative).
b. Draw the control limits on the control chart as dashed lines and label them. A short numerical sketch applying these formulas follows the steps below.
5. Interpret the chart for statistical control.
a. The following tests for statistical control, as a minimum, should be used. If any of these conditions are present, the process is out of statistical control due to the presence of a special cause of variation.
· Points beyond the control limits
· Seven points in a row trending up or trending down
· Seven points in a row above or below the average
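A small numerical sketch of these calculations, using hypothetical monthly injury counts consistent with the example above (average near 2); the data and names are illustrative only.

```python
import math

# Hypothetical monthly counts of OSHA recordable injuries (24 months)
c_values = [2, 4, 1, 3, 0, 2, 5, 1, 2, 3, 1, 2,
            4, 0, 2, 3, 1, 2, 6, 1, 2, 0, 3, 2]

c_bar = sum(c_values) / len(c_values)      # process average defects
ucl = c_bar + 3 * math.sqrt(c_bar)
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))

print(f"c-bar = {c_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
out = [i for i, c in enumerate(c_values, start=1) if c > ucl or c < lcl]
print("out-of-control months:", out or "none")
```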

Construction Quality Control Approaches


There are a number of ways to approach quality control management in construction,
with each having its own pros and cons depending on the needs and scope of a company’s
projects. The International Organization for Standardization established a set of quality
standards called ISO 9001.
This standard is built on seven quality management principles: engagement of people, customer focus, leadership, process approach, improvement, evidence-based decision making, and relationship management. Additionally, there are four other main approaches to quality control
management in construction:
• Continuous improvement. Focuses on continuous incremental improvements to processes over
time. Improvements are discovered through customer feedback and internal analytics processes.
• Kaizen. A Japanese word that means “change for the better,” Kaizen refers to a philosophy of
continuously looking for ways to improve that is applied to quality control management. When
all members of an organization implement Kaizen to their daily practices, gradual improvements
can be seen over time.
• Six Sigma. This problem-solving framework focuses on proactively identifying and solving issues
that arise. The main steps to this quality control management approach are Define, Measure,
Analyze, Improve, and Control.
• Lean management. Waste elimination and reduction are key factors to this approach. Waste is
determined by extraneous processes and materials that don’t provide value to customers or
construction companies.
How to Ensure Construction Quality Control
Construction quality control definitions vary slightly between organizations, but there
are some things that all construction industry professionals must take into consideration when
implementing quality control management protocols:
Define the expectations and acceptance criteria
Before implementing quality control procedures, quality standards must be clearly
defined so that all parties involved have a clear understanding of what the client expects to see
in the finished work. These expectations should include key acceptance criteria such as
completing a project with zero defects that satisfy regulatory codes and client specifications.
Have an inspection plan in place
Inspections should take place regularly as a part of a thorough quality assurance plan at
different points in the construction process. However, before conducting any inspections it's
crucial that organizations create a plan that details what needs to be inspected and what an
acceptable result looks like. All completed work should meet client criteria, company
expectations, and any other indications brought forth by invested parties.
Create a quality control checklist
Quality control criteria and expectations can be difficult to communicate and manage
across teams without a standardized quality control checklist. A checklist simplifies the inspection
process, making sure that critical aspects of quality control are not overlooked. Additionally,
checklists also help clearly communicate areas of concern and the specific tasks each
construction team member is responsible for.
Correct inaccuracies and deficiencies
The whole point of implementing quality control management procedures is to ensure
that construction work meets company and client standards. Perhaps the most important aspect
of any quality control management plan is to make time and tools available to make corrections
and address deficiencies as they arise. Continuous monitoring of teams and construction sites as
well as regular inspections allow for opportunities to discover work that does not meet
expectations before it is completed and presented to the client.
Review and analyze problems and their solutions
During the process of monitoring progress and inspecting deliverables, issues and
problems will be identified along the way. This can include the slow, manual processes that teams
do just to get by - or “Gray Work.” In addition to mitigating these issues as they arise, it’s a good
idea to include a step for construction project managers to review how each job went and analyze
how these problems can be avoided in their next construction project. When conclusions are
made regarding these issues, quality control managers need to communicate to the entire crew
what the new expectations and quality requirements may be for projects to come.
Tips for Creating a Construction Quality Control Plan
Here are some final tips before you begin collaborating to either create a quality control
plan or make adjustments to your existing construction industry quality program.
Communicate clearly and effectively
Communication and quality control must go hand in hand. Without a plan to effectively
communicate policy, compliance, safety standards, and building expectations, quality control will
be an endless process. Quality control should be a part of all communications and discussions
about project specifications, and all contractors and involved parties should clearly understand
what is expected of them.
Project managers need to identify what kind of communications, how frequent these
communications occur, and the manner in which messages are transmitted across the
organization. Any monitoring and surveillance activities must be clearly indicated within your
quality control plan, as well as expectations placed on subcontractors and suppliers.
Communications with builders and clients must also be exceptional because clients are
the deciding factor of whether or not a project was executed according to their standards. When
construction project managers collect client specifications there should be a plan in place to
communicate these expectations with crews and individuals that are affected.
Have a backup plan
Having a backup plan (or multiple backup plans) is often overlooked, especially when processes, suppliers, and workflows have become well established over a long period of time. But as we well know, no construction project ever goes according to plan. Having a backup plan or a series of backup plans in place and communicated to applicable teams can help avoid costly mistakes and tough client conversations.
Record any backup plans within a management system and keep a record of when and how any of these backup plans were implemented. Making sure all parties are on the same page with defined backup plans ensures quality construction.
Applications
Quality engineers can help teams identify issues in their adoption of these practices and
can help optimize integration. As adoption of these practices improves, so should the quality of
products and a company's ROI. Quality engineers can help teams integrate artificial intelligence
when automating processes.
Quality engineering tools and methods are developed using a horizontal approach: the discipline involves many branches of business and engineering. Quality engineering draws on many types of tools and methodologies, including:
• Implement a Quality Management System (QMS).
• Advanced Product Quality Planning (APQP) tools from design concept and verification to finished product tracking. Tools like Failure Mode and Effects Analysis (FMEA) and Quality Function Deployment (QFD) are beneficial here.
• Implement the voice of the customer (VOC) into new product designs and processes.
• Identify and eliminate waste in business and production processes.
• Coordinate with internal and external suppliers to ensure materials, assemblies, and
components meet design and quality requirements.
• Implement process controls such as Statistical Process Control (SPC).
• Addressing the root cause: Implement practical root cause analysis (RCA) measures to
separate signal from noise to identify the root cause of problems. This could be in the form of
enhanced, AI-driven statistical process control from machine monitoring data or increased
attention to human quality aspects gathered from device-driven applications.
• Fix Root Cause: Ensure effective process control by developing appropriate test and inspection
methods.
Unit-III
Special Control Procedures
Warning and Modified Control Limits
In quality engineering, warning and modified control limits are used to monitor processes and
determine if they are in control:

Warning limits
These are set at +2s and –2s, where s is the standard deviation for the control chart.

Modified control limits


These are used when a process's inherent variability is small compared to the designer's specified tolerance. In that case it is not necessary to keep the mean rigidly stable; the mean may be allowed to drift within limits derived from the specifications.

Control limits
These are typically set at +/- 3 standard deviations, but can be adjusted to +/- 2 standard deviations. The width of the control limits is determined by the size of sigma and the number of standard deviations specified.

Control charts
Control charts are powerful tools used by many industries to monitor the quality of processes and detect special causes of variation in them. The S² control chart is one of the most widely used tools to monitor whether the variance of some quality characteristic (X), which is assumed to be normally distributed, has changed from an in-control (IC) to an out-of-control (OOC) situation. The main objective of this chart is to detect increases of any magnitude in the process variance as soon as possible. In this context, if the actual process variance is larger than its in-control level, the process is considered to be in an out-of-control state.

Control chart for individual measurements


In quality engineering, an individual measurements control chart, also known as an X control chart, is a type of quality control chart that tracks the value of a single measurement over time. It's used when only one value is available per period, for example a count, a calculated figure, or a single test result.

Here are some things to know about individual measurements control charts:

• How they're used


These charts can be used to monitor a variety of things, such as the number of workers in a
company each month, paid taxes over time, or the effective length of fiber in a batch.
• How they're made
To create an X control chart, you need at least 10 values, but 20 is better. You calculate the average of the individual values (x̄), the moving ranges, and the average moving range (MR̄). Then you calculate the upper and lower control limits (UCL and LCL) as x̄ ± 2.66 × MR̄ (where 2.66 = 3/d₂ with d₂ = 1.128). Finally, you plot the values on the chart.
• How to interpret the chart
If a data point falls outside the control limits, the process is likely out of control and needs to be
investigated. However, if all the points fall within the limits, the process isn't necessarily in
control. If the points don't form a random pattern, there may still be an issue.

Individual Measurement and Moving Range Charts

An Individual Measurement chart displays individual measurements. It is appropriate when only one measurement is available for each subgroup sample. When individual measurements are charted, the Individual Measurement chart is shown together with its corresponding Moving Range chart, which displays the moving ranges of two successive measurements. An assumption behind the moving range chart is that the data are normally distributed.

The difference between each observation x_i and its predecessor x_(i−1) is the moving range:

MR_i = |x_i − x_(i−1)|

For m individual values there are m − 1 moving ranges. Their arithmetic mean is

MR̄ = ( Σ from i = 2 to m of MR_i ) / (m − 1)

If the data are normally distributed with standard deviation σ, the expected value of the moving range is d₂σ = (2/√π)σ ≈ 1.128σ. The upper control limit for the range chart (the upper range limit) is obtained by multiplying the average moving range by D₄ (D₄ = 3.267 for n = 2):

UCL_MR = D₄ × MR̄

To calculate the individuals control limits, first compute the average of the individual values:

x̄ = ( Σ from i = 1 to m of x_i ) / m

The upper and lower control limits for the individual values (the natural process limits) are then

UCL = x̄ + 3 × MR̄ / d₂    LCL = x̄ − 3 × MR̄ / d₂

The value d₂ = 1.128 is for n = 2, as given in most textbooks on statistical process control (for example, Montgomery 2007). The plan here is to use the data on each parameter of the thread and calculate its range; four thread parameters are used. The parameters are taken one by one, the normality assumption is checked with a Q-Q plot, and the moving range and individual limits are then calculated for each parameter. After compiling all the results, the study obtains a proper range of values for each quality parameter of the thread.
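The formulas above translate directly into a short Python sketch; the measurement values below are made up for illustration.

```python
import numpy as np

# Control chart constants for moving ranges of size n = 2
d2, D4 = 1.128, 3.267

# Hypothetical individual measurements of one thread quality parameter
x = np.array([25.1, 24.8, 25.3, 25.0, 24.7, 25.2, 24.9, 25.1, 25.4, 24.8])

mr = np.abs(np.diff(x))          # moving ranges |x_i - x_(i-1)|
mr_bar = mr.mean()               # average of the m - 1 moving ranges
x_bar = x.mean()

ucl_x = x_bar + 3 * mr_bar / d2  # individuals limits: x-bar +/- 2.66 MR-bar
lcl_x = x_bar - 3 * mr_bar / d2
ucl_mr = D4 * mr_bar             # moving range upper limit; LCL is 0 for n = 2

print(f"X chart:  CL = {x_bar:.3f}, UCL = {ucl_x:.3f}, LCL = {lcl_x:.3f}")
print(f"MR chart: CL = {mr_bar:.3f}, UCL = {ucl_mr:.3f}")
```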

Multivariate Chart

A multivariate chart is a quality control chart that monitors the variation in multiple product
attributes simultaneously. It's used to detect shifts in the mean or covariance of several related
parameters.
A quality control chart that analyzes a specific attribute of a product is called a univariate chart,
while a chart measuring variances in several product attributes is called a multivariate chart.
Randomly selected products are tested for the given attribute(s) the chart is tracking.
An obvious advantage of using multivariate charts is that they enable you to minimize the total
number of control charts you need to manage, but there are some additional related benefits
involved as well:

• Analyzing process parameters jointly: Many process parameters are related to one
another, for example, for a particular process step we might expect the pressure value to be
large when temperature is high. Considering every process parameter separately is not
necessarily a good option and might even be misleading. Detecting any mismatch between
parameter settings may be very useful.

Consider two correlated parameters Y1 and Y2 (high values for Y1 are associated with high values for Y2). A point with a high Y1 value but a low Y2 value may appear out-of-control (beyond the control ellipse) from a multivariate point of view, even though, from a univariate perspective, it remains within the usual fluctuation bounds for both Y1 and Y2. Such a point represents a mismatch between Y1 and Y2: its squared generalized multivariate distance from the scatterplot mean is unusually large (see the sketch after this list).
Overall rate of false alarms: The probability of a false alarm with three-sigma limits on a single control chart is 0.27%. If 100 charts are monitored at the same time, the chance of at least one false alarm rises to roughly 24% (1 − 0.9973^100).

However, when numerous variables are monitored simultaneously using a single multivariate
chart, the overall/family rate of false alarms remains close to 0.27%.

3-D measurements: When three-dimensional measurements of a product are taken, the amount of
data needed to ensure that all dimensions (X, Y and Z) remain within specifications can get pretty
big. But if the product gets damaged in a particular area, it will usually affect more than one
dimension, so the three dimensions should not be considered separately from one another. If a
multivariate chart simultaneously monitors deviations from the ideal planned X, Y, Z values, their
combined effects will be taken into account
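To make the "squared generalized multivariate distance" idea concrete, here is a minimal Python sketch with made-up temperature/pressure data. The helper t2 and all values are illustrative; a production Hotelling T² chart would also need proper control limits (for example, based on the F distribution).

```python
import numpy as np

# Hypothetical correlated parameters: temperature (Y1) and pressure (Y2)
Y = np.array([[200, 5.1], [205, 5.3], [210, 5.6], [198, 5.0],
              [207, 5.4], [203, 5.2], [212, 5.7], [201, 5.1]])

mean = Y.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(Y, rowvar=False))

def t2(point):
    """Squared generalized (Mahalanobis) distance from the data mean."""
    d = point - mean
    return d @ cov_inv @ d

# A point that is unremarkable on each axis separately but breaks
# the Y1-Y2 correlation (high temperature with low pressure)
print(f"typical point:  T^2 = {t2(np.array([205, 5.3])):.2f}")
print(f"mismatch point: T^2 = {t2(np.array([211, 5.0])):.2f}")
```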

Here are some benefits of multivariate charts:

• Fewer charts: Multivariate charts can monitor many tool process parameters with fewer charts.
• 3-D measurements: Multivariate charts are useful for monitoring 3-D measurements.
• Process variability: Multivariate charts can monitor process variability when multivariate data is
collected in subgroups.
Some challenges of multivariate charts include:

• Calculating them
Multivariate charts can be more difficult to calculate than univariate charts.
• Identifying out-of-control signals
It can be difficult to pinpoint which variable produced an out-of-control signal.

X Chart with a Linear Trend


An X-bar chart is a frequently used type of quality control chart in which the y-axis tracks the subgroup averages of the tested attribute over time. When the process level drifts steadily, the plotted points follow a linear trend around a sloping center line rather than varying randomly about a fixed one.
What Is the Primary Objective of Trend Analysis in Quality Management?

The ultimate purpose of trend analysis in quality management is to identify, evaluate, and
eliminate any issue that is having a negative effect on product quality. Quality trend analysis is
a particularly useful monitoring mechanism when changes are made to processes, especially those
related to manufacturing. It is the means of determining when a corrective action/preventive action
(CAPA) should be launched in response to audit findings, customer
complaints, deviations, equipment service/maintenance reports, nonconformances, etc.

For most life sciences companies, there are two main quality trend analysis methods:
▪ Performance trending.
▪ Process trending.

Both trending methods are most commonly expressed in various types of charts. The charts used
for trend analysis in quality management usually illustrate data points such as:

▪ Threshold limits.
▪ Alert limits.
▪ Action limits.

The wide variety of quality trend analysis charts life sciences manufacturers use to visualize
quality control activities typically fall under two general classifications:

▪ Attributes charts: Anything that can be quantified or rated with a pass/fail grade can be
expressed in an attributes chart.
▪ Variables charts: Any quality aspect that is measurable (i.e., length, temperature, weight,
etc.) can be expressed in a variables chart.

The metrics you choose to track in your quality trend analysis charts should target trends in data
movement. These metrics and the limits you set as thresholds should all be geared toward meeting
three key criteria:

1. Applicable regulatory requirements.


2. Industry best practices.
3. Your organization’s risk acceptance thresholds.

Most importantly, though, the data points you analyze and the methodologies you use to analyze
them have to be consistent if you expect the trends that emerge from your quality trend analysis to
be genuine, according to American Society of Quality (ASQ) fellow and Quality Systems
Compliance Managing Principal Consultant Mark Durivage.

“There is no one perfect way to analyze data trends. However, for a trending program to be
successful, consistency is important,” Durivage said. “Pick a method and stick with it.”

CHART FOR MOVING AVERAGES AND RANGES

The moving average/moving range (MA/MR) chart is a control chart that monitors the mean and
variation of a process over time:

• Moving average chart


Monitors the mean of a process by using the average of the current mean and a few previous
means.
• Moving range chart
Monitors the variation between subgroups by taking the absolute difference between consecutive
observations.
The MA/MR chart is useful when there is only one data point at a time, or when the data is not
normally distributed. It's similar to the Xbar-R chart, but the subgroups are formed differently and
the out of control tests are different.

Here are some steps for creating an MA/MR chart:

1. Select the characteristics to monitor


2. Choose the moving average group size
3. Measure and record the first sample part
4. Determine the x̄ and R values
5. Plot the x̄ and R values as Subgroup 1
6. Continue adding samples to subgroups, determining the x̄ and R values, and plotting them on the
chart
7. Calculate the control limits using standard x̄ and R formulas
8. Continue monitoring the process
The Moving Average chart monitors the process location over time, based on the average of the
current subgroup and one or more prior subgroups. The Moving Range chart monitors the variation
between the subgroups over time.
Moving Average Control Charts

The moving average chart is a control chart for the mean that uses the average of the current mean and a handful of previous means to produce each moving average. Moving average charts are used to monitor the mean of a process based on samples taken from the process at given times (hours, shifts, days, weeks, months, etc.). The measurements of the samples at a given time constitute a subgroup. The moving average chart relies on the specification of a target value and a known or reliable estimate of the standard deviation. For this reason, the moving average chart is better used after process control has been established.

Like other control charts, Moving Average Charts are used to monitor processes over time. The x-axes are time based, so that the charts show a history of the process. For this reason, you must have data that is time-ordered, that is, entered in the sequence in which it was generated. If this is not the case, then trends or shifts in the process may not be detected, but instead attributed to random (common cause) variation.

Moving Average Charts are generally used for detecting small shifts in the process mean. They will detect shifts of 0.5 sigma to 2 sigma much faster than Shewhart charts (i.e., X-bar and Individual-X charts) with the same subgroup size. They are, however, slower in detecting large shifts in the process mean. In addition, typical run test rules cannot be used, because of the dependence between successive data points. A minimal sketch of the moving average calculation follows.
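A minimal sketch of a moving average chart with span w = 4. The widening of limits for early points (± 3σ/√n with n = min(i, w)) is the standard treatment for partial averages; all data and names here are illustrative.

```python
import numpy as np

def moving_average_chart(x, w, sigma):
    """Moving averages of span w and the half-width of their limits.

    For the i-th point the average spans n = min(i, w) observations,
    so the 3-sigma limits tighten as more points enter each average:
    half-width = 3 * sigma / sqrt(n).
    """
    ma = [np.mean(x[max(0, i - w):i]) for i in range(1, len(x) + 1)]
    hw = [3 * sigma / np.sqrt(min(i, w)) for i in range(1, len(x) + 1)]
    return np.array(ma), np.array(hw)

# Hypothetical individual readings with a small upward shift at the end
target, sigma = 10.0, 0.2
x = np.array([10.1, 9.8, 10.0, 9.9, 10.2, 10.0, 10.4, 10.5, 10.6, 10.5])
ma, hw = moving_average_chart(x, w=4, sigma=sigma)
for i, (m, h) in enumerate(zip(ma, hw), start=1):
    flag = "  OUT" if abs(m - target) > h else ""
    print(f"{i:2d}: MA = {m:.3f}, limits = {target} +/- {h:.3f}{flag}")
```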

CUMULATIVE SUM AND EXPONENTIALLY WEIGHTED MOVING AVERAGE CONTROL CHARTS

Cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) control charts
are both used to monitor processes and detect small to moderate shifts:

• How they work


Both charts use current and previous information to detect shifts, and are known as memory
control charts.
• When to use them
Both charts are effective at detecting small, persistent shifts that a Shewhart Xbar chart might
miss.
• How they compare
Some say the practical performance of CUSUM and EWMA charts is similar, and that neither
has a clear advantage over the other.
• How to use them
Some recommend using CUSUM or EWMA charts in combination with Shewhart charts to detect both small sustained shifts and large intermittent shifts. A minimal CUSUM sketch follows this list.
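As a rough illustration of the "memory" idea, here is a minimal sketch of the tabular CUSUM under assumed parameters (allowance k = 0.5σ and decision interval h = 5σ, common textbook defaults). The data and names are illustrative only.

```python
import numpy as np

def tabular_cusum(x, target, sigma, k=0.5, h=5.0):
    """One-sided upper and lower CUSUM statistics with signal indices.

    k is the allowance (in sigma units, typically half the shift to be
    detected) and h is the decision interval (typically 4-5 sigma).
    """
    K, H = k * sigma, h * sigma
    c_plus = c_minus = 0.0
    signals = []
    for i, xi in enumerate(x, start=1):
        c_plus = max(0.0, xi - (target + K) + c_plus)
        c_minus = max(0.0, (target - K) - xi + c_minus)
        if c_plus > H or c_minus > H:
            signals.append(i)
    return signals

# Hypothetical data with a small sustained upward shift after sample 10
x = np.concatenate([np.full(10, 10.0), np.full(10, 10.3)])
print("CUSUM signals at samples:", tabular_cusum(x, target=10.0, sigma=0.2))
```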

Exponentially Weighted Moving Average (EWMA) chart

An exponentially weighted moving average (EWMA) chart is a type of control chart used to
monitor small shifts in the process mean. It weights observations in geometrically decreasing order
so that the most recent observations contribute highly while the oldest observations contribute very
little.
The EWMA chart plots the exponentially weighted moving average of individual measurements
or subgroup means.
Given a series of observations and a fixed weight λ (0 < λ ≤ 1), each EWMA value is computed as
EWMA_i = λ × (current observation) + (1 − λ) × (previous EWMA),
with the first EWMA seeded by the process target or overall mean. The calculation then shifts forward one observation at a time and is repeated over the entire series, creating the exponentially weighted moving average statistic.
The EWMA requires:
▪ A weight for the most recent observation. Weight must satisfy 0 < weight ≤ 1. Default weight=0.2.
The "best" value is a matter of personal preference and experience. A small weight reduces the
influence of the most recent sample; a large value increases the influence of the most recent
sample. A value of 1 reduces the chart to a Shewhart Xbar chart. Recommendations suggest a
weight between 0.05 and 0.25 (Montgomery 2012).
When designing an EWMA chart it is necessary to consider the average run length and shift to be
detected. Extensive guidance is available on suitable parameters (Montgomery 2012).
It is possible to modify the EWMA, so it responds more quickly to detect a process that is out-of-
control at start-up. This modification is done using a further exponentially decreasing adjustment
to narrow the limits of the first few observations (Montgomery 2012).

Each point on the chart represents the value of the exponentially weighted moving average.

The center line is the process mean. If unspecified, the process mean is the weighted mean of the
subgroup means or the mean of the individual observations.

The control limits are a multiple (L) of sigma above and below the center line. Default L=3. If
unspecified, the process sigma is the pooled standard deviation of the subgroups, or the standard
deviation of the individual observations, unless the chart is combined with an R-, S-, or MR- chart
where it is estimated as described for the respective chart.

Because the EWMA is a weighted average of all past and the current observations, it is very
insensitive to the assumption of normality. It is, therefore, an ideal replacement for a Shewhart I-
chart when normality cannot be assumed.

Like the CUSUM, the EWMA is sensitive to small shifts in the process mean but does not match the ability of a Shewhart chart to detect larger shifts. For this reason, it is sometimes used together with a Shewhart chart. A minimal EWMA sketch follows.
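The recursion and limits above can be sketched in a few lines of Python. The λ = 0.2 and L = 3 defaults follow the text; the time-varying limit half-width L·σ·√(λ/(2−λ)·(1−(1−λ)^(2i))) is the standard form (Montgomery 2012). Data and names are illustrative.

```python
import numpy as np

def ewma_chart(x, target, sigma, lam=0.2, L=3.0):
    """EWMA statistic with time-varying control limits."""
    z = target                       # seed the EWMA with the target
    rows = []
    for i, xi in enumerate(x, start=1):
        z = lam * xi + (1 - lam) * z
        half = L * sigma * np.sqrt(
            (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * i)))
        out = abs(z - target) > half
        rows.append((i, z, target - half, target + half, out))
    return rows

# Hypothetical readings with a small upward drift at the end
x = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.3, 10.4, 10.4, 10.5]
for i, z, lcl, ucl, out in ewma_chart(x, target=10.0, sigma=0.2):
    print(f"{i:2d}: z = {z:.3f}, limits = [{lcl:.3f}, {ucl:.3f}]"
          + ("  OUT" if out else ""))
```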

2 Marks
1. Define control chart?
Control charts are powerful tools used by many industries to monitor the quality of processes and detect special causes of variation in them. The S² control chart is one of the most widely used tools to monitor whether the variance of some quality characteristic (X), which is assumed to be normally distributed, has changed from an in-control (IC) to an out-of-control (OOC) situation. The main objective of this chart is to detect increases of any magnitude in the process variance as soon as possible. In this context, if the actual process variance is larger than its in-control level, the process is considered to be in an out-of-control state.
2. Define EWMA?
An exponentially weighted moving average (EWMA) chart is a type of control chart used to
monitor small shifts in the process mean. It weights observations in geometrically decreasing order
so that the most recent observations contribute highly while the oldest observations contribute very
little.

3. Define a multivariate chart?
A multivariate chart is a quality control chart that monitors the variation in multiple product
attributes simultaneously. It's used to detect shifts in the mean or covariance of several related
parameters.
4. What Is the Primary Objective of Trend Analysis in Quality Management?

The ultimate purpose of trend analysis in quality management is to identify, evaluate, and
eliminate any issue that is having a negative effect on product quality. Quality trend analysis is
a particularly useful monitoring mechanism when changes are made to processes, especially those
related to manufacturing.

5. What is the use of the MA/MR chart?


The MA/MR chart is useful when there is only one data point at a time, or when the data is not
normally distributed. It's similar to the Xbar-R chart, but the subgroups are formed differently and
the out of control tests are different.

6. Define control limits?

These are typically set at +/- 3 standard deviations, but can be adjusted to +/- 2 standard deviations. The width of the control limits is determined by the size of sigma and the number of standard deviations specified.

7. Define X chart with a linear trend?

An X-bar chart is a frequently used type of quality control chart in which the y-axis tracks the subgroup averages of the tested attribute over time; a linear trend appears when the process level drifts steadily.

8. Define warning limits?
These are set at +2s and –2s, where s is the standard deviation for the control chart.

9. Define modified control limits?

These are used when a process's inherent variability is small compared to the designer's specified tolerance, so it is not necessary to keep the mean rigidly stable.

10. What are the benefits of multivariate charts?

• Fewer charts: Multivariate charts can monitor many tool process parameters with fewer charts.
• 3-D measurements: Multivariate charts are useful for monitoring 3-D measurements.
• Process variability: Multivariate charts can monitor process variability when multivariate data is
collected in subgroups.
Unit-IV

Statistical Process Control

Process stability
Process stability is a key concept in statistical process control (SPC) and refers to a process's ability
to maintain consistency and predictability over time:

• Definition
A process is considered stable when it consistently operates within defined control limits and
produces outputs that fall within the process width.
• Variations
A stable process only exhibits common cause variations, which are small, random fluctuations
that are expected and inherent to the system.
• Causes of instability
Processes can become unstable over time due to variations caused by external disturbances or
inherent process variations.
• Control limits
The control limits for a process are defined as the Upper Control Limit (UCL) and the Lower
Control Limit (LCL).
• Histograms
Histograms can be used to visualize data and identify outliers in a process.
• Control charts
Control charts can be created by plotting data after calculating the mean, standard deviation, and
control limits.

Process Stability - refers to the consistency of the process to stay within the Control Limits. If the
process distribution remains consistent over time, i.e. the outputs fall within the range (Process
Width), then the process is said to be stable or in control.
Process

A process is a set of sequential activities that convert the input to the desired output.

Stability

Process stability is the ability of the process to perform within a predictable limit. We can also
state stability of a process refers to its predictability.

Why processes become unstable?

Processes tend to become unstable over time due to variations. The variations occur either because
of the inherent nature of the process or some external or forced changes, called disturbances.

When the process is operating at a stable state, we can expect the process to produce comparable
results over a period. Thus, even though the process has variations, it would repeat similar
variations in the future under a stable state.
And that is how, the process behaves predictably.

Types of Variations

1. Common Cause Variations: We call the inherent process variations as common cause
variations.
2. Assignable Cause Variations: When the process suffers from an external disturbance or a
significant drift, we call these disturbances Assignable Cause Variations or Special Cause
Variations.
When a process exhibits an assignable cause, meaning some disturbance, it becomes unpredictable.
Because the disturbances are unpredictable or non-repeatable to some extent.

Stability – A Stable Process

We categorise a process without any special causes as a stable process and a process with one or more special causes as an unstable process.

Usually, continuous processes consist of zones of common and special causes. The zones without any special cause are stable zones; unstable zones are the periods in which the process exhibits a special cause.

Control Charts for Stability – Stable and Unstable Zones


Why is Process Stability Important?

1. We cannot predict the process performance when it has a special cause variation.
2. The presence of a special cause indicates significant drift or disturbance in the process.
Hence, it is important to address and monitor the stability.
3. Its presence also distorts the sample data properties and in turn, leads to inaccurate estimation
of population behaviour.
4. We cannot consider an unstable process for improvement under Six Sigma DMAIC, SPC, or any other improvement initiative.

Managing Special Causes

If you are the process owner, you can start plotting the observations on a control chart. If the
control limits are comfortably within the specification limits, then the special cause indicates the
need for adjustment in the process.

What should you do if you are considering the process for improvement and the sample data shows a special cause? It may indicate that the sample data is not truly representative of the process behaviour.

1. You may try to increase the sample size.


2. Consider extending sample collection duration.
3. If sample number is a constraint, then plan for extended sample duration with reduced
sampling frequency.
4. Only when you have one or two special cause variations, and you are certain that you know
the root cause of the special cause variation, and you have taken necessary steps to prevent its
recurrence, you may omit the data point from the dataset. You must be cautious as well as
confident from the process side while considering this option. Justification and approval from
Black Belt and Champions will ensure you are not misrepresenting the process by omitting
that point.

Process Capability Analysis Using a Histogram or Probability Plots and Control Chart
Process capability analysis (PCA) uses a variety of tools to evaluate the consistency of a
process's output against its specifications, including histograms, probability plots, and control
charts:

• Histogram
A histogram can be used to illustrate the output of a process capability measurement.
• Probability plot
A probability plot compares a variable's ordered values to the percentiles of a theoretical
distribution, such as the normal distribution. If the data distribution matches the theoretical
distribution, the points on the plot will form a linear pattern.
• Control chart
Control charts can be used to monitor and evaluate process quality and capability differences.
Other tools used in PCA include: design of experiments, rate of nonconformities, and capability
indices.
PCA is used in product and process design, supply chain management, production planning, and
maintenance.
The histogram, along with the sample mean x̄ and standard deviation s, enables us to assess process capability by looking first at the shape of the histogram. If it reasonably approximates a normal distribution, then x̄ ± 3s can be used when assessing process capability.

Process Capability Analysis

Introduction
• A process capability analysis relates the inherent variability in a process to specifications or requirements for the product produced by that process.
• There are many ways of analyzing the capability of a process, the most common being: (1) histograms and probability plots, (2) the control chart, (3) process capability ratios, and (4) designed experiments.
• Process capability measures the uniformity of a process. Process variability (variance) and systematic deviations from a target value (bias) are the primary sources of nonuniformity.
• We will study the two major components of process variability:
– Short-term variability, which reflects the inherent random variability at a point in time.
– Long-term variability, which reflects the variability over time.
• It is common to take a 6σ spread as a measure of process capability (where σ comes from the distribution of the product quality characteristic of interest).
• When the distribution is assumed to be normal N(µ, σ), we define the natural tolerance limits to be µ ± 3σ. In this case, 99.73% of process output will be within the tolerance limits.
• One way to estimate process capability is to find a probability distribution that best describes data from the process (e.g., normal, Weibull, gamma, lognormal). Once an acceptable distribution has been found, a process capability analysis is performed by comparing the properties of the fitted distribution to the specification limits.
• When the researcher observes the process directly and can control or monitor the data-collection procedure, the study is a true process capability study, because by controlling data collection and knowing the time sequence of the data, inferences can be made about the stability of the process over time.
• Major applications of data from a process capability analysis are:
1. Predicting how well the process will meet tolerances.
2. Assisting, when necessary, in adjusting a process.
3. Reducing the variability in a manufacturing process.
4. Specifying performance requirements for new equipment.
5. Selecting between competing suppliers.

Using a Histogram or Probability Plots
• One advantage of using a histogram is the immediate visual impression of process performance; it can also indicate a reason for poor performance (off-target, outliers, skewness, bimodality, etc.).
• For a histogram to be moderately stable so that it can reliably estimate process capability, Montgomery recommends that at least 100 observations be taken from the process.
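To connect the histogram view to capability ratios, here is a minimal sketch computing Cp and Cpk under a normality assumption. The data are simulated and the 72/78 specification limits are illustrative (matching the probability plot example later in this section); treat the helper name as an assumption, not a fixed API.

```python
import numpy as np

def capability_indices(data, lsl, usl):
    """Basic capability ratios from a normal view of the data."""
    mu, s = np.mean(data), np.std(data, ddof=1)
    cp = (usl - lsl) / (6 * s)                  # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * s)     # accounts for centering
    return mu, s, cp, cpk

# Simulated measurements against specification limits 72 and 78;
# at least 100 observations, per the recommendation above
rng = np.random.default_rng(1)
data = rng.normal(loc=74.5, scale=1.0, size=100)
mu, s, cp, cpk = capability_indices(data, lsl=72, usl=78)
print(f"mean = {mu:.2f}, s = {s:.2f}, Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```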

Process Capability Analysis Using a Histogram, Probability Plot and Control Chart

Capability Histogram: A histogram of the data is displayed to help visualise its distribution. The LCL, target and UCL values are indicated, as well as the mean, median, mode and quartiles.

Up to six distributions can also be fitted and displayed on the histogram. The first three of these
are reserved by the program to display normal curves with overall and pooled (if data has
subgroups) standard deviations and with deviation around the target (if a target has been
specified, see definition of above). The remaining three distributions are set by default to
Weibull, lognormal and gamma distributions, but these can be changed by
selecting Edit → Distributions dialogue from the graphics menu, after clicking on the [Opt]
button situated to the left of the Capability Histogram check box on the Output Options
Dialogue.

Normal Probability Plot: Original Data: A Normal Probability Plot of the original data is
displayed together with Anderson-Darling Test results in the legend. You can compare this
graph with the next one to visualise the improvement provided by the transformation – if there
is one.

Normal Probability Plot: Transformed Data: This option is enabled if a transformation has been
selected in the previous dialogue. A Normal Probability Plot of the transformed data is
displayed together with Anderson-Darling Test results in the legend. You can compare this
graph with the previous one to visualise the improvement provided by the transformation.

This probability plot provides a process performance statement relative to specification limits of
72 and 78. There is no reason to believe that these data do not follow a normal distribution since
the P-value is 0.522 (upper right table of plot), which is greater than 0.05, a commonly-used
value in hypothesis testing for determining statistical significance. From this plot, a process
capability/performance metric estimate would be 17.4% non-conformance [(100-96.793)
+14.192 = 17.399], which is consistent with a visual estimate of the area under the curve in
Figure 1 beyond the specification limits.
An overall summary of this process and its performance can be presented using a 30,000-foot-level charting format, as shown in Figure 3. The bottom of a 30,000-foot-level report-out includes a quantification of whether the process has a recent region of stability (a predictable process) or not, and a prediction statement if the process is predictable.
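The non-conformance arithmetic above can be sketched with the normal CDF. In the Python fragment below, the specification limits come from the example, but the fitted mean and standard deviation are assumptions (they are not given in the text) chosen only to roughly reproduce the quoted 17.4% figure:

from scipy.stats import norm

LSL, USL = 72.0, 78.0   # specification limits from the example above
mu, sigma = 74.2, 2.05  # hypothetical fitted parameters (not given in the text)

pct_below = norm.cdf(LSL, mu, sigma) * 100        # % predicted below the LSL
pct_above = (1 - norm.cdf(USL, mu, sigma)) * 100  # % predicted above the USL
print(f"Estimated non-conformance: {pct_below + pct_above:.1f}%")  # ~17.4%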

Gauge capability studies

Gauge capability studies are a vital part of Statistical Process Control (SPC) because they help
determine the extent of variability in a measurement system. The goal is to understand and
quantify the sources of variability in the measurement process so that the process and product
quality can be improved.

Here are some things to consider when conducting a gauge capability study:

• Measurement system adequacy
The effectiveness of SPC tools depends on the adequacy of the measurement system.
• Control charts
Control charts can help determine if a process is in a state of statistical control.
• Cg and Cgk
These terms are used to assess the capability of measurement devices in Type 1 gage studies. A larger Cg value is better, as it indicates that more of the study variation fits into the tolerance band (see the sketch after this list).
• Scan design parameters
For high-density data, scan design parameters can be a source of reproducibility variance.
• Repeatability and reproducibility
The percent tolerance consumed by repeatability and reproducibility should be evaluated to determine its effect on potential decision errors.
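A minimal sketch of the Cg/Cgk calculation for a Type 1 gage study is given below. The formulas follow one common convention (Cg compares K = 20% of the tolerance with the 6s study variation; Cgk additionally penalises bias against the calibrated reference); the measurement data and tolerance are hypothetical:

import numpy as np

def type1_gage(measurements, reference, tol, k=20):
    # Type 1 gage study indices under a common convention:
    # Cg compares K% of the tolerance with the 6s study variation;
    # Cgk also subtracts the bias against the calibrated reference.
    x = np.asarray(measurements, dtype=float)
    s = x.std(ddof=1)
    bias = abs(x.mean() - reference)
    cg = (k / 100 * tol) / (6 * s)
    cgk = (k / 200 * tol - bias) / (3 * s)
    return cg, cgk

# Hypothetical data: 25 repeat measurements of a calibrated reference part
rng = np.random.default_rng(7)
meas = rng.normal(10.002, 0.001, size=25)
cg, cgk = type1_gage(meas, reference=10.000, tol=0.05)
print(f"Cg = {cg:.2f}, Cgk = {cgk:.2f}")   # >= 1.33 is a common acceptance criterion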
A capability analysis can include multiple characteristics. The data belonging to each characteristic build the foundation for every procedure that is applied to the corresponding characteristic. Procedures 2 and 3 have procedure 1 as a prerequisite. The capability analysis results are assigned to the single gauge; the worst result of all gauges of a type is assigned to the gauge type. This makes it easy to find the appropriate gauge for a specific measuring job.

Important Features at a Glance


• Overview of every capability analysis that has been performed for a gauge or gauge type
• Multiple procedures per characteristic and work piece
• Optional review after each analysis of a characteristic
• Creation of appropriate forms for each procedure
• Total result and use decision are part of the overview.
• Every analysis is documented in the gauge's history.
The gauge capability analysis considers:
• Measurement of serial production parts
• Usage of multiple workers
• Work places with integrated measuring devices
• Environmental conditions of the work place
• Automated measuring devices
Defaults for capability analysis
• For each gauge type, a time limit can be specified for repeating the capability analysis. It is up to the user which specific gauge of this type will be taken as the subject of the analysis.
• In the master data of a gauge, defaults can be specified for performing a capability analysis after using the gauge for measuring. Possible values are 'not necessary', 'recommended' or 'necessary after purchase and repair'.
Characteristic-related data
• Work piece including part no. and name
• Value of the characteristic including unit and tolerances
• Process dispersion (procedure 1)
• Resolution
• Calculation method (4s, 6s or process-related)
• Measuring conditions such as temperature, air pressure, and humidity
• Reference standard as part no. with name or gauge
• Actual value of the standard including uncertainty
Procedure 1
Determining the capability as Cgm and Cgmk
• Repeated measurements of (at least) 50 values using a calibrated reference or serial part
• Measurement at the location of usage
• Specification of the worker
• Absolute and relative measurements
• Calculation of average, standard deviation, and capability indices Cgm and Cgmk
• Use decision based on minimal values
• Specification of the decider including the cost centre
• Comments
Procedure 2
Total range of dispersion in multi-worker situations
• No limitation regarding the number of workers (usually 2)
• No limitation regarding the number of parts (usually 10)
• Calculation of the percentage range of dispersion
• Support of the ARM and differences based method
• Use decisions: capable (0-10%), conditionally capable (10-30%), not capable (>30%)
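A rough Python sketch of a Procedure-2-style evaluation using the average and range method (ARM) follows. The K1/K2 constants are the commonly published AIAG values for 2 trials and 2 appraisers, and the data and tolerance are simulated, so treat this as illustrative only:

import numpy as np

# Hypothetical study: 2 workers measure the same 10 parts twice each.
rng = np.random.default_rng(3)
true_parts = rng.normal(50.0, 0.10, size=10)
data = true_parts[None, :, None] + rng.normal(0, 0.02, size=(2, 10, 2))  # (workers, parts, trials)

K1 = 0.8862   # AIAG constant for 2 trials
K2 = 0.7071   # AIAG constant for 2 appraisers

r_bar = np.abs(data[:, :, 0] - data[:, :, 1]).mean()   # average trial range
ev = r_bar * K1                                        # repeatability (equipment variation)

x_diff = np.ptp(data.mean(axis=(1, 2)))                # range of the worker averages
n_parts, n_trials = data.shape[1], data.shape[2]
av = np.sqrt(max((x_diff * K2) ** 2 - ev**2 / (n_parts * n_trials), 0.0))  # reproducibility

grr = np.hypot(ev, av)                                 # combined gauge R&R (one sigma)
tolerance = 0.60                                       # hypothetical USL - LSL
pct_grr = 100 * 6 * grr / tolerance                    # percentage range of dispersion

decision = ("capable" if pct_grr <= 10
            else "conditionally capable" if pct_grr <= 30
            else "not capable")
print(f"%GRR = {pct_grr:.1f}% -> {decision}")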
Procedure 3
Total range of dispersion without user influence
• Absolute and relative measurements
• No limitation regarding the number of parts (usually 25)
• Determination of the range of dispersion for all measuring series
• Support of the ARM and differences based method
• Use decision as in procedure 2
Procedure 4
Determination of linearity
• Important for measuring ranges with characteristic curves
• Variable number of sampling points over the entire workspace
• Measuring using reference standards
• Multiple measurements per sampling point
• Graphical representation of the result
• Single values and average per sampling point
• Over a measuring range, a curve progression for a 95% measuring range
• Point-of-origin check in relation to the confidence interval
Procedure 5
Measuring stability
• Long term monitoring of usage
• Multiple measuring of each part (standard)
• Measuring using a common inspection order from production
• Report containing any data related to measuring stability
Procedure 6
Qualitative inspections of parts that have been measured exactly
• Specification of the number of inspectors
• Selection of parts from production to examine the entire tolerance range
• Exact measurement of parts to determine the number of parts within and outside of the
tolerance limits
• Following the measurement a qualitative inspection of each part and a statement if the part was
within or outside of the specific tolerance limit
• Statistical value %GRR depending on the number of conformities
• Use decision as in procedure 2
Import interface
• Procedures 1, 2, and 3 support importing from the Q-DAS interface

Setting Specification Limits


Specification limits can be set in a variety of ways, including:

• JMP
You can set specification limits in JMP by:
1. Right-clicking the column you want to set limits for
2. Selecting Column Properties > Spec Limits
3. Entering values for the lower and upper specification limits, or a target value
4. Selecting the Show as Graph Reference Lines check box
5. Clicking OK
• Database Manager
You can set specification limits in Database Manager by:
1. Double-clicking the Specification Limits table
2. Clicking Options > Add in the Database Manager menu bar
3. Right-clicking in the table and clicking Add
4. Configuring the generation information for the specification limit
Specification limits are boundaries that indicate where a product must perform. They are based
on customer requirements, process capability data, and relevant industry standards or
regulations.
Specification limits can be one-sided or two-sided. A one-sided specification sets a limit at only one point for the product to be accepted or rejected. For example, for defects in a product there can be only an upper specification limit.

Specification limits are the targets set for a product or process by the customer or market performance; often, the Voice of the Customer is the input for these criteria. In other words, a specification limit is the intended result of the metric that we measure.

Customers set the limit (upper and lower) on the product characteristics that define where the
product works and where it does not. If the product falls outside of these limits, assume that the
customer will reject the product. Specification limits are related to product design. Hence, set
these limits in the design phase of the product life cycle.

The Two Types

Upper Specification Limit: The highest limit a customer would accept.


Example: A customer would wait in line at a drive-through for 7 minutes before being
dissatisfied.

Lower Specification Limit: The lowest limit a customer would accept.

Example: A customer would expect no less than 100 M&Ms to be in a packet before being
dissatisfied.

Why Specification Limits?

The customer defines the limits at which the losses due to variation equal the product's benefit. Generally, these values are drawn on the histogram: if the product falls between the USL and LSL, then the product meets the customer's requirement. Together with process data, the specification limits determine the process capability and the sigma value.

2 marks

1. Define Process stability?

Process stability is a key concept in statistical process control (SPC); it refers to a process's ability to maintain consistency and predictability over time.

2. Define Control limits?

The control limits for a process are defined as the Upper Control Limit (UCL) and the Lower Control Limit (LCL).

3. Define Histograms?

Histograms can be used to visualize data and identify outliers in a process.

4. Define Control charts?

Control charts can be created by plotting data after calculating the mean, standard deviation, and control limits.

5. Define Process Stability?


It refers to the consistency of the process to stay within the Control Limits. If the process
distribution remains consistent over time, i.e. the outputs fall within the range (Process Width),
then the process is said to be stable or in control.

6. What are the two types of Specification Limit?

Upper Specification Limit: The highest limit a customer would accept.

Example: A customer would wait in line at a drive-through for 7 minutes before being
dissatisfied.

Lower Specification Limit: The lowest limit a customer would accept.

Example: A customer would expect no less than 100 M&Ms to be in a packet before being
dissatisfied.

7. What are the types of Variations?

Common Cause Variations: We call the inherent process variations common cause variations.

Assignable Cause Variations: When the process suffers from an external disturbance or a significant drift, we call these disturbances Assignable Cause Variations or Special Cause Variations.

8. Define Process?

A process is a set of sequential activities that convert the input to the desired output.

9. Define Stability?

Process stability is the ability of the process to perform within predictable limits. We can also state that the stability of a process refers to its predictability.

10. Why is Process Stability Important?


We cannot predict the process performance when it has a special cause variation. The presence of a special cause indicates a significant drift or disturbance in the process. Hence, it is important to address special causes and monitor stability. Their presence also distorts the sample data properties and, in turn, leads to inaccurate estimation of population behaviour.

We cannot consider an unstable process for improvement under Six Sigma DMAIC, SPC, or any other initiative.
Unit-V

Acceptance Sampling fundamentals


Acceptance sampling is a quality control technique that uses statistical sampling to determine
whether to accept or reject a batch of products:

• How it works
A random sample of products is selected and tested to determine the quality of the entire batch.
• When it's used
It's often used in industries with mass production and set procedures, when testing every item
would be too costly, time-consuming, or could damage the products.
• How it's used
It's usually done as products leave the factory, or sometimes even within the factory.
• When it was developed
It was developed during World War II by Harold Dodge, a veteran of the Bell Laboratories
quality assurance department, and was originally used by the U.S. military to test bullets.
• Types of acceptance sampling
There are two main types of acceptance sampling: sampling by attributes and sampling by
variables. Sampling by attributes is more common, while sampling by variables is more
complicated.
• Risks involved
There are two types of risk that must be considered: the probability of rejecting a good lot
(producer's risk) and the probability of accepting a bad lot (consumer's risk).
• Visual representation
An Operating Characteristics (OC) Curve can be used to visually evaluate a sampling plan and
understand the probability of accepting a lot of varying quality levels.

Acceptance sampling is a statistical measure used in quality control. It allows a company to determine the quality of a batch of products by selecting a specified number for testing. The quality of this designated sample will be viewed as the quality level for the entire group of products.

Acceptance sampling is a quality control technique that uses statistical methods to determine if a
batch of items should be accepted or rejected. It involves:

• Selecting a sample: A random sample of items is selected from a batch or lot.


• Inspecting the sample: The quality of the sample is inspected.
• Making a decision: Based on the findings, the entire batch is either accepted or rejected. The
assumption is that the quality of the sample reflects the quality of the entire batch.
Some fundamentals of acceptance sampling include:

• Acceptable quality level (AQL)


The management sets an AQL, which is the acceptable proportion of a lot that can be
defective. For example, an AQL could be 2 defects in a lot of 200, or 1%.
• Average total inspection (ATI)
The ATI is the average number of units that will be inspected for a given incoming quality level and probability of acceptance (a worked sketch follows this list).
• Probability
Probability is a key factor in acceptance sampling, but it's not the only factor. For example, if a
company tests 10 units out of a million and finds one defect, it might assume that 100,000 of the
million are defective, but this might be inaccurate.
• Sampling methods
There are different sampling methods, including single sampling and double sampling. In double sampling, a second sample is taken only if the results of the first sample are inconclusive, neither good enough to accept nor bad enough to reject.
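The ATI mentioned above has a simple closed form for a single sampling plan with rectifying inspection (rejected lots are 100% inspected): ATI = n + (1 − Pa)(N − n). A short sketch, with a hypothetical plan, follows:

from scipy.stats import binom

def ati(N, n, c, p):
    # Average total inspection under rectifying inspection:
    # accepted lots cost n inspections; rejected lots cost N.
    pa = binom.cdf(c, n, p)          # probability of accepting the lot
    return n + (1 - pa) * (N - n)

# Hypothetical plan: lot of 2000, sample of 80, acceptance number 2
for p in (0.005, 0.01, 0.02, 0.05):
    print(f"p = {p:.3f}: ATI = {ati(2000, 80, 2, p):.0f} units")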

Definition:
Acceptance Sampling is a technique which deals with acceptance or rejection of a lot or process
based upon the results obtained from a random sample or samples taken randomly from the lot. If
the items are judged good or bad by inspection of presence or absence of some attribute
characteristics, the inspection is called attribute inspection.
In this case the quality of a lot is defined by the sample fraction nonconforming (fraction defective). Acceptance sampling is preferred to 100% inspection in the following cases:
1. When the cost of inspection is high and the loss arising from passing defective items is not too great.
2. When the inspection of units is costly or destructive.
3. To maintain good quality: if lots are rejected often, the producer is forced to improve the production process; hence acceptance sampling indirectly improves the quality of the products.
4. To give protection to the consumer against the acceptance of bad lots, and to the producer against the rejection of good lots. The consumer is given long-run protection against poor product quality. It minimizes the cost of inspection and administration, and it provides a basis for action with regard to the production of units in the future.
Comparison between 100% inspection and sampling inspection

100% inspection:
1. 100% inspection is not efficient.
2. It is subject to human errors arising out of fatigue and monotony; these errors cannot be quantified.
3. It is not practicable for mass-production components.
4. It is not feasible where destructive testing is involved.
5. It is costly, time consuming and laborious.
6. It does not develop an attitude of quality or pressure for quality.

Sampling inspection:
1. Because of the low quantities involved, inspection will be efficient.
2. It is subject to sampling errors, which can be quantified and controlled.
3. It is practicable for mass-production components.
4. It is the only alternative where destructive testing is involved.
5. It is cheap, quick and easy.
6. It develops pressure for quality improvement by rejection of entire lots on the basis of findings in samples.

Advantages and disadvantages of sampling plan


Advantages of Acceptance Sampling
1. Less expensive, because less inspection is needed compared to inspecting the entire lot.
2. Less handling of the product and hence reduced damage.
3. Applicable to destructive testing.
4. Fewer personnel are involved in inspection activities.
5. Reduces the amount of inspection error.
6. The rejection of a lot motivates the vendor to improve quality.

Disadvantages of Acceptance Sampling

1. Risk of accepting bad lots and rejecting good lots.
2. Less information about the product.
3. Requires planning and formulation of a sampling plan, compared with 100% inspection.
OC curve
An operating characteristic (OC) curve is a graph that shows the probability of accepting a batch
of a product or process based on its quality level. It plots the probability of acceptance on the y-
axis and the percentage of defective units on the x-axis.

OC curves are used in many industries, including manufacturing and food safety, to:

• Determine acceptable quality levels (AQLs)


• Plan sampling for inspecting raw materials or finished products
• Perform microbiological testing and inspections
Here are some things to know about OC curves:
• The shape of an OC curve is mainly determined by the sample size and acceptance number.
• The acceptance number is the largest number of nonconforming units that can be found in a sample
without rejecting the lot.
• The OC curve quantifies the risk for both the producer and the consumer.
• It's important to examine the OC curve before using a sampling plan.
The graphs help answer important questions like:

• What is the probability that high-quality lots are approved?
• What is the probability that poor-quality lots are rejected?
• How does the plan perform at intermediate quality levels?

Quality teams can examine a sampling method's ability to differentiate between good and poor batches by studying the OC curve.

This insight is important when deciding whether to accept or reject shipments, as well as when tuning sampling plans to achieve desired standards and balanced risks.

OC curves map the probability of acceptance on the y-axis against the percentage of defective units on the x-axis. Their position and shape deliver valuable clues about performance in various scenarios.

These visualizations provide objective perspectives that support informed management of quality acceptance approaches.

Understanding the Operating Characteristic Curve (OC Curve)

The operating characteristic (OC) curve is a graphical representation that depicts the probability
of accepting or rejecting a lot based on the sampling plan and the quality level of the product or
process. It is a fundamental tool in acceptance sampling and quality control.

The OC curve plots the probability of acceptance on the y-axis against the quality level or percent
defective on the x-axis.

The quality level represents the fraction or percentage of defective items in the lot or process
output.

The shape of the OC curve provides valuable insights into the performance of the sampling plan.

An ideal OC curve has a sharp, vertical drop from a high probability of acceptance to a low
probability near the acceptable quality level (AQL). This indicates that the sampling plan can
effectively discriminate between good and bad quality lots.

There are two important points on the Operating Characteristic curve:

1. Producer’s Risk (α): This is the probability of rejecting a lot when the quality level is equal
to or better than the AQL. It is represented by the point where the OC curve intersects the
acceptable quality level line.
2. Consumer’s Risk (β): This is the probability of accepting a lot when the quality level is
equal to or worse than the rejectable quality level (RQL). It is represented by the point
where the OC curve intersects the rejectable quality level line.
The OC curve helps balance the risks between the producer (supplier) and the consumer
(customer). A good sampling plan aims to minimize both the producer’s and consumer’s risks
while keeping the sample size and inspection costs reasonable.

Constructing the Operating Characteristic Curve (OC Curve)

The operating characteristic (OC) curve is a graphical representation that displays the probability
of accepting a lot as the percent defective or nonconforming units increase.

Constructing an accurate OC curve is essential for the proper implementation of acceptance sampling plans.

Advantages of Operating Characteristic (OC) Curve

• OC curves provide a visual representation of the performance of a sampling plan, allowing for easy interpretation and comparison of different plans.
• They quantify the producer’s risk (alpha) and consumer’s risk (beta) associated with a
sampling plan.
• OC curves account for the quality level or percent defective of the lot being sampled.
• They can be used to select an appropriate sampling plan that balances producer’s and
consumer’s risks.
• OC curves help determine the sample size needed to meet desired performance criteria.

Limitations of OC Curves

• Constructing OC curves requires knowledge of the underlying distribution (binomial, hypergeometric, Poisson) and associated parameters.
• The OC curve is specific to the sampling plan used; a new curve must be generated for different
sampling plans.
• For very small sample sizes, the discrete nature of the OC curve may be difficult to interpret
visually.
• OC curves assume stable process conditions; they do not account for process shifts or drifts
over time.
• Applying OC curves requires an understanding of statistical concepts like acceptable quality
level (AQL).

Determining Sample Size and Acceptance Number

The first step is determining the appropriate sample size and acceptance number for the sampling plan being used (single, double, multiple, etc.).
The sample size (n) is the number of units selected from the lot for inspection. The acceptance
number (c) is the maximum allowable number of defective units in the sample. These are chosen
based on the desired producer’s risk (α) and consumer’s risk (β) levels.

Calculating Probabilities

Once n and c are set, the probability of acceptance Pa(p) can be calculated for various levels of
percent defective (p) using the appropriate probability distribution model:

• Binomial distribution for attribute sampling from large lots
• Poisson distribution as an approximation when the sample size is large and the percent defective is small
• Hypergeometric distribution for sampling without replacement from small, finite lots

This gives a set of coordinates (p, Pa(p)) that can be plotted to form the OC curve.
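For attribute single sampling, the coordinates can be generated directly from the binomial model, Pa(p) = P(d ≤ c). A minimal sketch with a hypothetical n = 80, c = 2 plan (plotting omitted):

from scipy.stats import binom

n, c = 80, 2   # hypothetical single sampling plan
for p in (0.005, 0.01, 0.02, 0.03, 0.05, 0.08):
    pa = binom.cdf(c, n, p)    # P(d <= c) = probability of acceptance
    print(f"p = {p:.3f}  Pa = {pa:.3f}")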

Plotting the Operating Characteristic Curve

The x-axis represents the percent defective or nonconforming units (p). The y-axis represents the
probability of acceptance Pa(p).

By plotting the calculated probability points, a smooth OC curve can be drawn connecting the
points.

The curve has a reverse S-shape: Pa(p) approaches 1 as p approaches 0, passes through 1 − α (one minus the producer's risk) near the AQL, and falls toward the consumer's risk β as p increases past the RQL.

Interpreting the Curve

The shape and position of the OC curve convey important information about the discriminatory
power of the sampling plan between good and bad lots.

Ideally, the curve should have a relatively steep slope transitioning from the producer’s risk to the
consumer’s risk over a narrow range of percent defective values.

This allows reasonably good discrimination between acceptable and unacceptable quality levels.

Sampling Plans for attributes

The operating characteristic (OC) curve is closely tied to the sampling plan used for inspection.
Different sampling plans will produce different OC curves. The three main types of sampling plans
are:
Single Sampling Plan

In a single sampling plan, a single random sample of n units is taken from the lot and inspected.
The number of defective units (d) in the sample is counted.

If d is less than or equal to an acceptance number c, the entire lot is accepted. Otherwise, it is
rejected.

The single sampling plan is defined by the sample size n and acceptance number c. The OC
curve shows the probability of acceptance for different levels of percent defective in the lot.
Double Sampling Plan

A double sampling plan involves two potential samples. An initial sample of n1 units is taken. If the number of defects d1 is less than or equal to an acceptance number c1, the lot is accepted. If d1 is greater than or equal to a rejection number r1, the lot is rejected. Otherwise, a second sample of n2 units is taken.

The total number of defects d1 + d2 is then compared to an acceptance number c2. If it is less than
or equal to c2, the lot is accepted, otherwise it is rejected.

The double sampling plan requires more parameters: n1, c1, r1, n2, c2. The OC curve shows the
probability of acceptance for different quality levels.
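The two-stage decision logic just described can be sketched as follows; the plan parameters (n1 = 50, c1 = 1, r1 = 4, n2 = 100, c2 = 4) are hypothetical:

def first_stage(d1, c1, r1):
    # Decision after the first sample of n1 units
    if d1 <= c1:
        return "accept"
    if d1 >= r1:
        return "reject"
    return "take second sample"

def second_stage(d1, d2, c2):
    # Combined decision after the second sample of n2 units
    return "accept" if d1 + d2 <= c2 else "reject"

print(first_stage(d1=1, c1=1, r1=4))     # accept
print(first_stage(d1=5, c1=1, r1=4))     # reject
print(first_stage(d1=2, c1=1, r1=4))     # take second sample
print(second_stage(d1=2, d2=2, c2=4))    # accept (2 + 2 <= 4)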

Multiple Sampling Plan

A multiple-sampling plan extends the double-sampling concept to allow for more than two samples
before deciding on lot acceptance or rejection. It provides additional discrimination for borderline
cases.
The OC curve is used to evaluate the discriminatory power and other characteristics of these
different sampling plans for a given situation. Appropriate sampling plans can be selected by
studying their OC curves.

Sequential Sampling Plan

A sequential sampling plan extends the multiple-sampling idea to its limit: items are sampled one at a time, and after each sample a decision is made to accept, reject, or continue sampling:
• No predetermined sample size
Unlike multiple-sample plans, there is no fixed number of samples to be taken.
• Decision-making
After each sample, a decision is made to accept, reject, or continue sampling.
• Test until confident
The process continues until the inspector is confident in the result.
• Cost reduction
Sequential sampling can reduce sampling costs by reducing the number of observations needed.
Sequential sampling plans are designed to fit a desired operating characteristic curve. The probability of acceptance changes as the product quality changes, which is shown by the Operating Characteristic (OC) curve.
The efficiency of a sequential sampling scheme is measured by the average sample number
(ASN) required for a given Type I and Type II set of errors.
The operating characteristic (OC) curve depicts the discriminatory power of an acceptance
sampling plan. The OC curve plots the probabilities of accepting a lot versus the fraction
defective. When the OC curve is plotted, the sampling risks are obvious.
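One standard way to design such a plan is Wald's sequential probability ratio test for attributes, which reduces to two parallel decision lines in (n, d) space. The sketch below derives the lines from hypothetical quality levels and risks (AQL p1 = 1% with α = 5%, RQL p2 = 5% with β = 10%):

import math

def sprt_lines(p1, p2, alpha, beta):
    # Wald sequential plan: after n items with d defects,
    # accept if d <= s*n - h1, reject if d >= s*n + h2, else continue.
    g1 = math.log(p2 / p1)
    g2 = math.log((1 - p1) / (1 - p2))
    s = g2 / (g1 + g2)                             # common slope of both lines
    h1 = math.log((1 - alpha) / beta) / (g1 + g2)  # accept-line intercept
    h2 = math.log((1 - beta) / alpha) / (g1 + g2)  # reject-line intercept
    return s, h1, h2

s, h1, h2 = sprt_lines(p1=0.01, p2=0.05, alpha=0.05, beta=0.10)
n, d = 120, 1   # state after 120 items with 1 defect found
if d <= s * n - h1:
    print("accept the lot")
elif d >= s * n + h2:
    print("reject the lot")
else:
    print("continue sampling")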

While a sampling strategy may satisfy the criteria used by quality assurance managers to determine whether a particular batch meets the recommended quality specifications, it is often difficult to assess how well the strategy distinguishes between acceptable and defective units at intermediate quality levels. As a result, sampling strategies are typically represented graphically using OC curves.
The operating characteristic (OC) curve is a graphical illustration that indicates the probability of acceptance of the production batch versus the percentage of defective units: the y-axis plots the probability of acceptance, while the x-axis represents the percentage of defective units.

Sampling plans for variables


A variable sampling plan is a statistical technique used to make pass or fail decisions for quality
characteristics that are measured on a continuous scale. It's a type of acceptance sampling plan
that uses the actual measurements of sample products to make decisions.

Here are some characteristics of variable sampling plans:

• Uses the coefficient of variation


The plan is based on the coefficient of variation (CV), which measures how spread out the data
is relative to the central location.
• Requires a statistical model
The plan requires knowledge of the statistical model.
• Assumes normal distribution
The plan assumes that the underlying data fits a normal distribution. A normality test should be
performed before using the plan.
• Requires fewer samples
Variable sampling plans require fewer samples than attribute plans because more information
is available in the measurements.
• More complex
Variable sampling plans are more complex and require more skill than attribute plans.
Here are some examples of variable sampling plans:

• Measuring the diameter of ball bearings


The diameter of a ball bearing is an important variable of interest that should be designed to
meet given specification limits.
• Measuring the mileage of cars
Measuring the mileage of cars per liter of gasoline is an example of collecting variable data

Variables sampling plans use the actual measurements of sample products for decision making, rather than classifying products as conforming or nonconforming as attributes sampling plans do. Variables sampling plans are more complex to administer than attributes plans and thus require more skill.
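A minimal sketch of one common variables acceptance rule (the Form 1 / k-method for a single upper limit, assuming normally distributed data) is shown below. The sample data and the acceptability constant k are hypothetical stand-ins for values that would come from the plan tables:

import numpy as np

def k_method_accept(data, usl, k):
    # Accept the lot if the standardized distance from the sample
    # mean to the specification limit is at least k.
    x = np.asarray(data, dtype=float)
    z = (usl - x.mean()) / x.std(ddof=1)
    return z >= k, z

rng = np.random.default_rng(11)
sample = rng.normal(74.5, 1.2, size=10)                # hypothetical measurements
accept, z = k_method_accept(sample, usl=78.0, k=1.72)  # k assumed, from plan tables
print(f"z = {z:.2f} -> {'accept' if accept else 'reject'}")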

MIL-STD-105

MIL-STD-105 was a United States defense standard that provided procedures and tables for sampling by attributes, based on the sampling inspection theories and mathematical formulas of Walter A. Shewhart, Harry Romig, and Harold F. Dodge. It was widely adopted outside of military procurement applications.

The last revision was MIL-STD-105E; it has been carried over in ASTM E2234.

Sampling plans are typically set up with reference to an acceptable quality level, or AQL. The
AQL is the base line requirement for the quality of the producer's product. The producer would
like to design a sampling plan such that the OC curve yields a high probability of acceptance at
the AQL. On the other side of the OC curve, the consumer wishes to be protected from accepting
poor quality from the producer. So the consumer establishes a criterion, the lot tolerance percent
defective or LTPD. Here the idea is to only accept poor quality product with a very low probability.
Mil. Std. plans have been used for over 50 years to achieve these goals.

The U.S. Department of Defense Military Standard 105E

Standard military sampling procedures for inspection by attributes were developed during World
War II. Army Ordnance tables and procedures were generated in the early 1940's and these grew
into the Army Service Forces tables. At the end of the war, the Navy also worked on a set of tables.
Meanwhile, the Statistical Research Group at Columbia University performed research and produced many outstanding results on attribute sampling plans.

These three streams combined in 1950 into a standard called Mil. Std. 105A. It has since been
modified from time to time and issued as 105B, 105C and 105D. Mil. Std. 105D was issued by the
U.S. government in 1963. It was adopted in 1971 by the American National Standards Institute as
ANSI Standard Z1.4 and in 1974 it was adopted (with minor changes) by the International
Organization for Standardization as ISO Std. 2859. The latest revision is Mil. Std 105E and was
issued in 1989.

These three similar standards are continuously being updated and revised, but the basic tables
remain the same. Thus the discussion that follows of the germane aspects of Mil. Std. 105E also
applies to the other two standards

Mil. Std. 105E offers three types of sampling plans: single, double and multiple plans.
The steps in the use of the standard can be summarized as follows:

1. Decide on the AQL.


2. Decide on the inspection level.
3. Determine the lot size.
4. Enter the table to find sample size code letter.
5. Decide on type of sampling to be used.
6. Enter proper table to find the plan to be used.
7. Begin with normal inspection, follow the switching rules and the rule for stopping the
inspection (if needed).
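The lookup in steps 3–6 can be illustrated with a deliberately tiny fragment of the tables. The values below (general inspection level II code letters and single-sampling normal-inspection plans at AQL 1.0%) are transcribed from commonly published excerpts of MIL-STD-105E / ANSI Z1.4 and should be verified against the standard before real use:

CODE_LETTER = {(281, 500): "H", (501, 1200): "J", (1201, 3200): "K"}  # level II only
SINGLE_NORMAL = {          # (code letter, AQL %) -> (sample size, Ac, Re)
    ("H", 1.0): (50, 1, 2),
    ("J", 1.0): (80, 2, 3),
    ("K", 1.0): (125, 3, 4),
}

def plan(lot_size, aql):
    # Steps 3-6: lot size -> code letter -> (n, Ac, Re) for the chosen AQL
    for (lo, hi), letter in CODE_LETTER.items():
        if lo <= lot_size <= hi:
            n, ac, re = SINGLE_NORMAL[(letter, aql)]
            return letter, n, ac, re
    raise ValueError("lot size outside this toy table")

letter, n, ac, re = plan(lot_size=1000, aql=1.0)
print(f"code letter {letter}: inspect n = {n}, accept if d <= {ac}, reject if d >= {re}")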

MIL-STD-414

MIL-STD-414 provides sampling plans for variables, for both known and unknown variability, and covers all three types of inspection (normal, tightened and reduced). A plan is selected from the Military Standard 414 tables (similar to the ANSI/ASQ Z1.9, BS 6002 and ISO 3951 tables) for a given lot size and AQL, and the standard also supports estimating the percent defective in a lot from the known or estimated variability. MIL-STD-414 contains sampling plans for a set of pre-determined AQLs ranging from 0.04% to 15.0%; if the AQL in use does not match one of the standard pre-determined AQLs, the nearest standard AQL should be determined before designing a sampling plan.
MIL-STD-414, Military Standard: Sampling Procedures and Tables for Inspection by Variables for Percent Defective (11 June 1957). This standard establishes sampling plans and procedures for inspection by variables for use in Government procurement, supply and storage, and maintenance inspection operations. When applicable, this standard shall be referenced in the specification, contract, or inspection instructions, and the provisions set forth herein shall govern.

IS 2500 STANDARD

• IS 2500-1 (2000): This standard includes fractional acceptance number plans, reduced plans, and multiple sampling plans. It also recommends using the standard in conjunction with ISO 2859-0, which includes illustrative examples.
• IS 2500-2 (1965): This standard discusses the types of inspection and the risks of rejecting lots.
• IS 2500-3 (1995): This standard indexes plans by the lot size and limiting quality (LQ). It also provides the sample size (n) and acceptance number (Ac) in Table A.
A variables sampling plan uses actual measurements of sample products to make decisions, rather
than classifying products as conforming or nonconforming. This type of plan is more complex than
an attributes plan, but it can provide equal protection with a smaller sample size.

2 Marks
1. Define Acceptance Sampling fundamentals?

Acceptance sampling is a quality control technique that uses statistical sampling to determine whether to accept or reject a batch of products.
2. Define OC?

The operating characteristic (OC) curve is a graphical illustration that indicates the probability of acceptance of the production batch versus the percentage of defective units: the y-axis plots the probability of acceptance, while the x-axis represents the percentage of defective units.
3. Define Sequential Sampling Plan?

A sequential sampling plan is a sampling technique in which items are sampled one at a time, and after each sample a decision is made to accept, reject, or continue sampling.
4. Define Multiple Sampling Plan?

A multiple-sampling plan extends the double-sampling concept to allow for more than two samples
before deciding on lot acceptance or rejection. It provides additional discrimination for borderline
cases.

5. What are the limitations of OC curves?

Constructing OC curves requires knowledge of the underlying distribution (binomial, hypergeometric, Poisson) and associated parameters.

The OC curve is specific to the sampling plan used; a new curve must be generated for different sampling plans.

For very small sample sizes, the discrete nature of the OC curve may be difficult to interpret visually.

6. What are the advantages of Acceptance Sampling?

1. Less expensive, because less inspection is needed compared to inspecting the entire lot.
2. Less handling of the product and hence reduced damage.
3. Applicable to destructive testing.
4. Fewer personnel are involved in inspection activities.
5. Reduces the amount of inspection error.
6. The rejection of a lot motivates the vendor to improve quality.

7. What are the disadvantages of Acceptance Sampling?

1. Risk of accepting bad lots and rejecting good lots.
2. Less information about the product.
3. Requires planning and formulation of a sampling plan, compared with 100% inspection.

8. Define Acceptable quality level (AQL)?

The management sets an AQL, which is the acceptable proportion of a lot that can be defective. For example, an AQL could be 2 defects in a lot of 200, or 1%.

9. Define Average total inspection (ATI)?

The ATI is the average number of units that will be inspected for a given incoming quality level and probability of acceptance.

10. Define Probability?

Probability is a key factor in acceptance sampling, but it is not the only factor. For example, if a company tests 10 units out of a million and finds one defect, it might assume that 100,000 of the million are defective, but this might be inaccurate.
