Assignment 2
Task 1 – critically evaluate the QM tools and techniques that are available and how these may improve systems and processes and support business objectives.
1250 words total; about 178 words per tool
Topic sentence
Supporting evidence
Critical evaluation
Use a summary table for advantages and disadvantages
Example Flowchart
Transformation process risk management to sustainable corporate performance and quality
management: developing flowcharts for approved training organization
ISSN: 0002-2667
The IFLOW methodology showed the power of "visual" techniques for process improvement. The different types of shape-icons for the activities and line-icons for connectivity helped us to capture the current (as-is) process very easily. It also allowed us to communicate the information through simple iconic means rather than through plain words. By depicting the activities as suitable icons, team members were able to visualize the entire process easily without much need for explanations and tutorials. The charts were easy to follow and quite self-explanatory. They also provided a medium of discussion for process improvements in many areas of our productivity concerns. The implementation of the change process as a result of those improvements met very little or no resistance from our employees. The members were also forthcoming in suggesting new improvements in their own areas and compromising on many barriers that became vividly clear when the entire process was flow-charted.
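To experiment with this kind of as-is flow-charting in code, a chart can be sketched with the graphviz Python package. This is a minimal sketch only, assuming Graphviz is installed; the training-approval steps below are hypothetical placeholders, not the article's actual IFLOW icon set.

```python
# Hypothetical "as-is" process flowchart sketched with the graphviz
# package (pip install graphviz; the Graphviz binaries must be on PATH).
from graphviz import Digraph

flow = Digraph("as_is_process", format="png")
flow.node("start", "Receive application", shape="oval")
flow.node("review", "Review documents", shape="box")
flow.node("ok", "Documents complete?", shape="diamond")
flow.node("train", "Schedule training", shape="box")
flow.node("end", "Approve organization", shape="oval")

flow.edge("start", "review")
flow.edge("review", "ok")
flow.edge("ok", "train", label="yes")
flow.edge("ok", "review", label="no")  # the rework loop becomes visible
flow.edge("train", "end")

flow.render("as_is_process", cleanup=True)  # writes as_is_process.png
```

Even this tiny sketch shows the point made above: the rework loop is immediately visible once the process is drawn rather than described in words.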
PMCID: PMC6021048
PMID: 29949595
Use and limitations of checklists. Other strategies for audits and inspections
September 2001
DOI: 10.1002/qaj.136
Checklists are tools commonly used by quality assurance (QA) unit professionals in inspections and audits
as they perform their customary tasks of verifying and checking compliance with various applicable
standards. This article analyses the advantages and disadvantages of checklist use.
Excessive dependence on these auxiliary-checking systems can cause certain problems, and in some cases
their real usefulness is called into question. Alternative or complementary strategies are proposed in order
to optimise the rational use of checklists.
https://ec.europa.eu/eurostat/documents/64157/4374310/45-Checklist-quality-evaluation-administrative-data-sources-2009.pdf/24ffb3dd-5509-4f7e-9683-4477be82ee60
2.2.2 Checklist For the evaluation of the Source and Metadata hyperdimension, the authors have
developed a checklist (Daas et al., 2008b; Daas et al., 2009). The checklist guides the user through
the quality indicators that need to be evaluated for both Source and Metadata. For the Data
hyperdimension a checklist cannot be used because of the large number of calculations that need to be performed. The best approach for this hyperdimension is the topic of current research. Because
of this, the quality aspects in the Data hyperdimension were not determined for the data sources
described in this paper. The Source and Metadata checklist can be used for a data source that is
already available (and used) by the NSI and for the evaluation of a new data source that could
potentially be used for statistics. The checklist is included in the Annex. The checklist guides the user
through the measurement methods for each of the quality indicators shown in tables 1 and 2. By
answering the questions in the checklist, the ‘value’ of every measurement method in tables 1 and 2
is determined. Since most of the methods in the Source and Metadata hyperdimension are qualitative, a score usually has to be filled in. When problems are found or a
question cannot be answered completely, the user is guided in the steps to take. Apart from this,
additional space is included to write down remarks. Evaluation of the Metadata-part requires that
the user has a particular use in mind (Daas et al., 2008b).
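As a rough illustration of the scoring mechanism described above, a minimal sketch of a checklist scorer follows; the indicator names, scores, and threshold are hypothetical and are not taken from Daas et al.

```python
# Hypothetical sketch of a Source/Metadata checklist scorer. Indicator
# names, scores, and the threshold are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    indicator: str      # quality indicator being evaluated
    score: int          # qualitative score, e.g. 1 (poor) to 5 (good)
    remarks: str = ""   # additional space for remarks, as in the checklist

def needs_follow_up(items: list[ChecklistItem], threshold: int = 3) -> list[str]:
    """Return indicators scoring below the threshold, i.e. where the
    user would be guided through further steps to take."""
    return [it.indicator for it in items if it.score < threshold]

checklist = [
    ChecklistItem("Supplier reliability", 4),
    ChecklistItem("Metadata completeness", 2, "Definitions undocumented"),
    ChecklistItem("Delivery punctuality", 5),
]
print("Needs follow-up:", needs_follow_up(checklist))
```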
https://www.foodfocus.co.za/assets/images/Whitepapers/Whitepaper-Digital-Checklists-testo-Saveris-Retail-Chain-EN-ZA.pdf
1. Lack of employee support
When a food safety process prescribes cleaning a surface or checking the temperature of a deep-freezer, for example, paper-based checklists contain only the steps to be carried out and the time at which they must be finished. However, they provide no guidance on which special factors need to be observed during the checks, and which corrective actions are prescribed for deviations. Deviations or problems can occur, such as the violation of upper or lower temperature limit values. These then need to be documented on paper, and action is often only taken days or weeks later. In addition, the employees can only make handwritten comments, which can lead to misinterpretation, depending on the handwriting. Paper-based checklists also provide no possibility of visual documentation, e.g. in the form of photos, which would in many cases considerably simplify the documentation.
2. Manipulation and pencil whipping
If there is only low interest in the completed checklists, and they are only looked at when an audit is scheduled, this can lead to the staff questioning the purpose of the documentation. As a result, the lists are filled in incompletely and without enthusiasm. The appreciation of the relevance of compliance-relevant tasks and their completion on time is increasingly lost. It is not uncommon that the checklists for the entire weekend are already filled in on Friday, or that missing entries are completed later when an audit has been announced. Employees feel they are documenting everything simply for the audit and not for the safety of the consumer.
3. Time-consuming analysis
It is not just finding and filling in paper checklists that takes a lot of time and effort. The analysis of completed checklists is also time-consuming and requires valuable management resources. Analyzing possible weaknesses in a restaurant or store and gaining an overview of the overall status of the food safety system is very complex using paper-based documentation. The same applies to the comparison of individual sites with each other (benchmarking).
Application Of Fishbone Diagram To Determine The Risk Of An Event With Multiple Causes
Article in Knowledge Management Research & Practice · January 2010
The fishbone diagram is a method used to determine the global risk of an event with multiple relevant causes, and it is relatively easy to apply. The application realized here allows determining the risk of secondary and main causes, of cause categories, and of the global risk, and allows structuring treatment measures around vulnerability areas, precisely oriented towards the causes that produce high risk values. The analysis of the cause sequence can be a simple analysis of the multitude of causes and their order, but it can be supplemented with other representation and hierarchy elements for risk treatment. The method can also be used to simulate the dynamics of the analyzed process. There are no instruments for risk analysis based exclusively on the fishbone diagram, but there are instruments which include elements of primary or complementary analysis of this type. The condition for successfully applying the method proposed here is a correct evaluation of the probabilities, weights, and impact of the causes. As a result, the method is recommended especially for initial or comparative analyses. Applying the method in relatively simple cases is an excellent opportunity to understand the essence of risk analysis, its links with establishing risk treatment measures, and the dynamic evolution of risk values depending on the application of these measures.
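The abstract does not give the exact formulas, but a minimal sketch of the kind of weighted aggregation it describes (combining probability, weight, and impact per cause into category risks and a global risk) might look like the following; the scoring model and all numbers are illustrative assumptions, not the authors' model.

```python
# Illustrative sketch: aggregate cause-level risks (probability x
# weight x impact) into category risks and a global risk score.
# The model and numbers are assumptions, not the paper's own.
causes = {
    "Manpower": [(0.3, 0.4, 7), (0.2, 0.6, 4)],  # (probability, weight, impact)
    "Machines": [(0.1, 1.0, 9)],
    "Methods":  [(0.5, 0.5, 3), (0.4, 0.5, 5)],
}

category_risk = {
    cat: sum(p * w * i for p, w, i in items)
    for cat, items in causes.items()
}
global_risk = sum(category_risk.values())

# Sorting categories by risk shows where treatment measures
# should be oriented first (the most vulnerable areas).
for cat, r in sorted(category_risk.items(), key=lambda kv: -kv[1]):
    print(f"{cat:10s} risk = {r:.2f}")
print(f"Global risk = {global_risk:.2f}")
```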
https://expertprogrammanagement.com/2020/03/fishbone-diagram/
Advantages and Disadvantages
Advantages
Being a visual tool, it is easy to understand and shows clearly the relationship
between cause and effect.
It helps you find the most likely cause of the problem.
You can apply the diagram to a range of problems.
By involving the whole team in creating the Ishikawa diagram, everyone on the team better understands the issues surrounding the problem.
Disadvantages
The fishbone diagram itself doesn’t help you to prioritize causes. All causes are
equally weighted in the diagram and you have to prioritize them yourself.
Brainstorming is often reliant on opinion rather than fact. The output from the
diagram is only as good as your brainstorming session.
The diagram is less useful for complex causal relationships and can become
unwieldy to use.
You may waste time discussing causes that have very little impact on the problem.
Summary
The Fishbone diagram (Ishikawa diagram) was developed in the 1960s and
provides a visual way to find the causes of a problem. It can be used in any
industry and for many purposes, including identifying manufacturing defects and
process improvement. The process of creating an Ishikawa diagram can also help
create a shared understanding of the problem amongst your team members.
4. Pareto Chart
A Pareto chart is a bar graph of data ordered from the largest number of occurrences to the smallest.
In this example, we are looking at the number of product defects in
each of the listed categories.
When you look at the number of defects from the largest to the smallest occurrences, it is easy to see how to prioritize improvement efforts.
The most significant problems stand out and can be targeted first.
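A minimal sketch of building such a Pareto chart with matplotlib follows; the defect categories and counts are invented for illustration.

```python
# Pareto chart sketch: bars sorted largest-to-smallest plus a
# cumulative-percentage line. Categories and counts are made up.
import matplotlib.pyplot as plt

defects = {"Scratches": 42, "Misalignment": 27, "Cracks": 13,
           "Discoloration": 8, "Other": 5}
items = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
labels, counts = zip(*items)
total = sum(counts)
cumulative = [100 * sum(counts[:i + 1]) / total for i in range(len(counts))]

fig, ax1 = plt.subplots()
ax1.bar(labels, counts)
ax1.set_ylabel("Defect count")
ax2 = ax1.twinx()                               # second axis for cumulative %
ax2.plot(labels, cumulative, marker="o", color="tab:red")
ax2.set_ylabel("Cumulative %")
ax2.axhline(80, linestyle="--", color="gray")   # 80% reference line
plt.title("Pareto chart of product defects (illustrative data)")
plt.tight_layout()
plt.show()
```

Reading the cumulative line against the 80% reference shows at a glance which two or three categories account for most of the defects.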
Advantages:
One of the most important advantages of Pareto analysis is that it helps to identify and determine the main causes, i.e. root causes, of defects or problems.
Defects are first ranked in order of their severity, i.e. in descending order. Defects are organized from highest to lowest priority with the help of a Pareto chart, and the highest-priority defects are eliminated or resolved first.
With the help of a Pareto chart, one can also determine the cumulative impact of a defect, i.e. the effect caused by a defect occurring over a long period of time.
A Pareto chart also provides a better explanation of which defects need to be resolved first, and helps one decide and plan the correct measures or actions that need to be taken regarding a defect.
Preparing a Pareto chart also develops individual skills such as problem-solving and decision-making, because it requires organizing defect-related problems into cohesive facts.
It also helps in solving issues regarding problem-solving, decision-making, time management, change management, etc.
One can plan, analyze, and resolve problems or defects with the help of a Pareto chart.
Disadvantages:
One of the main disadvantages is that root cause analysis cannot be done by Pareto analysis itself; a separate root cause analysis tool is required to determine or identify the root or major causes of a defect.
It does not represent the severity of a defect or problem; it only shows frequency data.
Pareto analysis focuses only on past data, where the damage has already happened. Past data is essential, but it will not necessarily be relevant in future scenarios; analysis should therefore consider present and future data as well.
Pareto analysis cannot be applied to all cases; it is limited to certain cases only.
There are other disadvantages as well, such as mistakes in scoring problems, mistakes in application, etc.
https://www.researchgate.net/publication/272353122_Pareto_Analysis
Pareto analysis is based on the observation that operational results and economic wealth are not distributed
evenly and that some inputs contribute more than others. It is referred to as the 80/20 rule, a nomenclature
which has popularized a complex economic concept introduced by Vilfredo Pareto, a nineteenth-century
Italian economist. The underlying concept is that the majority of problems (roughly 80%) are often caused
by a small number of sources (roughly 20%). The implication of the 80/20 rule is that most efforts are
not efficient and should be reduced. The strategic objective would be to leverage and maximize the efforts
that produce most of the results. In strategic management, Pareto analysis is linked to the analysis of an
organization's internal environment. It is particularly useful to identify internal strengths and weaknesses
through the evaluation of an organization's internal resources and capabilities, which are the source of its
core competencies and which in turn, create competitive advantage.
file:///C:/Users/rooke/Downloads/APPLICATIONOFTHEPARETOANALYSISINPROJECTMANAGEMENT.pdf
Projects are specific endeavors which consist of a large number of activities, stakeholders, and team members, and at the same time have limited time and resources. This confrontation can lead to a large number of problems that project managers have to solve in order to implement a project successfully. It is therefore necessary that project managers identify the most critical activities or issues that might endanger the planned implementation of the project. Pareto analysis is one of the tools that can be used to extract the critical activities or issues that arise in projects, so that project managers can pay attention to them. In this way, the project manager is relieved of analyzing items that do not have a big impact on the project, and resources can focus on the essential items that have a major impact on the success of the project.
5. Control Charts
Control charts or run charts are used to plot data points over time
and give a picture of the movement of that data.
These charts demonstrate when data is consistent or when there
are high or low outliers in the occurrences of data.
They focus on monitoring performance over time by looking at the variation in data points, and they distinguish between common cause and special cause variation. The Dow Jones Industrial Average is a good example of a control chart.
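A minimal sketch of computing control limits and flagging a special-cause point follows, assuming an individuals-style chart with sigma estimated from the average moving range; the measurements are invented.

```python
# Individuals control chart sketch: center line at the mean, control
# limits at +/- 3 sigma, sigma estimated from the average moving range
# (d2 = 1.128 for subgroups of size 2). Data are made up.
import statistics

measurements = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 12.5, 10.0, 9.7, 10.1]
center = statistics.mean(measurements)
moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
sigma_hat = statistics.mean(moving_ranges) / 1.128
ucl = center + 3 * sigma_hat   # upper control limit
lcl = center - 3 * sigma_hat   # lower control limit

print(f"CL={center:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
for i, x in enumerate(measurements, start=1):
    flag = "  <-- special-cause signal" if not (lcl <= x <= ucl) else ""
    print(f"sample {i:2d}: {x:5.2f}{flag}")
```

Here the seventh point falls outside the upper limit and is flagged as special-cause variation; the remaining scatter is common-cause noise.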
Disadvantages of Control Chart in Printing Quality Control over Solder Paste and Strategies for
Improvement
This paper takes the common x̄-R control chart as an example and discusses the advantages in the process of application. When the boundary of the control chart is determined only from the sampling data, without considering prior conditions or other existing information, and concerns only the information of the total and the samples, it is often found that the conclusion may conflict with other judgment indexes. To solve these problems, Bayes theory can be used to introduce a proper "correction coefficient" to correct the control parameters, which can improve the accuracy of the control limits and make the conclusions more reasonable, reliable, and believable.
https://cyberleninka.org/article/n/1079120/viewer
Benefits of using a control chart
Understand the variations that are always present in processes.
See when something is going wrong or may go wrong.
Notice patterns within plotted points.
Predict future performance.
Generate new ideas for improving quality based on your analysis.
A disadvantage of control charts for variables and attributes is that they only use data from the most recent measurement to draw conclusions about the process. This makes them quite insensitive to shifts on the order of 1.5 standard deviations or less.
Control charts are important tools of statistical quality control used to enhance quality. Quality improvement methods have been applied over the last few decades to fulfill the needs of consumers. The product has to retain the desired properties with the least possible defects, while maximizing profit. There are natural variations in production, but there are also assignable causes which are not part of chance. Control charts are used to monitor production; in particular, their application may serve as an "early warning" index regarding potential "out-of-control" processes. In order to keep production under control, different control charts, prepared for dissimilar cases and incorporating upper and lower control limits, are established. A number of control charts are in use, grouped mainly into control charts for variables and control charts for attributes. Points plotted on the charts may reveal certain patterns, which in turn allow the user to obtain specific information. Patterns showing deviations from normal behavior can stem from raw material, machine settings, measuring methods, and human and environmental factors inadvertently affecting the quality of the product. The information obtained from control charts assists the user in taking corrective actions, hence opting for specified nominal values and enhancing quality.
6. Histograms
Histograms are bar-chart pictures of data that show patterns falling within typical process conditions.
Changes in a process should trigger a new collection of data.
A minimum of 50-75 data points should be gathered to ensure the patterns detected are reliable.
The patterns that are detected help in understanding variation.
In this example, the histogram shows that the receptionist received the most phone calls about contribution statements for that period.
Example Histogram
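A minimal sketch of producing a basic histogram with matplotlib follows; the daily call counts are invented, with 75 data points to match the guidance above.

```python
# Histogram sketch with invented data: 75 observations of daily call
# volume, binned to reveal the pattern of variation.
import random
import matplotlib.pyplot as plt

random.seed(1)
daily_calls = [random.gauss(mu=40, sigma=8) for _ in range(75)]

plt.hist(daily_calls, bins=10, edgecolor="black")
plt.xlabel("Calls received per day")
plt.ylabel("Frequency (number of days)")
plt.title("Histogram of daily call volume (illustrative data)")
plt.show()
```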
https://histogramsdennard.weebly.com/pros-and-cons-of-histograms.html
Although histograms are considered to be some of the most commonly used graphs to display data,
the histogram has many pros and cons hidden within its formulaic set up. Histograms allow viewers
to easily compare data, and in addition, they work well with large ranges of information. They also provide a more concrete form of consistency, as the intervals are always equal, a factor that allows easy data transfer from frequency tables to histograms. Although useful in many different
cases, histograms are especially useful when dealing with large value ranges. For example, if a
sample of college students included over 2000 students, and we are collecting data concerning how
many times someone has driven their car outside of campus in the past year, the range could vary
from a person who has yet to find a reason to drive their car outside of campus to an individual who
feels the need to visit home every week. The range of data values would be very large, so in this case,
it would be convenient to use a histogram.
Although there are many instances where using a histogram would be considered convenient, there
are also many instances where using or interpreting a histogram could be troublesome as well. For
example, when interpreting a histogram, it is extremely difficult and practically impossible to extract
the exact amount of "input" in the histogram unless it is a frequency histogram. For example, if you
are given a histogram and asked how many people gave their data in a survey, it would be extremely
difficult to pinpoint an exact number. Histograms are often considered inconvenient when comparing
multiple categories, because even though you can compare several histograms side by side, it doesn't
quite create the desired effect.
https://www.researchgate.net/publication/319572960_Histograms_Theory_and_Practice
The histogram is one of the most important graphical objects in statistical practice. In addition, the
histogram provides a consistent estimate of any density function with very few assumptions.
Construction of a density histogram with arbitrary mesh is described. Asymptotic theory of optimal
histograms is used to provide practical rules for choosing a bin width with real data. Cross-validation
is shown to provide useful estimates of both the bin width and the bin origin. Examples are displayed
using data from Sammy Sosa’s best year for hitting home runs. The histogram in pictorial form
provides the most common graphical summary of a random sample, as well as an estimate of the
underlying probability density function. The data points are tabulated into a list of disjoint bins. The
invention of the histogram in the seventeenth century marked the transition to modern statistics.
Before, statistical data came in the form of lists of individual cases, or summary by category. Such
lists provide more of an inventory than any deep understanding. Thus an analysis of the causes of
death in the seventeenth century listed in England’s bills of mortality collected during the plague
years might be summarized by indicating the fraction dying from accidents, murder, being run over
by carts, or diseases such as smallpox, thrush, plague, etc. and might be depicted in a pie chart. In
1662, John Graunt (1662) analyzed the age of death from the bills of mortality and made a table,
which indicated a 36% chance of death in the first six years of life, a 24% chance in the next ten
years, 15% in the next ten, then 9%, 6%, and so on. This table of tallies is the essence of a modern histogram and marked the beginning of
survival analysis by mortality tables. A single (continuous) quantity, age in this case, is analyzed to
determine the likelihood of all possible outcomes. In modern terminology, the quantity is called a
random variable, X, and the likelihood of possible outcomes is summarized by the cumulative
distribution function (cdf), $F(x) = \Pr(X \le x)$. If the random variable is assumed to be continuous and its cdf differentiable, then the probability density function (pdf), $f(x) = F'(x)$, provides an equivalent summary. The histogram, properly scaled, may be thought of as an estimator of the pdf; see Scott (1979).
The proper construction of a histogram requires some notation. Let the $k$th bin interval, $[t_k, t_{k+1})$, be denoted by $B_k$; of course $t_k < t_{k+1}$ for all $k$. For convenience, we refer to $t_0$ as the bin origin; in many cases, $t_0 = 0$. Let the width of the $k$th bin be denoted by $h_k = t_{k+1} - t_k$. Given a random sample of size $n$, $\{x_1, x_2, \ldots, x_n\}$, the bin count in the $k$th bin may be computed as
$$\nu_k = \sum_{i=1}^{n} I(x_i \in B_k), \qquad k = \ldots, -2, -1, 0, 1, 2, \ldots$$
Note that $\sum_k \nu_k = n$. Then the histogram in the $k$th bin is defined as
$$\hat{f}(x) = \frac{\nu_k}{n h_k}, \qquad x \in B_k, \ \forall k.$$
With this definition of a histogram, it is straightforward to check that $\hat{f}(x) \ge 0$ and that $\int \hat{f}(x)\,dx = 1$, so that $\hat{f}(x)$ is a proper density function. Often for graphical purposes, the bin counts $\{\nu_k\}$ themselves are plotted. However, the bin counts are proportional to $\hat{f}(x)$ only when all of the bin widths are equal, say $h_k = h$ for all bins $B_k$. Most elementary textbooks recommend choosing bins of equal width, but failure to compute the proper histogram will lead to severe distortion of the density distribution. Freedman et al. (1978) give a clear discussion and description of the misperceptions that may result. The same caution holds if the bin frequencies, $\{\nu_k/n\}$, are plotted. For example, John Graunt's mortality figures were computed over an unequal bin mesh, which served to understate mortality in young children.
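Scott's definition $\hat{f}(x) = \nu_k / (n h_k)$ can be computed directly; the following is a minimal sketch using an intentionally unequal bin mesh, with illustrative data.

```python
# Density histogram over an unequal bin mesh, implementing
# f_hat = nu_k / (n * h_k). Data are illustrative (standard normal).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=500)

edges = np.array([-4.0, -2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0, 4.0])  # unequal h_k
counts, _ = np.histogram(x, bins=edges)        # nu_k
widths = np.diff(edges)                        # h_k
f_hat = counts / (x.size * widths)             # nu_k / (n * h_k)

# Integrates to 1 when every sample falls inside the mesh.
print("integral of f_hat:", np.sum(f_hat * widths))
# Plotting the raw counts over this mesh would distort the density,
# exactly the caution raised about Graunt's unequal bins.
```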
7. Scatter Diagrams
Scatter diagrams are graphs that show the relationship between variables. Variables often represent possible causes and effects.
As an example, a scatter diagram might show the relationship between volunteer satisfaction scores and attendance at volunteer orientation training.
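A minimal sketch of drawing such a scatter diagram and quantifying the relationship with a Pearson correlation coefficient follows; the training/satisfaction pairs are invented.

```python
# Scatter diagram sketch plus Pearson correlation. The paired data
# (hours of orientation training vs. satisfaction score) are invented.
import numpy as np
import matplotlib.pyplot as plt

training_hours = np.array([1, 2, 2, 3, 4, 4, 5, 6, 7, 8])
satisfaction = np.array([3.1, 3.4, 3.0, 3.8, 4.0, 4.3, 4.2, 4.6, 4.5, 4.9])

r = np.corrcoef(training_hours, satisfaction)[0, 1]
plt.scatter(training_hours, satisfaction)
plt.xlabel("Orientation training (hours)")
plt.ylabel("Volunteer satisfaction score")
plt.title(f"Scatter diagram (Pearson r = {r:.2f}, illustrative data)")
plt.show()
```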
https://www.researchgate.net/publication/7923211_The_Early_Origins_and_Development_of_the_Scatterplot
Of all the graphic forms used today, the scatterplot is arguably the most versatile, polymorphic, and
generally useful invention in the history of statistical graphics. Its use by Galton led to the discovery of
correlation and regression, and ultimately to much of present multivariate statistics. So, it is perhaps
surprising that there is no one widely credited with the invention of this idea. Even more surprising is that
there are few contenders for this title, and this question seems not to have been raised before. This article
traces some of the developments in the history of this graphical method, the origin of the term scatterplot,
the role it has played in the history of science, and some of its modern descendants. We suggest that the
origin of this method can be traced to its unique advantage: the possibility to discover regularity in
empirical data by smoothing and other graphic annotations to enhance visual perception.
Explained briefly, the Plan-Do-Check-Act cycle is a model for carrying out change. It is an
essential part of the lean manufacturing philosophy and a key prerequisite for continuous
improvement of people and processes.
First proposed by Walter Shewhart and later developed by William Deming, the PDCA cycle became a widespread framework for constant improvements in manufacturing, management, and other areas.
PDCA is a simple four-stage method that enables teams to avoid recurring mistakes and improve
processes.
Now that we've explained PDCA's meaning, let's get deeper into the topic and learn more about the cycle.
The PDCA cycle is an iterative process for continually improving products, people, and services. It
became an integral part of what is known today as Lean management. The Plan-Do-Check-Act
model includes solutions testing, analyzing results, and improving the process.
For example, imagine that you have plenty of customer complaints about the slow response rate
of your support team. Then you will probably need to improve the way your team works to keep
customers satisfied. That is the point where PDCA comes into play.
Let’s take a closer look at the four stages of the PDCA process.
PLAN
At this stage, you will literally plan what needs to be done. Depending on the project's size,
planning can take a major part of your team’s efforts. It will usually consist of smaller steps so
that you can build a proper plan with fewer possibilities of failure.
Before you move to the next stage, you need to be sure that you have answered some basic concerns.
DO
After you have agreed on the plan, it is time to take action. At this stage, you will apply
everything that has been considered during the previous stage.
Be aware that unpredicted problems may occur at this phase. This is why, in a perfect situation,
you may first try to incorporate your plan on a small scale and in a controlled environment.
Standardization is something that will definitely help your team apply the plan smoothly. Make
sure that everybody knows their roles and responsibilities.
CHECK
This is probably the most important stage of the PDCA cycle. If you want to clarify your plan,
avoid recurring mistakes, and apply continuous improvement successfully, you need to pay
enough attention to the CHECK phase.
Here, you need to audit your plan’s execution and see if your initial plan actually worked.
Moreover, your team will be able to identify problematic parts of the current process and
eliminate them in the future. If something went wrong during the process, you need to analyze it
and find the root cause of the problems.
ACT
Finally, you arrive at the last stage of the Plan-Do-Check-Act cycle. Previously, you developed,
applied, and checked your plan. Now, you need to act.
If everything seems perfect and your team managed to achieve the original goals, then you can
proceed and apply your initial plan.
If the objectives are met, it may be appropriate to adopt the whole plan; your PDCA model will then become the new standard baseline. However, every time you repeat a standardized plan, remind your team to go through all the steps again and try to improve carefully.
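Purely as an illustrative analogy, not an official PDCA implementation, the iterative structure can be sketched as a loop; the helper functions below are hypothetical stand-ins for real team activities such as the support-team example above.

```python
# Illustrative-only sketch of the PDCA loop as code. All helpers are
# hypothetical placeholders for real planning/checking activities.
def pdca(plan, do, check, act, max_cycles=5):
    for cycle in range(1, max_cycles + 1):
        actions = plan()               # PLAN: decide what to change
        results = do(actions)         # DO: apply it, ideally small-scale
        ok, findings = check(results)  # CHECK: audit against the goal
        act(ok, findings)              # ACT: standardize or adjust
        if ok:                         # goals met -> new standard baseline
            print(f"Standardized after cycle {cycle}")
            return
    print("Goals not met; keep iterating")

state = {"response_min": 40}           # current average response time
pdca(
    plan=lambda: {"target": 15},
    do=lambda a: state.update(response_min=state["response_min"] - 12) or state,
    check=lambda r: (r["response_min"] <= 15, dict(r)),
    act=lambda ok, f: print("act:", "adopt plan" if ok else f"adjust: {f}"),
)
```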
The PDCA cycle is a simple but powerful framework for fixing issues on any level of your
organization. It can be part of a bigger planning process, such as Hoshin Kanri.
The repetitive approach helps your team find and test solutions and improve them through a
waste-reducing cycle.
The PDCA process includes a mandatory commitment to continuous improvement, and it can
have a positive impact on productivity and efficiency.
Finally, keep in mind that the PDCA model requires a certain amount of time, and it may not be
appropriate for solving urgent issues.
In Summary
The Plan-Do-Check-Act cycle is a useful tool that can help your team solve problems much more efficiently. PDCA has some significant advantages.
DOI: 10.30656/jsmi.v4i1.2186
The development of industry increasingly makes businesses try to be competitive by increasing technical knowledge and improving systems both internally and externally. PDCA is a quality management method that is used as a continuous improvement tool, widely used in the service and manufacturing sectors. PDCA activities consist of four steps, namely Plan, Do, Check, and Act, with repeated stages forming a circle. The PDCA cycle begins with small steps to check possible effects on systems, but then eventually progresses to larger and more specific improvements. The results of implementing the PDCA method can be used to solve both qualitative and quantitative data problems; the method has been widely applied in the service and manufacturing sectors for continuous improvement, as a work pattern for improving a process or system in an organization, and for increasing productivity. The method used in this research is a literature review of research papers published in 2015-2020 that have consistently applied the PDCA cycle. The result of this literature review is to describe the relationship between, or gaps in, theoretical and practical thinking about the application of the PDCA method and its successful implementation in the service and manufacturing sectors, as a contribution to further research.
January 2017
Pratik M Patel
Vivek A Deshpande
The PDCA cycle is a systematic series of steps for gaining valuable learning and knowledge for the continual improvement of a product or process. PDCA is an iterative four-step quality and productivity improvement process typically used for the betterment of business strategy. PDCA is a successive cycle which starts off small to test potential effects on processes, but then gradually leads to larger and more targeted change. Though the method is generally applied by industry to processes, businesses, and organizations, this is an attempt to adopt it at an individual level to bring productivity improvement in individuals, which will trigger an improvement in process and quality for the organization at a bigger level. The paper considers how this method can help an individual become more accountable, which will ultimately enable a group, a product line, and an organization to make a difference in improving overall quality. The method tries to change the traditional ways in which an individual performs an activity, and with a few improvements, overall productivity can be increased, which will ultimately benefit the organization.
https://www.ionos.com/startupguide/productivity/pdca-cycle/
PDCA is a wonderful tool for introducing improvements in a sustainable and thoughtful way. Instead of changing the usual procedures in one spontaneous, drastic move, one proceeds in small steps and always under close observation. However, this is also one of the big disadvantages of the Deming cycle: you have to plan enough time for the model. PDCA does not allow for rapid problem solving.
Advantages
Firstly, PDSA is a simple and effective process for resolving new and recurring problems in industry. In addition, it is a repeatable approach that allows you to measure results without wasting time. Moreover, it is a low-risk cycle that can be applied on a small scale in a project; there is therefore no need to change the full process if it doesn't work.
Disadvantages
In contrast, the full PDSA process is slow, as all four steps must be applied before it yields a final result. It is not a straightforward execution process that can be applied to urgent problem-solving issues. Additionally, you need to commit some resources, including effective team members, to ensure that each step of the cycle is carried out properly.
The Deming cycle has evolved over time to become an effective process. I will discuss how the Deming cycle came to be and how it has evolved.
Dr. W. Edwards Deming
Deming was inspired by Walter Andrew Shewhart, an American engineer who was also a physicist and statistician. In 1939, Shewhart published the book Statistical Method from the Viewpoint of Quality Control. It was the first book to articulate a three-stage systematic process of specification, production, and inspection. These three stages were developed to test the hypotheses of experiments. Finally, he argued that these stages should not run in a straight line but should go in a cycle. Thus, the Shewhart Cycle took the form of a circle.
In 1950, Deming analyzed Shewhart's cycle and modified it. He introduced a new version of it: a four-step cycle of design, production, sales, and research. This cycle has become known as the Deming Circle or Deming Wheel.
PDCA
Some Japanese executives probably modified the Deming cycle or Deming wheel into Plan, Do, Check, Act (PDCA), although the exact reason for the change of name is unclear. This four-step cycle was introduced to identify problems in industry and solve them by applying the four steps together. The prime objective of this cycle is to standardize the solution if the results are satisfactory.
TQM With Continuous Improvement Tools
TQM Seeks to Improve Quality and Performance
By Martin Murray
One core component of TQM is the principle of continuous improvement. There are
a number of tools available to ensure that continuous improvement is successful,
which include:
Process mapping
Root cause analysis
The plan-do-check-act (PDCA) cycle
Process Mapping
For example, in the supply chain, a business process could be the manufacture of finished goods, the purchase of goods from a vendor, or the service of an item sold
to customers. The mapping of any of these processes involves identifying and
documenting the physical flow as well as the information flow.
The process mapping will show graphically the flows in the process from start to
finish, which will include activities, personnel, and the outcomes.
The benefit that the process map gives to continuous improvement is that it defines the scope of the process, the interfaces with other processes, and a starting point against which improvement can be measured.
Root Cause Analysis
Root cause analysis is the manner in which a business will determine the root cause
of a problem, incident, or quality concern. This is achieved by these three steps,
which drive toward the identification of the root cause:
Analysis
Data gathering
Validation
There are three phases that make up a root cause analysis process.
1. Open Phase
This initial phase allows participants to brainstorm the issue to identify as many possible root causes as they can. In this phase, the team can create a cause and effect diagram, which can be useful during brainstorming sessions.
As part of this process, the team can identify their possible causes with one of five
areas listed on the cause and effect diagram. Those cause categories are
manpower, methods, materials, machines, and measurements. The team can then
organize their ideas for the root cause around those categories.
2. Narrow Phase
In this phase, the team reduces the number of possible root causes to a number that
can be focused on. Each of the possible root causes identified in the open phase is
discussed by the team in more depth to determine if they should be kept.
3. Closed Phase
In this final stage, the team must come to a consensus on a root cause. This will
involve validating the root cause based on evidence, whether that is using
measurable data or subjective evidence from interviews with staff, customers or
vendors.
The analysis of measurable data can be performed using a number of statistical
methods such as a scatter plot diagram, check sheets to identify the frequency of an
event, or using a Pareto chart.
The PDCA cycle was developed by W. Edwards Deming, the architect of TQM. He created a simple approach for carrying out change. The PDCA cycle consists of four phases: plan, do, check, and act.
Plan – The planning stage usually begins after a company has been through a root
cause analysis and identified an issue or problem that needs to be addressed. The
business will then plan how the improvements need to be achieved.
Do – Following the planning stage, the business will then proceed with the changes
that have been planned in order to address the issue identified in the root cause
analysis. This may be as simple as purchasing a new piece of equipment for a production line, or as complex as changing the way in which vendors are evaluated.
Check – After the business has followed the plan and made changes to address the
issue or problem, the check phase should be used to validate that the changes made
have had the required results. If the check phase does not validate that the issue has
been resolved, then the business will need to start the PDCA cycle again at the
planning stage and develop a new plan to achieve the necessary results.
Act – The final phase of the PDCA cycle requires that once the issue has been
resolved, the business should incorporate the changes into their standard operating
procedures and if necessary, roll out the changes to other parts of the business.