Six Sigma - A Sashay From Evolution To Revolution
ABSTRACT
THEORY
SIX SIGMA-A SASHAY FROM EVOLUTION TO REVOLUTION
As we all know, we are living in an era of competition in which companies strive to
operate in a dynamic and vibrant corporate world, not merely to secure a place to
survive but to emerge as winners in the end. The economic reforms of the 1990s have
totally changed the face of the corporate world. Globalization and liberalization have
reduced the gaps between businesses and their customers, which has further molded the
ways of carrying out business activities. No business can afford even a small
loophole in its functioning. "Customer is king" is not just a statement; it stands
true in today's corporate world, and it is actually practiced by corporates striving to
reach the acme of success.
The responsibility of a business organization is not only to meet customer
requirements but also to satisfy customers to the fullest extent, leaving no place for
complaints. This is possible only if the organization functions in such a way that the
chances of defects are totally eliminated, or at least reduced as far as possible.
Keeping this in view, Six Sigma has emerged as a tool of quality assurance to
customers. Six Sigma, generally accepted as a quality tool, has traveled a long way
from merely improving key business processes to managing the entire business. Six
Sigma is no longer a concept for one specific business activity; rather, it covers under
its ambit the activities of the organization as a whole. Today, Six Sigma stands for a
comprehensive quality assurance approach which ensures the presence of quality in each
activity that the organization undertakes. This stands true for companies like Motorola,
General Electric and the like. The situation has now changed: Six Sigma is recognized
as a synonym not only of quality but also of cost reduction, meeting customer
specifications and achieving business targets. Six Sigma has moved from the stage of
evolution to the stage of revolution.
The term sigma is used to designate the distribution or spread about the mean
(average) of any process or procedure.
For a process, the sigma capability (z-value) is a metric that indicates how well
that process is performing. The higher the sigma capability, the better. Sigma
capability measures the capability of the process to produce defect-free outputs. A
defect is anything that results in customer dissatisfaction.
Sigma capability and defects per million opportunities (DPMO):

6 Sigma    3.4 defects      (99.99966% good)
5 Sigma    233 defects
4 Sigma    6,210 defects
3 Sigma    66,807 defects
2 Sigma    308,537 defects

Key terms: Defect, Process Capability, Variation, Stable Operations, Design for Six
Sigma (DFSS).
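The DPMO figures above can be reproduced from the standard normal tail, assuming the conventional 1.5-sigma long-term shift used in Six Sigma conversion tables (the shift is an assumption of the sketch; the document itself does not mention it):

```python
# Sketch: convert a sigma capability level to defects per million
# opportunities (DPMO), assuming the conventional 1.5-sigma shift.
import math

def dpmo(sigma_level: float) -> float:
    """DPMO = P(Z > sigma_level - 1.5) * 1,000,000 for standard normal Z."""
    z = sigma_level - 1.5  # long-term 1.5-sigma shift convention
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail probability
    return tail * 1_000_000

for level in (6, 5, 4, 3, 2):
    print(f"{level} Sigma: {dpmo(level):,.1f} DPMO")
```

Running this reproduces the table: 3.4 at 6 Sigma, 233 at 5 Sigma, 6,210 at 4 Sigma, 66,807 at 3 Sigma, and 308,537 at 2 Sigma.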
Methodology
Six Sigma has two key methodologies: DMAIC and DMADV. DMAIC is used to
improve an existing business process. DMADV is used to create new product or
process designs in such a way that the result is a more predictable, mature and
defect-free performance.
The Six Sigma methodology is used to drive defects to less than 3.4 per million
opportunities. Its solution approaches are data intensive: intuition has no place in Six
Sigma, only cold, hard facts. It is implemented by Green Belts, Black Belts and Master
Black Belts, with the support of a Champion and a process owner, as a way to help meet
the business/financial bottom-line numbers.
DMAIC: Define, Measure, Analyze, Improve, Control.
DMADV: Define, Measure, Analyze, Design, Verify.
DMADV applies when an existing product or process has already been optimized (using
DMAIC or otherwise) and still does not meet the level of customer specification or the
Six Sigma level.
[Figure: the DMAIC cycle. Its annotations include: define the purpose of the process,
its goal and its boundaries (Define Process Mission); map process steps and identify
input/output measures; identify Critical-to-Quality and Critical-to-Process
characteristics; MSA, DCP, indicators and monitors; identify improvement opportunities
for service excellence and process excellence; build a PMS and develop dashboards for a
visual representation of performance.]
Six Sigma is a data-driven business strategy that seeks to streamline production
processes to constantly generate near-perfect products and services in order to achieve
breakthrough ROI. One of the pillars of Six Sigma is the pursuit of the elimination of
production process variation. For a production process to generate perfect products, the
process itself has to be perfect; therefore, in the design phase of the production
process, the process engineers need to precisely and accurately determine all the
Critical-To-Quality characteristics of the products or services they are about to
produce in order to minimize the possibility for variations to occur. Variation is said
to have occurred every time a product does not exactly match its predetermined CTQ
characteristics; the corrective actions therefore require the identification of the
sources of variation and their elimination.
The strategy used by Six Sigma to improve production processes is called DMAIC
(Define, Measure, Analyze, Improve and Control) and very specific tools are used at
every step of the DMAIC Roadmap.
Define
Since it would be hard to fix what is not known, the first step in a Six Sigma project
consists in defining the goals of the project and identifying the Ys (the problem being
addressed):
Define the purpose and the scope of the project.
Determine the resources needed.
Determine who the customers of the project are.
Map the SIPOC (Suppliers-Inputs-Process-Outputs-Customers).
Develop the project plan.
Measure
Identify the Xs (the factors that are thought to cause the problem being addressed) and
collect data
Identify the business processes that generate the Xs and the Y
Perform a regression analysis to measure the correlation between the Y and the Xs
Identify the Critical-To-Quality requirements (CTQ)
Define the metrics used to measure the CTQs
Measure the current process capabilities
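The regression step above can be sketched with a plain Pearson correlation between a single X and the Y; the data points below are invented purely for illustration:

```python
# Sketch: measuring how strongly a process input X tracks the output Y.
# The data values are hypothetical, for illustration only.
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

x = [1, 2, 3, 4, 5]           # a process input (an X)
y = [2.1, 3.9, 6.2, 7.8, 10]  # the output being measured (the Y)
r = pearson(x, y)
print(f"correlation r = {r:.3f}, r^2 = {r * r:.3f}")
```

A correlation (or the derived coefficient of determination r²) close to 1 marks the X as a strong candidate cause; values near 0 suggest the factor can be deprioritized.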
Analyze
Identify what inputs affect the output and to what extent they do so
Identify the root causes of the problem
Using Pareto Analysis, determine the vital few factors that contribute to the problem
Determine the new metrics needed to monitor performance
Improve
Based on the results obtained from the Analyze phase, develop and implement plans for
process changes that improve the vital few factors that impact the issue at hand.
Control
Determine the standard process to be followed, monitor the process, communicate and
train the employees.
DMAIC Phases and Tools Used

Define: Project planning, Project Charter, Stakeholder Analysis, SWOT Analysis,
SIPOC Analysis, Baseline, Kano Analysis, Brainstorming
Measure: Data Collection, 7 Basic Tools, Process Metrics, System Diagram, Process
Capabilities, Benchmarking, Gage R&R, Regression Analysis
Analyze: Analysis of Variance, Pareto Analysis, Fishbone Diagram, Failure Mode and
Effect Analysis (FMEA)
Improve: Develop solutions, Hypothesis Testing, Reliability Analysis, 5 Whys, Design
of Experiments, Taguchi Method, Brainstorming, Poka Yoke, 5S, Pugh Matrix
Control: Control Charts, Pre-Control Charts, Provide training, Time Series,
Standardization, Performance Management
Define: Large number of changes from client after approving engineering design.
Schedule slipping.
Measure: Number of changes, time involved in changes, compliance to critical path
schedule.
Analyze: No clear authority on client team to establish scope, any of client team could
make changes, verbal communication of changes, conflicting changes by client team
members. Language issues between client and engineers.
Improve: Regular engineering/client meetings whose topics included: the scope for each
section and its desired objective; known limitations defined; unclear requirements
questioned and options discussed. A written plan was signed by the client representative
and the engineering lead, and change requests were put in writing and signed by the
client representative. Changes decreased by a factor of 4.7 and the schedule was met.
Control: All change requests in writing. The approach was shared with the other
disciplines on the project.
BRAINSTORMING
Most project executions require a cross-functional team effort, because different
creative ideas at different levels of management are needed in the definition and
shaping of a project. These ideas are best generated through brainstorming sessions.
Brainstorming is a tool used at the initial steps of a project; it consists in
encouraging the voluntary generation of a large volume of creative, new and not
necessarily traditional ideas by all the participants. It is very beneficial because it
helps prevent narrowing the scope of the issue being addressed to the limited vision of
a small dominant group of managers.
Since the participants come from different disciplines, the ideas that they bring forth
are very unlikely to be uniform in structure and in essence, so the synergy that they
yield needs to be organized for the purpose of the project. If the brainstorming
session is unstructured, the participants can offer any idea that comes to their minds,
but this might lead the session to stray from its objectives.
A structured Brainstorming provides rules that make the collection of ideas better
organized. A form of Brainstorming called Nominal Group Process (or Nominal Group
Technique) is an effective way of gathering and organizing ideas. At the end of the
Brainstorming, a matrix called the Affinity Diagram helps arrange and make sense of the
many ideas and suggestions that were generated.
Brainstorming has some drawbacks:
Because the participants come from different areas of a company and therefore have
different backgrounds, the facilitator needs to be very competent, knowledgeable and
flexible.
The more vocal participants may end up having it their way.
If there are too many participants, the meetings will end up being too long and hard to
conduct.
Affinity Diagram
If the ideas generated by the participants in the brainstorming session are few (less
than 15), it is easy to clarify and combine them, determine the most important
suggestions and make a decision. But when the suggestions are too many, it becomes
difficult even to establish a relationship between them.
An Affinity Diagram or KJ method (named after its author, Kawakita Jiro) is used to
diffuse confusion after a brainstorm by organizing the multiple ideas generated during
the session. It is a simple and cost-effective method that consists in categorizing a
large number of ideas, data points or suggestions into logical groupings according to
their natural relatedness.
1. The first step in building the diagram is to sort the suggestions into groups based
on a consensus from the members.
2. Make up a header for the listing of each category.
3. An affinity must exist between the items on the same list, and if some ideas need
to be on several lists, let them be.
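The grouping steps above can be sketched in code; the ideas and the header names below are hypothetical examples, not from the source:

```python
# Sketch: organizing brainstormed ideas into affinity groups under headers.
# Ideas and headers are invented for illustration.
from collections import defaultdict

# (idea, agreed header) pairs produced by the consensus of step 1-2
ideas = [
    ("Late shipments", "Logistics"),
    ("Unclear invoices", "Billing"),
    ("Damaged packaging", "Logistics"),
    ("Wrong amounts charged", "Billing"),
]

groups = defaultdict(list)
for idea, header in ideas:
    groups[header].append(idea)  # step 3: an idea may appear under several headers

for header, items in groups.items():
    print(f"{header}: {items}")
```

The resulting mapping is the Affinity Diagram in tabular form: one header per column, related ideas listed beneath it.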
Qualitative change will always be opposed by restraining forces that are either too
comfortable with the status quo or afraid of the unknown. In a competitive global
market where constant innovation and continuous improvement are the driving forces
that keep businesses running, identifying those forces in order to assess the risks
involved and to better weigh the effectiveness of potential changes becomes an
imperative.
The Force Field Analysis is a managerial tool used for that purpose. FFA is a technique
developed by Kurt Lewin, a 20th-century social scientist, as a tool for analyzing
forces opposed to change. It rests on the premise that change is the result of a
conflict between opposing forces: in order for change to take place, the driving forces
must overcome the restraining forces.
Whenever changes are necessary, FFA can be used to determine the forces that oppose or
stimulate the proposed changes. The opposing forces that are closely affected by the
changes must be associated with the risk assessment and the decision making. The two
groups are charted according to how strongly they can impact the changes, with the
objective of abating the opposing forces and invigorating the proponents of change.
The first step should be the description of the current and the ideal states, to
analyze how they compare and what will happen if changes are not made.
Describe the problem to be solved and how to go about it. Brainstorming sessions
can be an effective tool for that purpose.
Identify and divide the stakeholders who are directly implicated in the decision
making into two groups, the proponents of the changes and the restraining forces,
and then select a facilitator to mend fences.
Each group should list the reasons why it is for or against the changes. The listing
can be based on questionnaires for or against changes.
The listing should classify the reasons according to their level of importance; a scale
value can be used as a weight for each reason. Some of the issues to be considered are:
Company's needs
Cost of the changes
Company's values
Social environment (institutions, policies, etc.)
Company's Resources
Question every item on the lists to test their validity and determine how critical
they are for the proposed changes.
Add the scores to determine the feasibility of the changes. If the reasons for a
change are overwhelming, take the appropriate course of action by strengthening
the forces for change.
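The scoring steps above can be sketched as a simple weighted tally; the forces and their weights (1 = weak ... 5 = strong) are invented for illustration:

```python
# Sketch: Force Field Analysis scoring. Forces and weights are hypothetical,
# loosely matching a facility-consolidation decision.
driving = {          # forces for the change, weighted 1-5
    "Lower operating cost": 5,
    "Better coordination": 4,
    "Shared resources": 3,
}
restraining = {      # forces against the change, weighted 1-5
    "Relocation cost": 4,
    "Employee resistance": 3,
    "Lease penalties": 2,
}

drive_score = sum(driving.values())
restrain_score = sum(restraining.values())
print(f"driving = {drive_score}, restraining = {restrain_score}")
if drive_score > restrain_score:
    print("The reasons for change outweigh the resistance; "
          "strengthen the driving forces and proceed.")
```

Questioning each item before summing (the validity test above) keeps inflated weights from skewing the comparison.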
An operations manager has suggested that all the operations of a fictitious company
should be consolidated in one facility. The following diagram depicts an example of a
Force Field Analysis.
Pareto Analysis
Pareto analysis is simple; it is based on the principle that 80% of problems find their
roots in 20% of causes. That principle was established by Vilfredo Pareto, a 19th-
century Italian economist who discovered that 80% of the land in Italy was owned by
only 20% of the population. Later empirical evidence showed that the 80/20 ratio has a
near-universal application:
80% of customer dissatisfaction stems from 20% of defects
80% of the wealth is in the hands of 20% of the people
20% of customers account for 80% of a business
When applied to management, the Pareto rule becomes an invaluable tool. In problem
solving, for instance, the objective should be to find and eliminate the circumstances
that make the 20% "vital few" possible, so that 80% of the problems are eliminated. It
is worth noting that Pareto Analysis is a better tool for detecting and eliminating
sources of problems when those sources are independent variables.
The first step will be to clearly define the goals of the analysis. What is it that we are
trying to achieve? What is the nature of the problem we are facing?
The next step in the Pareto Analysis is the data collection. All the data pertaining to
factors that can potentially affect the problem being addressed need to be quantified and
stratified. In most cases, a sophisticated statistical analysis is not necessary; a simple tally
of the numbers suffices to prioritize the different factors. But in some cases the
quantification might require statistical analysis to determine the level of correlation
between the causes and the effect. A regression analysis can be used for that purpose, a
coefficient of correlation or a coefficient of determination can be derived to estimate the
level of association of the different factors to the problem being analyzed.
Then a categorization can be made, the factors are arranged according to how much they
contribute to the problem. The data generated is used to build a cumulative frequency
distribution.
The next step will be to create a Pareto Diagram or Pareto Chart in order to visualize
the main factors that contribute to the problem, and therefore concentrate on the
"vital few". The Pareto Chart is a simple histogram: the horizontal axis shows the
different factors while the vertical axis represents the frequencies.
Since all the different causes will be listed on the same diagram, it is necessary to
standardize the unit of measurement and set the time frame for the occurrences.
The building of the chart requires prior data organization. A four-column data summary
must be created to organize the information collected. The first column lists the
different factors that cause the problem; the second column lists the frequency of
occurrence of the problem during a given time frame; the third column records the
relative frequencies, in other words the percentage of the total; and the last column
records the cumulative frequencies, bearing in mind that the data are listed from the
most important factor to the least.
The following data was gathered during a period of one month to analyze the reasons
behind a high volume of customer return of cellular phones ordered on line.
Factors                          Frequency   Relative frequency   Cumulative frequency
Misinformed about the contract   165         58%                  58%
Wrong product                    37          13%                  71%
(factor name missing)            30          11%                  82%
Defective product                26          9.2%                 91.2%
Changed my mind                  13          4.6%                 95.8%
(factor name missing)            12          4.2%                 100%
Totals                           283         100%
The diagram itself consists of three axes. The horizontal axis lists the factors; the
left vertical axis lists the frequency of occurrence and is graded from 0 to at least
the highest frequency. The right vertical axis is not always present on Pareto charts;
it represents the cumulative percentage of occurrences and is graded from 0 to 100%.
The breaking point (the point on the cumulative frequency line at which the curve is no
longer steep) on this graph is at around "Wrong product". Since the breaking point
divides the "vital few" from the "trivial many", the first two factors, "Misinformed
about the contract" and "Wrong product", are the factors that need the most attention;
by eliminating the circumstances that make them possible, we will eliminate about 71%
of our problems.
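The relative and cumulative frequencies behind the chart can be computed directly from the tallies. The frequencies below come from the cell-phone-returns example (the 165 for the top factor is inferred from the 283 total; two factor names were not recoverable from the source, so placeholders are used):

```python
# Sketch: building the frequency columns of a Pareto summary table.
# Factors are sorted from most to least frequent, as the method requires.
returns = [
    ("Misinformed about the contract", 165),
    ("Wrong product", 37),
    ("Other (unnamed factor)", 30),
    ("Defective product", 26),
    ("Changed my mind", 13),
    ("Other (unnamed factor 2)", 12),
]

total = sum(freq for _, freq in returns)
cumulative = 0.0
for factor, freq in returns:
    share = 100 * freq / total        # relative frequency (% of total)
    cumulative += share               # cumulative frequency
    print(f"{factor:32s} {freq:4d}  {share:5.1f}%  {cumulative:6.1f}%")
```

The first two rows already account for about 71% of all returns, which is exactly the "vital few" reading made above.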
Project Management
The objective of a project is either to solve an existing problem or to start a new venture.
In either case a carefully planned and organized strategy is needed to accomplish the
specified objectives. The strategy includes developing a plan which will define the goals,
explicitly set the tasks to be accomplished, determine how they will be accomplished,
estimate time and the resources (both human and material) needed for their completion.
How projects are planned and managed seriously impacts the profitability of the
ventures that they are intended for and the quality of the products or services they
generate.
Most project management plans are subdivided into four major phases: the feasibility
study, the project planning, the project implementation and the verification or evaluation.
Each one of these phases requires strategic planning.
Since all the tasks included in a project cannot be executed at the same time because
of their interdependence, a critical path needs to be determined when scheduling the
activities.
The three major tools that are used for the purpose of planning and scheduling the
different tasks in project management are the Gantt chart, the Critical Path Analysis (or
Method) and the Program Evaluation and Review Technique.
But before any scheduling starts, it is essential to accurately estimate the time that every
task might require. A good scheduling must take into account the possible unexpected
events and the complexity involved in the tasks themselves.
This requires a thorough understanding of every aspect of the tasks before developing a
list.
One way of creating a list of tasks is a process known as the Work Breakdown Structure
(WBS). It consists in creating a tree of activities that takes into account their
lengths and contingencies. The WBS starts with the project to be achieved and goes down
to the different steps necessary for its completion. As the tree grows, the list of
tasks grows.
Once the list of all the tasks involved is known, based on experience or good wit, an
estimation of the time required can be made and milestones determined.
Knowing the milestones of a project with certainty is extremely important, because they
can affect the timeliness of the project completion as a whole, and delays in project
completion can have serious financial consequences and, above all, can cost companies
market share. In a globally competitive market, innovation is the driving force that
keeps businesses alive, and this is most obvious in high-tech industries. Most
companies have several lines of products, and each one of them is required to put out a
new product every year or every six months. If, for instance, Dell's Inspiron or
Latitude line fails to put out new products on time, the company is likely to lose
profit from the forgone sales (the loss is proportional to the products' time to live)
and market share to its competitors.
Gantt charts also allow the project team to visualize the resources needed to complete
the project and the timing for each task; the chart therefore shows where the task
owners must be at any given time in the execution of the project. The team working on
the project should know whether it is on schedule just by looking at the chart.
The chart itself is divided in two parts. The first part shows the different tasks, the
task owners, the timing and the resources needed for their completion; the second part
graphically visualizes the sequence of the events.
Some tasks cannot start until the preceding ones are finished. The main disadvantage of
a Gantt chart is that it does not take into account the interdependence between tasks:
it shows the sequence and the beginnings and ends of the tasks, but does not indicate
whether one task has to wait for the end of a preceding one.
The Critical Path Analysis is another tool used for complex projects, and it does take
into account the interdependence of the tasks.
[Table: the tasks A-H with their predecessors (task A has none) and their durations.]
Based on this information, we can determine the Critical Path, the project duration and
the slack time for H.
Task A is the first on the list; no other task can start until it is completed. Tasks B
and C come next; they are contingent on task A. Tasks E, G and H are on the same path
as C, while tasks D and F are on the same path and depend on B.
The letters on the diagram represent the different activities and the numbers beside
them represent the time it will take to accomplish the tasks.
The diagram shows that there are two paths through the project: ABDF and ACEGH. The
duration of ABDF is 20 days and the duration of ACEGH is 18 days. Since ABDF is the
longest path, it is also the Critical Path. The earliest that task H can start is day
16.
The advantage of the Gantt chart over the CPA is the graphical visualization of the
tasks along with their timing, the task owners, and the start and end times. The
advantage of the CPA over the Gantt chart is a sequence of events that takes into
account the interdependence of the tasks.
The CPA is a deterministic model: it does not take into account the probability of
tasks being completed sooner or later than expected; time variation is not considered.
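The critical-path computation for the A-H example can be sketched as a longest-path calculation. The individual task durations below are assumptions, chosen only to match the stated totals (ABDF = 20 days, ACEGH = 18 days, earliest start of H on day 16), since the original duration table was not recoverable:

```python
# Sketch: earliest-finish / critical-path computation for the A-H example.
# Durations are illustrative assumptions consistent with the text's totals.
durations = {"A": 5, "B": 6, "C": 4, "D": 4, "E": 3, "F": 5, "G": 4, "H": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B"],
                "E": ["C"], "F": ["D"], "G": ["E"], "H": ["G"]}

earliest_finish = {}
def finish(task):
    """Earliest finish time: latest predecessor finish plus own duration."""
    if task not in earliest_finish:
        start = max((finish(p) for p in predecessors[task]), default=0)
        earliest_finish[task] = start + durations[task]
    return earliest_finish[task]

duration = max(finish(t) for t in durations)       # 20 days -> ABDF is critical
earliest_start_H = finish("H") - durations["H"]    # day 16, as stated
print(f"project duration = {duration} days")
print(f"earliest start of H = day {earliest_start_H}")
```

With these assumed durations the slack for H is 20 − 18 = 2 days, the difference between the two path lengths.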
Solution:
[Table: activities with their predecessors, most likely times, expected times and
variances. Recoverable expected times include 4.8, 5.83, 6.83 and 3.83 days;
recoverable standard deviation/variance pairs include (0.17, 0.029), (0.33, 0.11) and
(0.5, 0.25).]
The critical path has the longest duration. It is critical because any delay in any of
its tasks will delay the whole project. In this case, we have two paths: ABDF, which
will last 20 days, and ACEGH, which will last 17 days. So the critical path is ABDF.
The estimated variance for the critical path is 0.029 + 0.11 + 0.25 + 0.11 = 0.499,
with a standard deviation of √0.499 ≈ 0.71.
Completing the project at least 1 day earlier means completing it in 19 days or less.
The probability of such an event is found using the Normal distribution:
z = (19 − 20) / 0.71 ≈ −1.42. A z of 1.42 corresponds to 0.9222 in the Normal table;
since our z is negative, the probability we are looking for is the complementary tail
area, 1 − 0.9222 = 0.0778.
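The same probability can be computed without a table, using the exact normal CDF (the result agrees with the 0.0778 above to within the rounding of z to 1.42):

```python
# Sketch: probability of finishing the 20-day critical path in 19 days or
# less, given the path variance of 0.499 computed above.
import math

variance = 0.029 + 0.11 + 0.25 + 0.11   # sum of critical-path task variances
sigma = math.sqrt(variance)             # ~0.71 days
z = (19 - 20) / sigma                   # ~ -1.42
p = 0.5 * math.erfc(-z / math.sqrt(2))  # standard normal CDF at z
print(f"sigma = {sigma:.3f}, z = {z:.2f}, P(T <= 19) = {p:.4f}")
```

The central-limit assumption here is the usual PERT one: the path duration is the sum of independent task durations, so it is treated as normally distributed.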
Stakeholder Analysis
The stakeholders are the people who can affect or be affected by a project. They can be
department managers, customers, suppliers or anyone among the employees who will
contribute to its realization.
But the importance of the various stakeholders in a project is not the same, so a
stratification of the stakeholders according to how they impact or are impacted by the
project is very important.
The techniques used to identify the stakeholders, determine their relative importance for a
project and stratify them are called the Stakeholder Analysis. The benefit of using this
technique is that it helps anticipate how the different groups will influence the project and
therefore develop the appropriate response strategies to remove obstacles and reduce
negative impacts.
The first step is to determine who the stakeholders are and how they affect the project.
This step is better achieved during a brainstorming session.
After the list of the stakeholders is agreed upon, they are subdivided into groups
according to their domain of interest and how they can benefit or hinder the project (a
special tool called Force Field Analysis deals with how to approach the negative forces
that might be opposed to changes).
The next step will consist in classifying them according to their importance for the
project and then assessing the responsibilities and expectations for all of them.
Assess every stakeholder's positive or negative impact for the project and grade
their impact (From 1 to 5 or from A to E for instance).
Determine what can be done to lessen their negative impacts and improve their
positive contribution to the project.
For each stakeholder, the analysis grid asks four questions: How does the project
impact them? How do they impact the project? How can the positive impacts on
stakeholders be maximized? How can the negative impacts on stakeholders be minimized?
The stakeholders listed are: Customers, Program managers, Operations, Inventory,
Suppliers, Employees, Institutions, Outside the Company.
Fishbone Diagram
The cause-and-effect diagram, also known as the fishbone diagram (because of its shape)
or Ishikawa diagram (after its creator), is used to synthesize the different causes of
an outcome. It is an analytical tool that provides a visual and systematic way of
linking different causes (inputs) to an effect (output). It can be used in the design
phase of a production process as well as in an attempt to identify the root causes of a
problem. The effect is considered positive when it is an objective to be reached, as in
the case of a manufacturing design; it is negative when it addresses a problem being
investigated.
The building of the diagram is based on the sequence of events, "sub-causes" are
classified according to how they generate "sub-effects" and those "sub-effects" become
the causes of the outcome being addressed.
The Fishbone diagram does help visually identify the root causes of an outcome, but it
does not quantify the level of correlation between the different causes and the
outcome. Further statistical analysis is needed to determine which factors contribute
the most to creating the effect. Pareto Analysis is a good tool for that purpose, but
it still requires prior data gathering. Regression analysis allows the quantification
and determination of the level of association between causes and effects. A combination
of Pareto and Regression Analysis can help not only determine the level of correlation
but also stratify the root causes. The causes are stratified hierarchically according
to their level of importance and their areas of occurrence.
The first step in constructing a Fishbone diagram is to clearly define the effect being
analyzed.
The second step will consist in gathering all the data about the Key Process Input
Variables (KPIVs): the potential causes (in the case of a problem) or requirements (in
the case of the design of a production process) that can affect the outcome.
The third step will consist in categorizing the causes or requirements according to
their level of importance or areas of pertinence, under a few frequently used
categories.
Sub-categories are classified accordingly; different kinds of machines and computers,
for instance, can be classified as sub-categories of equipment.
The last step is the actual drawing of the diagram.
SWOT Analysis
Any business that evolves in a competitive environment faces threats from its
competitors and its environment; what keeps it in the market is the potential
opportunities that can be exploited.
The strength of a company depends on its ability to face the threats by better
exploiting these opportunities. So when starting a new project, a company needs to
assess its position in the market.
The SWOT (Strengths-Weaknesses-Opportunities-Threats) Analysis is an effective tool
that can be used in project planning to assess a company's strategic position and gauge
its ability to respond to the threats posed by its competitors or to exploit the
opportunities offered by the market.
The SWOT Analysis is done at two levels. The first level consists in assessing the
company's internal situation by analyzing its strengths and weaknesses (how prepared is
it to engage in the planned project?). The external situation is contingent upon the
level of threats from the competitors and the opportunities offered by the market.
So SWOT Analysis consists in analyzing information about a business and its environment
in order to determine how to overcome potential obstacles to a project.
The following chart is a synthesis of potential strengths and weaknesses that might be
addressed when analyzing a company's internal situation as it relates to a particular
project.
Strengths: Resources (cash flow, human, material, financial), Capabilities, Managerial
experience
Weaknesses: Inexperienced management, Costly process, Poor marketing
When the internal conditions are deemed to be favorable, the next step in the SWOT
Analysis will be the assessment of the threats and opportunities that come from the
environment in which the company evolves.
Opportunities: Market growth, Weak competitors, Technological evolution, Loosened
regulations
Threats: Tougher legislation
But the first step in project management is the specification of the project: the
definition of its goals and objectives. A lucid definition of a project provides a
solid basis not only for its prompt completion but also for future Quality Assurance.
I. Project definition
Since most of the cost savings are contingent upon the planning and execution of
projects, before any action is taken, it is necessary to have a clear common understanding
of all the aspects of the project, its extent, the key stakeholders, its goals and its
objectives. A good definition provides a clear appreciation of every stakeholder's role
and what is expected from them; it also provides a tacit agreement between the parties.
The definition of the project is laid out in a document called the project charter.
1. Project Charter
The project charter is written either by the project sponsor or by the project Champion
with the approval of the sponsor. It is issued by upper management to formally make the
project official and it gives the project Champion and his team the power to use the
organization's resources for its purpose. The charter is then made public and
distributed to all the stakeholders.
The following table summarizes the most common sections of a project charter and their
meaning.
Project Title / Project number
Project Champion
The project champion or manager who ensures that the resources are available and that
the project is executed in a timely and cost-effective manner.
Project description
It defines the project and gives a clear reason for its purpose and the measurable
results expected. It includes the project background and relates it to present
conditions. It makes clear the expectations from every stakeholder.
It also defines the success of the project and addresses the consequences of a failure.
The project description defines the constraints and the factors that can affect the
project.
Value statement
Clearly set out the positive impact of the project on the company.
Stakeholders
The list starts with the project sponsor, the team working on the project, the customers (who can
be the project sponsor), and the external customers.
It identifies the key people involved in the project and their roles.
Risk Analysis
Timeframe
Project budget
Alternative plans
What to do if things do not go according to plan.
Documentation
Project Monitoring
How and when to conduct meetings, who participates and for what purpose. How to
measure the progress of the different aspects of the project.
Scheduling
Important events and dates that affect the project. Different deadlines for sub-parts of the
project.
The Theory of Constraints was first introduced in 1985 (just a few years before Six
Sigma) by Eliyahu Goldratt in his famous book The Goal, and later developed in his
subsequent books such as The Critical Chain, It's Not Luck, The Haystack
Syndrome and The Theory of Constraints.
It is founded on the notion that in any business structure, at any given time, one factor
tends to impede the company's ability to reach its full potential. All business operations
are structured like a chain of events, like linked processes with each process being a
dependent link and at any given time, one link on the chain tends to restrain the whole
chain and prevent it from reaching its Goal. Since the objective of a company is not to
maximize the efficiency of the different parts that compose it, but to maximize the overall
efficiency of the business as an entity, it becomes necessary to identify the constraint and
proceed with the needed improvements.
One of the first lessons that Goldratt gives in The Goal is an obvious one, even though
some businesses fail to understand it: companies do not exist for the sake of being
productive, or for the sake of producing high-quality goods and services, or for making their
customers happy. The reason why companies are set up, their raison d'être, their Goal, is
to make profit, to make money. Productivity, high-quality products and services, and
customer satisfaction are nothing but very necessary ways and means that companies
use to reach their Goal. So the cost of being productive, the cost of quality and
customer satisfaction must be contingent upon the Goal.
In his quest to show the ways and means to reach the Goal, Goldratt borrows some
commonly used business terms but gives them a different meaning. Three of the most
important of these are Throughput, Inventory and Operational Expenses, which he
defines as follows.
Some of the derivatives of these metrics are the Throughput per Unit and the Throughput
per Unit of the Constraining factor.
To maximize total Throughput, the company must concentrate on improving the sales
of the products that provide the highest Throughput per unit of the constraining factor.
This is because the bottleneck determines the throughput.
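This prioritization rule can be sketched in a few lines of code. The product names and figures below are purely hypothetical illustrations, not taken from the text:

```python
# Rank products by Throughput per unit of the constraining factor.
# All product names and figures are hypothetical illustrations.
products = [
    # (name, throughput per unit sold, bottleneck minutes consumed per unit)
    ("Product A", 12.0, 4.0),
    ("Product B", 20.0, 10.0),
    ("Product C", 9.0, 2.0),
]

def per_constraint_unit(product):
    name, throughput, bottleneck_minutes = product
    return throughput / bottleneck_minutes

# Promote the products that earn the most per minute of bottleneck time.
ranked = sorted(products, key=per_constraint_unit, reverse=True)
for name, throughput, minutes in ranked:
    print(f"{name}: {throughput / minutes:.2f} per bottleneck-minute")
```

In this illustration, Product B has the highest throughput per unit sold, but Product C makes the best use of the scarce bottleneck time, so under this rule its sales should be promoted first.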
The objective of a company must be to maximize the Throughput by minimizing the
Inventory and the Operational Expenses. To reach that objective, it must continuously
strive to identify the Constraints, the Bottlenecks and proceed with the necessary
changes. The bottleneck is defined as a resource whose capacity is equal to or is less than
the demand placed on it. The slowest performing area in a process determines the level of
output generated by that process.
To make the necessary changes, the company first needs to answer the following three
questions:
What to change?
What to change to?
How to cause the change?
The changes that need to be made must address the area of the business that constitutes
the bottleneck. Overlooking the interactions between the different departments in a
company and improving only the areas that are perceived to constitute constraints might
only address the symptoms and in some cases aggravate the problems.
To make the necessary changes, Goldratt suggests the following 5 steps:
1. Identify the system's constraint.
2. Decide how to exploit the constraint.
3. Subordinate all other operations to that decision.
4. Elevate the constraint if, after exploiting it and subordinating all other operations
to it, more capacity is needed to meet market demand.
5. Restart the process without letting inertia become the system's constraint. The
process needs to be restarted again and again until the current constraint is no
longer the constraint.
We can tell that the current constraint is no longer the constraint when further changes
made to it no longer positively impact the bottom line of the company as a whole. At that
point another process, another department, another link must have become the weakest
link, the new constraint, and it is that link that needs improvement.
PROCESS MAPPING
One easy way to identify a loop's type as balancing or reinforcing is to count the
number of opposite interactions, or "o's", in the loop. Whenever you have an odd number
of "o's", the loop is balancing. Conversely, any loop with an
even number of "o's" is reinforcing.
You should always review your diagram before labeling it as a Balancing (B) or
Reinforcing (R) loop to ensure the story and data agree with the label.
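The counting rule above can be expressed directly in code. A minimal sketch (the link labels are hypothetical), where each link in the loop is marked "s" for a same-direction influence or "o" for an opposite one:

```python
def classify_loop(links):
    """Odd number of opposite ('o') links -> balancing loop;
    even number -> reinforcing loop."""
    opposite_count = sum(1 for link in links if link == "o")
    return "balancing" if opposite_count % 2 == 1 else "reinforcing"

# A two-link loop with one opposite interaction is balancing:
print(classify_loop(["o", "s"]))  # balancing
# A loop with two opposite interactions (an even number) is reinforcing:
print(classify_loop(["o", "o"]))  # reinforcing
```

As the text cautions, the count is only a first check; the diagram should still be reviewed against the story and the data before the label is accepted.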
Basic Examples
Let's use variables that represent quantities that can vary over time. A term
like Gross Profit will have variation.
For our example of a Balancing Loop, let's look at the interaction of
Production Errors and Gross Profit. In our hypothetical organization the data
tells us that as Production Errors increase, Gross Profit decreases and as
Production Errors decrease Gross profit increases. In the life of this
organization we learn that as GP decreases, management attention increases. This
temporary attention produces the Hawthorne Effect and we see a decrease in Production
Errors, and GP begins to increase. But as GP increases, the Hawthorne Effect expires and
Production Errors increase again. These two variables interact over time to produce a flat
GP and Production Error rate. (Management Attention is not listed as a variable as it
is not a constant step or measurement in our process.)
Now let's look at an example of a Reinforcing Loop. In this example the data shows that
as the Gross Profit increases the R&D Budget increases. As the R&D budget increases
new products are built, more automation in the production line is implemented and GP
increases. As the GP increases the budget for R&D increases and the organization
produces better products and reduces production costs. This interaction increases Gross
Profit. Here we can graph a reinforcing loop producing a Virtuous Cycle. Theoretically
this cycle will go on continuously, however eventually the Law of Diminishing Returns
comes into play.
Now that we have laid the foundation of causal loops let's illustrate a real life process
with a traditional process map and then illustrate the same process in a causal loop
diagram.
Scenario:
An IT network monitoring firm is tasked to grow its business by improving its accuracy
in remotely monitoring servers. It is believed by the head of the organization that if the
remote server monitoring accuracy grows, new business can be won and the overall
business will grow. As such, Business Growth is the KPOV (Key Process Output
Variable). A linear high level process diagram would be illustrated as follows:
This linear approach is accurate but fails to examine and identify the interrelationships
between and across all the process steps that effectively drive the KPOV. What is missing is
the correlation of the end of the linear process map back to its starting point. By using a
simple Causal Loop Diagram to map this process we can see the process from a 360
degree perspective. This alternative view can help identify potential root causes and
potential solutions via process interventions.
Complex business operations are linked to one another like a chain and at any given time,
the poorest performing operation acts as the weakest link and constitutes a constraint, an
impediment that limits the ability of the system as an entity to achieve what it is meant
for. The improvement of the system requires the focus on the weakest link. Disregarding
the constraint, the weakest link and improving any other aspect of a business can
eventually lead to an aggravation of the problems.
The improvement process itself requires some steps, the first of which is the
identification of the constraint. Once the constraint is identified, it becomes necessary to
determine what changes need to take place and how they are to be brought about.
The ideal production process that would eliminate the waste that comes under the form of
excess inventory or idle machines would be the leveled production process based on the
idea of one-piece flow. A process that does not build inventory and that uses both labor
and machines to their full potential. In that case, every operation will take the same
amount of time to process a given quantity of material.
Yet in the real world this seldom happens. Because of the complexity of
manufacturing processes and the usually uneven process capabilities of a company's
resources, a one-piece flow process makes it hard to use all the factors of production
to their full potential at the same time. The task of management therefore becomes to
determine the optimal combination of eliminating waste while at the same time making
the best use of the resources.
For the sake of our argument let's consider the operations in a fictitious soap
manufacturing company. The operations are simplified to the following steps.
The time spent by the different operations to process 5 tons of material is:
1 hour for reception and inbound inventory, 4 hours for mixing, 5 hours for cutting and
packing, 2 hours for outbound stocking and 2 hours for shipping.
This scenario clearly shows the "cutting and packing" operations to be the critical
constraint for the company as a whole because no matter how well all the other
departments perform it will be impossible for the business to process 5 tons of soap in
less than 5 hours. The bottleneck caused by the "cutting and packing" operations appears
in two forms: overproduction generated by the upstream operations because the "cutting
and packing" operations cannot process all the output that comes from the "mixing"
operation (if it operates at its full potential) and underused resources downstream.
An improvement in any aspect of the company's operations other than the "cutting and
packing" operations will lead to an increase in either excessive idle inventory after the
mixing operations or more idle machines and higher cycle time.
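The bottleneck in the soap example can be located mechanically from the processing times listed above; a short sketch:

```python
# Hours each operation needs to process 5 tons, from the soap example.
operation_hours = {
    "reception and inbound inventory": 1,
    "mixing": 4,
    "cutting and packing": 5,
    "outbound stocking": 2,
    "shipping": 2,
}

# The bottleneck is the slowest operation; it paces the whole chain.
bottleneck = max(operation_hours, key=operation_hours.get)
print(bottleneck)  # cutting and packing

# Overall throughput can never exceed the bottleneck's rate:
best_rate = 5 / operation_hours[bottleneck]  # tons per hour
print(best_rate)  # 1.0
```

No matter how the other four operations improve, the chain cannot process 5 tons in less than the 5 hours the bottleneck requires.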
This example applies to services as much as it does to manufacturing since bottlenecks do
not always come under the form of excess inventory or idle resources. In the service
industry, the materials are usually replaced by a flow of information which, these days,
is managed by information technology. Let's suppose that a cellular phone service
provider uses Oracle, Compass and a WMS (Warehouse Management System) as
channels to process information. If these three software packages process the information
in a linear flow and one of them is slower than the other two, that package will become a
constraint for the IT department, and improving the performance of any software other
than the bottleneck will only lead to more problems.
The first step therefore will be the assessment of a periodic (weekly, monthly...) volume
of orders from the customers that will later be subdivided to determine the daily
production requirement.
The next step will be to evaluate the demand for raw materials required to
satisfy the customers' demand and, last but not least, to determine the ability of the
company's resources to meet the demand, along with their productivity and performance.
How long does it take every operation to process a given number of orders?
The current state Value Stream Mapping is not intended to describe the ideal state of the
company but to visualize what is happening now.
The following map is a simplified map of the Value Stream of our soap manufacturer.
LEAN MANUFACTURING
We try to understand Lean Manufacturing through a case study, "Improving
Turnaround in Back Office Processes: A Lean-Six Sigma Case Study."
There are continuing questions about the relationship between Lean Manufacturing and
Six Sigma techniques. This relationship has been expressed as follows:
The realization that the first category of problems was the one to be attacked (customer
focus) came spontaneously.
Then prioritization was done to select the most important problem using the weighted
voting system followed by a quick discussion to produce a consensus. The theme (CTQs)
selected was "Consistency of Quality and Timeliness."
The Consistency of Product Quality was resolved first and a 98% error reduction was
achieved.
The project described here was born out of a chance remark by one of the participants in
the group: "We are going to add new capacity." To my casual query, "Why?", came the
answer: "We need to improve the turnaround." I immediately intervened, stating that
turnaround is not dependent on capacity. The disbelief that stared back at me was but a
reflection of the prevailing mindset and of the task at hand to change it.
A cross-functional team including the planning personnel, and the key representatives of
the operations from each stage of the process was formed to test the principles of Lean
Manufacturing in practice.
1.2) Definition of the problem: A second level of brainstorming generated a list of
problems which were affinitized into customer problems and internal problems. The
customer problems were expressed as:
1. Delayed delivery -- frequent customer complaints
2. Peaking of incoming loads aggravates delays.
The other problems were set aside as they were causes of the customer problems rather
than intrinsic problems themselves.
The Project Charter was then set out as follows:
Problem = Customer desire - Current state
1.3) Measure the problem: A suitable data collection check sheet was designed and data
was collected for two weeks on the turnaround time of documents to define the problem
quantitatively. The following results were obtained:
Customer Requirement Of Turnaround Time: <5 days
Current State Average Turnaround: 5 days
sigma: 1 day
3-sigma (99.7%) Delivery To Standard: <8 days
The interpretation of consistency of delivery (turnaround) using sigma created disbelief at
first as the group struggled to understand the concept. Gradually however it was grasped
-- the problem was not the average turnaround, which was within the customer limit but
the variability. This was the second major mindset change and led to the definition of the
goal: Reduce Turnaround time by 50% so that its (average + 3 sigma) < 4 days.
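The sigma arithmetic behind this goal can be checked in a few lines. Only the 5-day mean and 1-day sigma come from the measurements above; the "improved" figures in the last line are hypothetical:

```python
def delivery_to_standard(mean_days, sigma_days):
    """Upper bound covering ~99.7% of deliveries, assuming a normal process."""
    return mean_days + 3 * sigma_days

# Measured current state: mean 5 days, sigma 1 day.
print(delivery_to_standard(5, 1))          # 8 -> worse than the <5 day requirement
print(delivery_to_standard(5, 1) < 5)      # False

# Goal: (average + 3 sigma) < 4 days; e.g. a hypothetical improved process:
print(delivery_to_standard(2.0, 0.5) < 4)  # True
```

This makes the group's insight concrete: the average of 5 days meets the customer limit, but the variability pushes the 3-sigma bound out to 8 days.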
3. Analysis of the Problem
A flowchart was prepared outlining each activity in the process. Many gaps were
revealed that had to be filled up and thought through. Standard times of each process
per batch of 50 pages were tabulated in a specially designed check sheet. The team
was amazed when the time for the value adding steps added up to only 31 hours. The
most important mindset change had begun, asking, "Why do we take 5-8 days?"
The principles of Lean Manufacturing and turnaround time reduction were then
introduced:
Finding the vital causes: Data was collected for three batches clocking the timing at each
stage and comparing it to the standard timings to find where time was being lost on a
specially designed data sheet.
With the data it took the group only a few minutes to draw a Pareto Diagram of delays
and conclude that three vital causes accounted for 70% of the delay, which was
non-processing (waiting) time due to:
Lack of awareness -- large waiting times for small items falling between
departments
Inventory
Unscheduled work patterns and therefore unavailability of personnel at the right
time
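The Pareto step the team performed can be sketched as follows. Only the three cause names come from the list above; the hour figures and the extra minor causes are hypothetical:

```python
# Hypothetical non-processing (waiting) hours lost per cause.
delays = {
    "lack of awareness": 35,
    "inventory": 22,
    "unscheduled work patterns": 13,
    "misrouting": 12,     # hypothetical minor cause
    "rework": 10,         # hypothetical minor cause
    "approval waits": 8,  # hypothetical minor cause
}

total = sum(delays.values())  # 100 hours in this illustration
vital_few, running = [], 0
# Accumulate the largest causes until ~70% of the delay is explained.
for cause, hours in sorted(delays.items(), key=lambda kv: kv[1], reverse=True):
    if running / total >= 0.70:
        break
    vital_few.append(cause)
    running += hours

print(vital_few)  # the three vital causes
```

With these illustrative numbers the three largest causes together explain 70% of the waiting time, which is exactly the "vital few" a Pareto Diagram highlights.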
4. Idea Generation
The old mindsets were shattered but the group was struggling to understand the
concepts confidently enough to start applying them in regular production. An
experiential simulation classroom exercise in which the group members participated
was designed and carried out to experience the concepts first hand. Armed with this
conviction, the team proceeded to the next step to design a pilot test.
Planning the Pilot: A step-by-step implementation plan was drawn up. It was estimated
that cutting inventory and scheduling the production cycle to flow in the current batch
sizes would lead to the achievement of the goal. The whole chain was briefed about the
new method and agreed on a schedule. The team was ready to run the pilot.
5. Idea Modification
A pilot batch was run to test the scheme: it took 36 hours. Amazed jubilation was
followed by an enthusiastic buy-in of the concepts, demonstrating my belief that nothing
works better than results in accomplishing mindset change. From then on it was difficult
to restrain the group from pushing ahead too fast.
Results (ROI annualized)

1. Well downtime reduced from 467 to 138 days/month; production increase of 377
Barrels of Oil per Day (BOPD). ROI: $2,500,000

2. Soft water usage by steam generators in oil reclamation: reduced CGS by 65%
through proper chemical mixture ($1,000,000 chemical savings); increased soft water
increased oil production by 30,000 barrels per day ($18,900,000 increased oil
production); increased oil reclamation.

3. Improving automatic well test process: increased test accuracy by 25%; improved
the percentage of successful workovers. ROI: $500,000

4. Oil content of injection water: reduced content from 40 ppm to 25 ppm; increased
oil production by over 100 BOPD; no increase in chemical use. ROI: $800,000

5. Periodic well test process: increased accuracy by 25%. ROI: $400,000 in well work
decisions, plus $100,000 value of extra tests.
Industry: Transactional

Project: Human Resources -- employee productivity is reduced when employees
transfer to different jobs within the company.
Results:
1. Lost productivity due to failure of direct deposits, interrupted health care, large
out-of-pocket expenses, as well as lost access to computer/phone was calculated.
2. DMAIC process resulted in improved process flows and SOPs to include a transfer
binder, use of intranet, and pre-transfer training.
3. Annual survey instituted.
ROI (annualized): $479,700

Project: Upgrade Monoblock duplexer.

Industry: Automotive

Project: Crankshaft production failures.
Results:
1. Decreased the number of crankshaft failures from 15,000/million to 1,000/million.
2. Used Six Sigma tools to stratify the errors; most were due to grinding failures.
DMAIC decreased the interference caused by the design of the main axle pocket and
flange area.
3. Improved operator training.
ROI (annualized): $100,000

Industry: Pharmaceutical

Project: Pharmacist Information (PI) Outsert (shrink wrapped on outside).
Results:
1. PI Outsert bottleneck on the drug secondary packaging line was eliminated.
2. New adhesion process and automated cartoner installed.
3. Eighteen FTEs moved from this process to another process within the business.
4. A new, smaller, and less expensive outsert created.
ROI (annualized): $4 million (after the cost of capital equipment)

Other projects in these tables reported annualized ROIs of $90,000, $118,000 and
$100,000 (the last for a customer project).
Other examples of project case study savings we have seen as a result of our Lean
Six Sigma training and consulting support:
$292,000
$275,000
$309,000
$454,000
$650,000
$246,000
$578,000
$1,940,000
$58,000
$950,000
$950,000
$430,000
$280,000
$41,000
$963,000
$624,000
$1,015,000
$40,000
$150,000
$50,000
$120,000
$125,000
$86,000
$347,000
$1,012,385
$2,839,327
$130,000
$151,266
$125,000
Break-Even Analysis
In microeconomic theory, the cost of production of goods and services is generally
subdivided into two sub-costs: the fixed cost and the variable cost. While the variable
cost is proportional to the quantity of output, the fixed cost is incurred
regardless of whether anything is produced. In other words, even if the business does not
produce anything, it will be liable for the fixed costs.
The fixed cost in general would be the cost of the buildings, the machines, the electricity
and so on, while the variable cost would be composed of the labor cost and the cost of the
raw materials.
A Break-Even happens when the total revenue obtained from the sales of the goods or
services equals the total cost of production.
TR = TC
with TC = F + VN
and TR = PN
where N is the quantity of output sold, P is the unit price, F is the fixed cost, V is the
variable cost per unit, and TR is the total revenue.
So
F + VN = PN
Therefore
N = F / (P - V)
For profit to be made, the revenue must be greater than the total cost, therefore N must
be greater than the ratio of F/(P - V).
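The break-even quantity N = F/(P - V) translates directly into code. The fixed cost, price and unit variable cost below are hypothetical figures for illustration:

```python
def break_even_quantity(fixed_cost, price, variable_cost_per_unit):
    """Units that must be sold so that total revenue equals total cost:
    N = F / (P - V)."""
    contribution_margin = price - variable_cost_per_unit
    if contribution_margin <= 0:
        raise ValueError("price must exceed the variable cost per unit")
    return fixed_cost / contribution_margin

# Hypothetical figures: $80,000 fixed cost, $50 price, $30 variable cost per unit.
print(break_even_quantity(80_000, 50, 30))  # 4000.0 units
```

Selling more than N units yields a profit; selling fewer leaves the fixed cost only partly covered.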
An increase in the fixed cost relative to the total cost is risky because a small downward
shift in sales will result in a fall in profits.
Let us now consider the following example. The financial statement below summarizes a
computer repair company's accounting for the past quarter.
Revenue from sales: $450,000
Total Loss: ($73,000)
Total: $523,000

Fixed cost: $80,000
- Building space: $50,000
- Equipment: $30,000

Variable Cost: $443,000
- Labor: $360,000
- Parts: $83,000
- Utility Bills: $60,000

Total Cost: $523,000
The purpose of conducting a Break Even analysis is to generate actionable data, i.e. data
that can be used to improve management. In the previous example, if the fixed costs
cannot be altered, the actions that might be envisaged should affect the parts and/or the
labor. Labor cost can be cut through productivity improvement or overtime reduction.
One metric our computer repair company might consider would be the "cost per unit
repaired" since in this case it is what determines the profit level.
Example
What is the future value of $15,000.00 in 5 years if the interest rate is 5%?
Answer:
FV = PV x (1 + r)^n = $15,000 x (1.05)^5 = $15,000 x 1.27628 = $19,144.22 (approximately)
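The compound-interest computation behind this example can be sketched in a couple of lines:

```python
def future_value(present_value, rate, years):
    """Annual compounding: FV = PV * (1 + r) ** n."""
    return present_value * (1 + rate) ** years

# $15,000 at 5% per year for 5 years:
print(round(future_value(15_000, 0.05, 5), 2))  # 19144.22
```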
After defining the various tools and examining the case study of the utilization of Lean
Six Sigma projects, we now study case studies of companies that have made use of
Six Sigma to enhance productivity and minimize defects, chief among
them MOTOROLA and GENERAL ELECTRIC.
Example #1: I once conducted a project focused on reducing the cycle time to renew client
contracts. The project was sponsored by a Director of Operations who owned divisional
responsibility for account retention. This sounds good, but one key finding the project
team identified was the necessity for sales to change their process and behavior.
Unfortunately the sponsoring director had no authority over sales, who contributed 30%
of the cycle time. Needless to say, this project stalled at the Improve phase when the
sponsor was unable to get support from sales.
The end result was that instead of saving the company $250,000 per year in unbilled
revenue we cost the company $18,000 in project team resources. This is a classic case of
sponsorship at a level too low to execute necessary change.
Project sponsors also need to have an honest interest in the problem being solved. In
many organizations, projects are commissioned at a senior management level and parceled
out to mid-level managers to sponsor. In theory this sounds good; however, it does not
ensure project acceptance. The problem results from sponsors not having ownership of
the problem or confidence in a methodology they are ignorant of. Without strong sponsor
support, the project's progress will suffer. With properly positioned and committed
sponsorship, challenging projects can deliver significant benefits.
Example #2: While leading a Lean Six Sigma project for an external client with Senior
Leadership sponsorship, I encountered a roadblock between two Associate VPs. Each
Associate VP's organization held resources capable of delivering the service required for
this account. The labor costs for the team that initially delivered the service were 25
times higher than the customer was willing to pay. Additionally, travel was required for
them to meet with the client. Conversely, the local team possessed the capability to
deliver the work at half the cost and be onsite with the client without travel expense.
Having senior level sponsorship allowed the data to drive the decision on placement of
the work. Strong sponsorship produced a 90% reduction in the cost and cycle time to
produce each unit. This resulted in a $600,000.00 annual savings to the client and
$80,000.00 in annual revenue to my organization.
Scope:
Project failure is inevitable if the breadth of the project is too big. It is imperative
to establish clear starts and stops to minimize misinterpretation or confusion in the
minds of the stakeholders. If this step is not properly performed, the scope of a
project can simply be too broad. We call this "Boiling the Ocean."
A project's scope should be firmly established BEFORE the project is ever
launched. This will help avoid potential problems arising from out-of-scope
additions to the project. It also helps manage stakeholder expectations by clearly
communicating what will and won't be delivered. Many projects are completed
meeting or exceeding all goals and are still perceived as failures by stakeholders.
This occurs when the scope of the project has been misunderstood, so that the
stakeholders' and/or sponsors' benefit expectations were either unattainable or
never targeted in the project.
The project scope should only cover two to three Key Process Input or Output
Variables.
No one project can reasonably develop a process change with more than four Key
Process Output Variables. When the scope encompasses more than three KPOVs,
the project needs to be divided either into multiple concurrent projects or into
sequential, multi-generational projects.
Project Team
This is one of the most challenging and underappreciated areas in building the
framework for a successful project. Many projects have too many, too few,
or the wrong mix of representation on the team.
It is essential to have a good mix of skill sets, roles and personality types on the
project. Personality evaluation tools are excellent aids to gauge the cohesiveness
and synergy of a team. By using one of these tools, the prospective team
members from the project's stake-holding teams can be evaluated in advance. The
results can then be used to build the team with the best potential synergy. This
will improve the success of the team and the project.
The rank or position of the team members also plays a key part. I once led a project
with six team members, all at the Director and Associate VP level. While this
sounded like a great mix of players to produce change, the team had little
availability to participate in meetings let alone perform the project tasks. Due to
the amount of responsibility held by the team our cycle time and meeting
attendance were both severely hampered.
Time Table
Timing is everything. This age-old statement stands true in the Six Sigma
world today. When building a project team, little things like vacations, holiday
schedules and peak work cycles must be taken into consideration. Launching a
project for the accounting department at the end of the fiscal year will have some
obvious challenges. This may sound elementary, but when projects are identified
by leadership without local departmental visibility, these basic considerations are
often missed. Small issues can become great bottlenecks to minimizing the
project's cycle time.
By carefully and thoroughly executing these steps within the Lean and Six Sigma
methodology, substantial improvement in the project's cycle time, goal attainment and
acceptance can be achieved.
Now we study the case studies of various companies-
General Electric
Six Sigma has forever changed GE. Everyone -- from the Six Sigma
zealots emerging from their Black Belt tours, to the engineers, the auditors,
and the scientists, to the senior leadership that will take this Company into
the new millennium -- is a true believer in Six Sigma, the way this Company
1980s. While the late 1990s presented some tough challenges for
Motorola -- based largely on setbacks and competition in the cellular
and satellite telephone businesses -- the company seems to be turning
the corner in late 1999, with most areas back in the black.)
The results Motorola has achieved at the corporate level again
have been the product of hundreds of individual improvement efforts
affecting product design, manufacturing, and services in all its business
units. Alan Larson, one of the early internal Six Sigma consultants
at Motorola who later helped spread the concept to GE and
AlliedSignal, says projects affected dozens of administrative and transactional
processes. In customer support and product delivery, for
example, improvements in measurement and a focus on better understanding
of customer needs -- along with new process management
structures -- made possible big strides toward improved services and
on-time delivery.4
More than a set of tools, though, Motorola applied Six Sigma as a
way to transform the business, a way driven by communication, training,
leadership, teamwork, measurement, and a focus on customers.
AlliedSignal/Honeywell
AlliedSignal -- with the new name of Honeywell following its 1999
merger -- is a Six Sigma success story that connects Motorola and GE.
It was CEO Larry Bossidy -- a longtime GE executive who took the
helm at Allied in 1991 -- who convinced Jack Welch that Six Sigma was
an approach worth considering. (Welch had been one of the few top
managers not to become enamored of the TQM movement in the
1980s and early 1990s.)
Allied began its own quality improvement activities in the early
1990s, and by 1999 was saving more than $600 million a year, thanks to
the widespread employee training in and application of Six Sigma
principles.5 Not only were Allied's Six Sigma teams reducing the costs
of reworking defects, they were applying the same principles to the
design of new products like aircraft engines, reducing the time from
design to certification from 42 to 33 months. The company credits Six
Sigma with a 6 percent productivity increase in 1998 and with its
record profit margins of 13 percent. Since the Six Sigma effort began,
the firm's market value had -- through fiscal year 1998 -- climbed at a
compounded 27 percent per year.
Allied's leaders view Six Sigma as more than just numbers: "It's a
statement of our determination to pursue a standard of excellence
using every tool at our disposal and never hesitating to reinvent the way
we do things."
As one of Allied's Six Sigma directors puts it: "It's changed the way
we think and the way we communicate. We never used to talk about the
process or the customer; now they're part of our everyday conversation."
AlliedSignal's Six Sigma leadership has helped it earn recognition
as the world's best-diversified company (from Forbes global edition) and
the most admired global aerospace company (from Fortune).
terms of sustaining the improvements. BBNL is applying Six Sigma to processes for
timely complaint resolution, timely order implementation, timely invoice submission and
NOC complaint resolution.
When the teams started measuring critical business processes they found that the baseline
was not as per customer expectations. There were gaps of around 30-40 percent in some
processes. The baseline having been measured, targets were set for improving the
processes. After analysing defects, process improvement kicks in. Simplifying the
process instead of changing the entire process brings in the improvement. The tool
essentially requires fine-tuning the process and eliminating the steps that do not add value.
"When you are simplifying the projects, productivity goes up within the same resources,
thereby leading to optimum utilisation of the resources," says Juneja. One of the ways of
simplifying processes is to use IT for automating processes.
The executive committee continuously monitors the projects. There are monthly reviews
carried out by the Champion, Sponsor and IGE. A quality dashboard has also been
created, wherein every month performance is reported. The CEO and the COO monitor
whether the objectives are being met.
Benefits
In six months BBNL had improved timely complaint resolution by 66 percent from the baseline, timely order implementation by 70 percent, timely invoice submission by 51 percent and NOC complaint resolution by 49 percent. The Six Sigma process improvements have translated into
productivity enhancements, improved customer satisfaction and process effectiveness.
BBNL is targeting an estimated saving of around Rs 10 crore in the first year of
operation.
The target was to achieve 99 percent 'First Time Right' (i.e. approximately a four Sigma level) with respect to the respective set norms by March 2004 on all key critical processes.
Since Six Sigma is a continuous improvement initiative, the company will be undertaking
another business objective for the next financial year. On the future roadmap are Six
Sigma for all processes and higher E-SAT (employee satisfaction) and C-SAT (customer
satisfaction) index. BBNL plans to get almost 90 percent of the employees to be Green
Belts by 2005, with almost 100 percent of the employees to be involved in the Six Sigma
journey by the same time.
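The "99 percent First Time Right, approximately four Sigma" conversion quoted above follows the usual Six Sigma convention of adding a 1.5-sigma shift to the long-term Z value. A minimal sketch in Python (the function names are my own, not BBNL's):

```python
from statistics import NormalDist

def sigma_level(first_time_right_yield: float) -> float:
    """Convert a long-term yield into a short-term sigma level.

    Uses the common Six Sigma convention of adding a 1.5-sigma
    shift to the long-term Z value.
    """
    long_term_z = NormalDist().inv_cdf(first_time_right_yield)
    return long_term_z + 1.5

def dpmo(first_time_right_yield: float) -> float:
    """Defects per million opportunities implied by a yield."""
    return (1.0 - first_time_right_yield) * 1_000_000

# A 99% 'First Time Right' rate is ~10,000 DPMO,
# which works out to roughly four sigma with the shift applied.
print(round(dpmo(0.99)))            # ~10000
print(round(sigma_level(0.99), 2))  # ~3.83
```

This confirms the text's rounding: 99 percent yield sits just below the four-sigma mark.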
SIX SIGMA IN IT
"Eighty-five percent of the reasons for failure to meet customer expectations are related to deficiencies in systems and processes rather than the employee. The role of management is to change the process rather than badgering individuals to do better." - Dr. Deming
Several process improvement methodologies, such as Six Sigma, Total Quality Management (TQM), Quality Circles, Taguchi methods and statistical process control, are being successfully implemented in the manufacturing sector. It was perceived that such
Though these factors are true in some sense, the Six Sigma methodology can still be
applied to IT processes.
Software processes are admittedly difficult to measure, but it is not an impossible task.
Industry leaders like IBM and institutions like Software Engineering Institute have
designed and published many metrics for software processes for the benefit of the entire
industry. The Capability Maturity Model prescribes quantitative process management as one of the Key Process Areas at level 4. Plenty of books and other material is publicly available to help choose the right metrics. Six Sigma offers strong tools such as Quality Function Deployment (QFD), CTQ flow-down and other templates to convert the high-level voice of the customer (VOC) into measurable critical-to-quality parameters (CTQs).
Roughly 90 percent of the processes in a software services company are repeatable and can be improved with the DMAIC process improvement methodology. The DFSS methodology can be applied to the remaining 5-10 percent of processes, which involve creativity.
It is true that Six Sigma concepts evolved around the normal distribution, but Six Sigma tools can be easily adapted to handle processes with non-normal distributions.
Having discussed the arguments supporting the applicability of Six Sigma to IT
processes, let us make an attempt to understand the applicability of Six Sigma to the
processes that are an integral part of IT services.
Software Quality Assurance (SQA), Reviews, etc. are an integral part of the Quality
Management System of any IT service provider. The effectiveness of these core processes directly impacts the CTQ parameters. There is large scope for improvement in these processes in most IT companies, and Six Sigma can be deployed to improve them.
One of the key factors in deploying Six Sigma is identifying the Y metrics (dependent variables). For core processes this becomes simpler, since historical data for key
metrics such as review efficiency, review effectiveness, productivity, defect density,
schedule variance and effort variance are already available. After prioritization, critical
poor performing metrics can be taken as Six Sigma DMAIC projects.
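The Y metrics named above have simple operational definitions, though the exact formulas vary by organization; the versions below are common conventions, not any particular company's definitions:

```python
def review_effectiveness(review_defects: int, total_defects: int) -> float:
    """Share of all defects caught in reviews (vs. testing or the field)."""
    return review_defects / total_defects

def defect_density(defects: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / size_kloc

def schedule_variance(actual_days: float, planned_days: float) -> float:
    """Relative slip against plan; positive means the project is late."""
    return (actual_days - planned_days) / planned_days

# Illustrative project: 40 of 50 defects found in review,
# 25 KLOC delivered, 55 actual vs. 50 planned days.
print(review_effectiveness(40, 50))  # 0.8
print(defect_density(50, 25.0))      # 2.0 defects/KLOC
print(schedule_variance(55, 50))     # 0.1, i.e. 10% late
```

Metrics like these, trended over historical projects, are what get prioritized into DMAIC projects.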
The Six Sigma DFSS methodology can be applied to software development projects. Six Sigma in the SDLC helps make the software production process more predictable and ensures that all customer CTQs are met. Some Six Sigma tools that can be applied in this methodology are:
Failure Mode Effect Analysis (FMEA) is a tool that provides effective risk
management for the entire SDLC, and identifies the probable failure modes of
software at design phase. This initiates corrective action on the design.
The Pugh matrix enables a software developer or analyst to compare different concepts against customer CTQs and to create strong alternative concepts from weaker ones.
Scorecard is a predictive tool used for:
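FMEA, the first tool listed above, conventionally prioritizes failure modes by a Risk Priority Number (RPN = severity x occurrence x detection, each rated 1-10). A toy sketch with invented failure modes, not drawn from any real project:

```python
# Minimal FMEA sketch: rank design-phase failure modes by
# Risk Priority Number (RPN = severity * occurrence * detection).
# The failure modes and ratings below are purely illustrative.
failure_modes = [
    {"mode": "Incorrect input validation", "sev": 8, "occ": 5, "det": 3},
    {"mode": "Slow response under load",   "sev": 6, "occ": 7, "det": 6},
    {"mode": "Data loss on crash",         "sev": 10, "occ": 2, "det": 4},
]

for fm in failure_modes:
    fm["rpn"] = fm["sev"] * fm["occ"] * fm["det"]

# Highest RPN first: these attract corrective action at design time.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(fm["mode"], fm["rpn"])
```

Ranking by RPN is what "initiates corrective action on the design": the highest-risk modes are addressed before coding begins.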
The processes that are value enablers are equally important for consistently delivering the best quality service to customers. These processes include infrastructure and network services, resource management, HR processes, finance and accounting, training, the central quality organization, etc.
Efficiency and effectiveness of delivery support processes directly or indirectly
contribute to the productivity of core delivery processes. Processes like infrastructure and
network maintenance are extremely important for offshore development / BPO models.
Six Sigma DMAIC projects can be launched to improve any or all of the processes mentioned above.
Some Y metrics for delivery support processes are:
On time invoicing
Accuracy of invoicing
Network utilization
Training effectiveness
In effect, Six Sigma has a profound impact on the most critical resource in the IT industry, i.e. human resources.
The DFSS methodology as applicable for software processes cannot be directly mapped
to the DFSS methodology as implemented in manufacturing processes. In manufacturing, a product once designed is produced for years; in software development, by contrast, a design is manufactured (coded, to be precise) only once. This makes the application of DFSS in software development tougher. In a typical
manufacturing setup, the crux of DFSS lies in achieving manufacturability at Six Sigma
quality levels. For a manufactured product, the design budget might be flexible but in the
case of software solutions, the budget for design is very limited and all the CTQs must be
met in the given budget. The DFSS rigor ensures that the software is designed, coded and
approved with minimum rework.
The DMAIC methodology can be applied to improve the Product Quality Attributes of
existing applications, too. Many times, as the user base increases or the application is deployed in a global environment, response time degrades. Round-the-clock availability of the application has also become a critical issue in today's global work culture and BPO
scenario. DMAIC projects can be implemented to tackle such issues and find a cost
effective fix. Improving reliability measures like MTBF (Mean time between failures)
and MTTR (Mean Time to Repair) can be other focus areas of DMAIC improvement
projects.
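MTBF and MTTR combine into the standard steady-state availability formula, which makes the leverage of each improvement easy to see; a small sketch with illustrative numbers:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: uptime share = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative DMAIC outcome: cutting MTTR from 4h to 1h
# at a fixed MTBF of 500h lifts availability noticeably.
before = availability(500, 4)  # roughly 0.992
after = availability(500, 1)   # roughly 0.998
print(round(before, 4), round(after, 4))
```

Note the asymmetry this formula exposes: when MTBF is already large, shrinking MTTR is often the cheaper route to round-the-clock availability.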
CUSTOMERS' PROCESSES
Most IT companies provide End to End solutions to their clients and therefore enjoy a
long-term relationship with their customers. This has benefited the service providers in
acquiring significant domain knowledge. The consultants possess a fairly good amount of tacit knowledge about the clients' core business processes in addition to IT skills. Six
Sigma tools and techniques provide an excellent channel to develop a basis for solution
based consulting.
Six Sigma methodologies can help core business processes as well as IT processes.
Owing to the consultants' exposure to customers' processes through IT support, they are familiar with the best-functioning processes, those which are not operating efficiently and those which have reached entitlement. This enables prioritization of the relevant processes, and this prioritization of improvements makes the implementation of
Six Sigma easier.
PATNI'S APPROACH
Patni's Process Consulting Practice (PCP) offers customers a complete range of process-improvement solutions that cover the best of process/quality models and proven, applied methodologies and practices. PCP helps IT organizations move to newer levels of business excellence through incremental process improvements that are either
benchmarked against established models
Process Diagnostics
Customized solutions
Performance Improvement
Process Improvement
People Development
Patni has developed its own specific Six Sigma based methodology to execute
development projects and maintenance projects respectively.
CONCLUSION
Six Sigma can be successfully applied to the IT services industry, where human resources are a critical input
Better predictability
Though some of the processes in IT industry may not fall under normal
probability distribution, other quantitative and qualitative tools could be used to
improve the process.
resources, equipment, staff, scheduling and transport should all be considered for their
potential impact on departmental productivity.
Using the Six Sigma Methodology
With the general areas defined, the team then chooses one or more projects to focus on,
depending on the department size. Scoping is an important part of the process, since team
organization and effort can often become unmanageable when trying to coordinate more
than three project teams within a single imaging department. Projects are selected based
on initial findings and are chosen for their alignment with organizational goals and the
probability that they will produce results in terms of financial, quality and productivity
improvement.
During the Define and Measure phases, it is important to clearly identify critical
elements and obtain voice of the customer (VOC) information through stakeholder
interviews. Key performance indicators also are gathered, including exam volume, exam
duration and room utilization for all modalities; patient, referring physician and staff
satisfaction, and staffing to identify current operational performance relative to labor
expense, revenue and operational quality metrics. Financial data is pulled from existing
systems within the facility and cycle time data needs to be collected manually. Process
mapping and sub-process mapping with assigned indicators for selected modalities helps
to outline existing procedures within the department.
During the Analyze phase, the project team determines the most critical drivers that may
impact the process under examination. Analysis may reveal issues such as slow start-up
times in the mornings or scheduling conflicts with physicians. The information also may
indicate that a high volume of patients failing to show up for appointments is consuming
capacity, or that utilization fluctuates during the day due to bottlenecks in the system or
variability in patient arrival patterns. With analysis complete, the team then develops
action plans and recommends performance improvement opportunities that are aligned
with the organization's strategic objectives. Moving into the Improve phase often is the
most challenging, yet most rewarding part of the project. As process changes are actually
implemented, long-standing issues are finally addressed and better processes are put in place, changes that will ultimately improve the overall effectiveness of the facility's diagnostic imaging services.
The Control phase begins once process changes have been established and appear to be
working. The importance of monitoring results during this phase cannot be overestimated. This is one of the most critical keys to long-term success and a
differentiating element for Six Sigma. During this phase, control tools are implemented
such as dashboards or balanced scorecards to monitor key indicators and ensure that
project gains remain on track. It also is important during the Control phase to
institutionalize the wins, celebrate success and instill ongoing change management
capabilities through change management tools.
86
Increased satisfaction
Process improvement and workflow adjustments using Six Sigma and other tools can
have a measurable impact on cost and quality of services. Addressing additional areas
such as marketing and providing specialized training for technologists also helps the
diagnostic imaging department gain advantages in market share and accelerates their
return on investment for equipment such as CT scanners and MRI machines.
Diagnostic imaging departments must recognize and respond to new market realities. The
business of radiology in all its various forms is growing at a rate of 10 percent each year,
driven by an aging population and increasing demand for services. This growth is
continuing to strain the ability of healthcare organizations to maintain adequate services.
Reimbursement and compliance issues also present certain challenges, and the quest for
market share continues unabated. To survive and thrive in this environment, diagnostic
imaging departments and facilities must adopt strategies for increasing efficiency and
cost effectiveness.
Achieving optimal efficiency, service quality, customer satisfaction and financial success
in diagnostic imaging requires more than the installation of superior equipment and
information technologies. It also entails adopting a performance-improvement approach
that incorporates both a technical and cultural strategy to realize significant, long-term
results. Experience provides several additional recommendations for reaching and
maintaining Six Sigma levels of excellence in diagnostic imaging:
Develop a plan to accurately set and meet customer expectations
Focus adequate attention on project selection and scoping
Establish a clear understanding of current operations and direction
Determine and focus on key metrics and success indicators
Do not underestimate the importance of the Control phase
Just as the discovery of X-rays a century ago became the new light in medicine,
operational performance improvement continues to shed new light on opportunities for
streamlining and optimizing the services provided within diagnostic imaging.
[Figure: Six Sigma belt hierarchy. A Master Black Belt (thought leadership, Six Sigma expertise, mentoring of Green and Black Belts) oversees the Black Belts, who in turn guide the Green Belts.]
Yellow Belt
Six Sigma Online's Yellow Belt certification provides an overall insight to the
techniques of Six Sigma, its metrics, and basic improvement methodologies.
Green Belts
A lot has changed in the last few years, and businesses are slowly but certainly starting to
live in this new possibility. Of late, businesses that have provided Green Belt training to
existing employees have reported phenomenal success with Six Sigma implementation
projects, something that validates the effectiveness of this new concept. These businesses
have also reported a substantial reduction in the overall project implementation costs, the
reason being that they no longer have to pay the high fee demanded by Six Sigma Black
Belts.
Using this new concept is beneficial for all types of organizations, but it is the small and medium-sized enterprises that stand to gain the most, because they are the ones who often complain about the lack of adequate funds and resources, and as such find it very difficult to hire the costly services of Black Belts.
Black Belts
In most organizations, Six Sigma implementations are no longer limited to just a few
processes; they are now carried out across all the functional departments that might be
there in an organization including sales, finance, production and inventory. The aim is to
improve the overall efficiency of the organization rather than just focusing on a single
product or process. As such, the conventional role played by Black Belts is constantly
being redefined to include new dimensions such as managerial skills and the ability to get
things done within the specified time and costs.
Black Belts are expected to take on the role of a manager, who shoulders all the
responsibility related to a particular project or plan of action. When Black Belts play the
role of organizational managers, they are required to display their expertise in handling
all the related responsibilities such as gathering all the resources that might be required,
making the best possible use of organizational resources, creating implementation teams,
selecting implementation team members, defining roles and responsibilities, allocating
resources and making sure that everything is being done as planned.
The Master Black Belt is the highest level of Six Sigma technical mastery. Because only Master Black Belts can, and are required to, facilitate the training of Black Belts, they have to know everything the Black Belts do. In addition, they need to understand how mathematical theory underpins the statistical methods.
Six Sigma Master Black Belts must be able to help their lower-level counterparts (Black
and Green Belts) in applying the appropriate methods throughout the Six Sigma project
process.
Generally speaking, training in statistical methods should be done solely by Master Black Belts. Given the nature of the Master Black Belt's work, communication skills, as well as the aptitude to teach, should be deemed just as important as technical skill when evaluating candidates.
CASE STUDY
A case study is one of several ways of doing research. Rather than using large samples
and following a rigid protocol to examine a limited number of variables, case study
methods involve an in-depth, longitudinal examination of a single instance or event: a
case. They provide a systematic way of looking at events, collecting data, analyzing
information, and reporting the results. Case study should be defined as a research
strategy, an empirical inquiry that investigates a phenomenon within its real-life context.
Insurance return inventory typically represents five percent of the factory's ending
inventory forecast and is a significant financial holding. Floor space and capital tied
up in this category of inventory constrain the factory from positioning good
available inventory in finished goods to meet unplanned customer demand.
A noticeable build-up in field return inventory first led us to look at the insurance returns process at EMC. Recognizing inventory accumulation as one of the seven wastes of Lean, we formed a cross-functional team, led by a Green Belt, to try to
eliminate this waste at one of our plants. The primary goal of this Lean Six Sigma project
was to review and understand the existing insurance returns process at a single
manufacturing plant and the extent of variation within the process.
The objective of the project was to reduce the dollar holding on insurance inventory
by 50%. A secondary measure was to improve the cycle time it takes to process the
material.
The team embarked on the define phase of the project by first completing an IPO
diagram, which indicated insurance inventory levels and time spent in the insurance area
were the key outputs. The key inputs were the number of insurance claims,
types of claims, capacity for re-test, and storage space, among others. The detailed
IPO is listed below for reference.
Since many contributors to the process had visibility only to their own input and
lacked an understanding of inputs from other functions, the IPO diagram was a
useful tool in clarifying the input factors across the entire cross-functional process.
The team then performed a stakeholder analysis, identifying key stakeholders and
their level of readiness to change the process.
MEASURE
The first step in the Measure phase was to develop the current state process flow
which helped define the boundaries of what we were trying to improve. While
constructing the process flow, it became clear there were elements of the process
outside of manufacturing control, highlighted outside the red line in the process flow.
To facilitate progress of this project, we decided to treat these elements as out of
scope for this project.
The tools utilized up to this point in the project helped refine understanding, scope
and objectives of the initiative. However, at the measure phase it was also necessary
to gather some baseline quantifiable data around our primary and secondary
metrics.
We found that over the previous twelve-month period the level of insurance claim inventory held by the factory was consistently high. Throughput through this inventory pile was evident, as the annual figures for insurance claims were much higher than the residual inventory in the factory.
The team then looked at the secondary measure - the cycle time of insurance claims.
Baseline average cycle time was 330 days, with a standard deviation of 165 days, as
compared to an upper spec limit of 90 days. In fact, 75% of the insurance inventory
exceeded the 90-day goal for cycle time. The data also showed that our insurance
inventory was linked to 24 specific top level assembly units (TLAs), which in turn are
ANALYZE
The first stage of the Analyze phase was to revisit the process flow with a view to
identifying all forms of waste in the process. From the data in the measure phase, it
was obvious that there was wide variation, contributing to the poor performance of
the process. We now sought to identify the causes for this variation. To carry out the
analysis effectively the team focused on a few specific insurance claims, tracking
them through the process steps and observing their cycle times through each step.
We also noted inventory levels at each step and the catalysts that prompted
movement from one step to the next.
As visible in the process flow, waiting time was evident throughout the process with
insurance inventory often sitting idle, waiting to move to the next step in the
process. This occurred where ownership of input to the process crossed functional or departmental boundaries, and each such delay introduced greater opportunity
IMPROVE
The charter for the Improve phase was defined based on the outcomes of the Analyze phase, including:
Focus on the scrap routing where 73% of claims go.
Improve communication within the internal cycle to facilitate a better physical
flow and reduce waiting.
Avoid batch processing anywhere in the process.
Streamline hand-offs and clearly identify ownership throughout process.
Review or create service level agreements within each stage of the process.
Review the insurance assessors' requirement to physically keep inventory onsite following any audit recommendation to scrap.
Addressing the corrections to the information and process flow, we deployed a swim
lane approach to the process flow, clearly denoting where hand-offs occurred in the
process, both for information and physical product. This also enabled us to clearly
address the matter of ownership, as every stakeholder could now see the importance
of their input to the overall success of the process.
For most variable input, basic service level agreements were put in place to drive a
performance standard. For example, the initial inspection on claims was expected to
be completed and written up within fourteen days of receipt of goods. Metrics on all
the key gateways were also developed to ensure ongoing maintenance of standards.
Elements of the Lean principles were also introduced at this stage with the removal
of the batching of work in this process and an emphasis placed on one-piece flow.
Among other things, testing and inspection had been handled in a batch manner but
it was now agreed that each claim would be processed individually thus ensuring
minimal delays, enhanced flow and constant throughput.
The final element we needed to address for the routing was to optimize the
requirement on us as a factory to hold inspected inventory on our books awaiting
closure of the claim. At this point in the process, we would have completed all
required evaluations on the product and had decisive recommendations on the next
course of action for the material, however nothing was done with product until the
insurance claim was completed.
After discussion with the insurance assessors, they concurred that there was no real value added in our holding material indefinitely and that, given thirty days' notice of our intent, the assessor was agreeable to letting us process the material onwards with the
appropriate documentation. This meant that while the insurance claim could go on
for months, the inventory associated with it, once inspected and tested, could be
dispositioned without much delay (i.e. held for 30 days post inspection). While this
change in the process and some of the other general ownership and information flow
improvements benefited this focused routing specifically, it also would be
complementary to many of the other routings the remaining 27 percent of insurance
claims would go through.
CONTROL
Once the improvements were implemented the positive impact became very visible
in the primary and secondary metrics. Inventory holding was reduced by almost 55%
from levels before the project started.
For the secondary metric we used box plots to illustrate the change in the process. Three box plots were presented: the first showing the baseline data, the second the improved process with all claims considered, and the third the improved process with only the claims that followed the routing we chose to improve.
Overall there was a shift in the process mean from 330 days to 56 days, and the range was reduced and shifted downwards. P-values from two-sample t-tests and F-tests on the before and after data indicated that both the mean and the standard deviation had shifted significantly.
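The before/after comparison described here can be sketched with a hand-rolled Welch t statistic (for the shift in means) and a variance ratio (the F statistic, for the change in spread); the data below is illustrative only, not the project's:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(before, after):
    """Welch two-sample t statistic for a shift in means."""
    va, vb = variance(before), variance(after)
    return (mean(before) - mean(after)) / sqrt(va / len(before) + vb / len(after))

def f_ratio(before, after):
    """F statistic (sample variance ratio) for a change in spread."""
    return variance(before) / variance(after)

# Invented before/after cycle times in days:
before = [180, 250, 300, 330, 360, 420, 480, 520]
after = [30, 40, 50, 55, 60, 70, 80, 95]

print(round(welch_t(before, after), 1))  # large positive t: mean dropped
print(round(f_ratio(before, after), 1))  # F well above 1: spread reduced
```

In practice one would convert these statistics to p-values (e.g. with a statistics package) before declaring significance; large values like these correspond to very small p-values.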
The initial control plan is largely based around a monthly meeting chaired by the
inventory management group. This group will also maintain a tracking sheet of all
the current insurance claims, detailing the cycle time of each process step and the
inventory holding in each. An automatic report has been set up from the
manufacturing system to gather this information, so as to avoid creating any extra
work and reduce manual dependencies in the improved process. In the mid to long
term, the monthly meeting remains in place to facilitate rotation of personnel,
general health check on the process, any new developments and customer
satisfaction. Once confidence is built in the process control, we are looking to deploy
control charts to track the performance of the main steps.
The results of this project were aptly summarized by Bill O'Connell, Director of International Supply Chain at EMC, who said, "The gains made in this project by freeing up monies from non-value-adding inventory will really serve the business well in terms of customer experience and in providing flexibility. This project really proves the power and value in applying the basics of Lean Six Sigma, specifically tools such as process mapping, cause-and-effect analysis and standard operating procedures."
Six Sigma Case Study: Six Sigma's Role in Financial Services
[Table 1: Sub-processes and their process indicators - number of contacts per month; number of complaints from car dealers; car dealer turnover per month; percent abandoned calls and speed of answer in seconds.]
A key success factor of this exercise was to have the respective sub-process owners
contributing to pinpoint the specific problem areas.
From this pre-analysis, the team identified two Black Belt and two Green Belt projects
(Table 2). The distinction between Black Belt and Green Belt was made depending on the
urgency of the task. The reasoning was that full-time Black Belts could deal with the
problem faster and more effectively than part-time Green Belts.
[Table 2: Car loan sub-processes (Find New Car Dealers, Contract with New Dealers, Communication with Car Dealers, Loan Approval, Loan Collection, Customer Service) with their indicators (approval cycle time, application errors, customer complaints, phone handling, payment issues, account reconciliation, inquiries) and the projects selected: a Black Belt project for Loan Approval and Green Belt projects for Loan Collection and Customer Service.]
Defect Definition: Car Dealer with No Turnover Within the Last Three Months (Inactive Rate)
One of the biggest obstacles at this point was engaging the process owner in the project.
The car loan business process owner, the sales director, was one of the few managers who were very skeptical of Six Sigma. Additionally, the team was not used to working as a team: different office locations for marketing/sales and operations had led to a breakdown of communication between these functions. The first team meeting was a very quiet exercise, with both obvious and hidden finger-pointing.
However, the Black Belt did an excellent job in influencing the process owner, by helping
him to understand and see the benefit of Six Sigma.
Lessons Learned:
Make sure senior management buys in to Six Sigma first.
Show staff at all levels right from the start that Six Sigma is an imperative that
contributes to the strategy of the company.
Start Six Sigma implementation with needed projects rather than some "learning
and training projects."
During the Analyze phase, the team focused on those two issues. The team first decided
to examine the communication process between the sales team and the clients.
Surprisingly, it found that there was no process. The sales representatives complained
about the workload they faced every day. They were kept busy preparing reports,
making sales presentations and attending a lot of internal meetings. They did not really
focus on talking to their clients. One of the typical comments was: "If I have some time
left, I give my clients a call."
The analysis of the interest rate revealed an additional, even worse issue: Some of the
clients did not know the newly reduced interest rate of the bank.
The root cause for this serious fault was that the communication channel between
marketing and operations simply did not work well. Immediate action was taken to
inform all clients about the better rate.
Lessons Learned:
Do not assume the company knows what the customers want. Ask them.
Do not blame people for problems. It is the process which needs to be fixed.
The customer satisfaction survey phone calls had created a positive impact on the car
dealers, who perceived that the bank did value them as priority clients. Another reason for
business growth was the communication of the new rates, which were more aggressive
and competitive.
The Six Sigma team developed solutions for addressing the main problem root causes:
Development of a communication process between sales representatives and
clients.
Development of a monitoring tool to alert sales in case of client inactivity.
Refinement of the roles of marketing, sales and operations, resulting in less administrative work for sales personnel, in order to give them more time for their first priority: talking to clients.
Redefinition of internal interfaces to improve communication between
departments.
Production of a marketing handbook to support clients in selling the bank's
services.
Especially during the Improve phase, the presence and support of one of the car dealers
was essential. He gave the important input about how often and in what way he would
like to be contacted by the sales force.
The solutions required some financial investment. Getting approval was easier than the
team had thought it would be. The major factor was the data about the additional business
and about the decrease in the rate of inactive car dealers. The team used the data to
extrapolate the growth for a one-year period and compared that figure with the estimated
cost of the solutions. The sales director supported the solutions 100 percent.
To decide whether a deviation was critical or not, the team implemented a control chart.
This was built using the weekly inactive rate after the process had stabilized, a quarter
after implementing the changes.
Using a control chart for this purpose seemed questionable to some. At the beginning, a lot of education in how to use control charts was needed. It was stressed that the chart was just a measuring device; what mattered was how the team reacted to movements within or beyond the control limits.
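A control chart built on a single weekly rate is typically an individuals (I) chart, with limits set at the mean plus or minus 2.66 times the average moving range; a sketch with invented weekly inactive rates, not the bank's data:

```python
from statistics import mean

def individuals_limits(observations):
    """Control limits for an individuals (I) chart via the moving range.

    LCL/UCL = mean -/+ 2.66 * average moving range, using the
    standard constant for moving ranges of size 2.
    """
    moving_ranges = [abs(b - a) for a, b in zip(observations, observations[1:])]
    mr_bar = mean(moving_ranges)
    center = mean(observations)
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

# Illustrative weekly inactive rates (%):
weekly_rates = [12.0, 11.5, 12.4, 11.8, 12.1, 11.6, 12.3, 11.9]
lcl, center, ucl = individuals_limits(weekly_rates)
print(round(lcl, 2), round(center, 2), round(ucl, 2))

# A point outside [LCL, UCL] signals a change big enough to act on;
# points inside the limits are ordinary noise.
print(any(r < lcl or r > ucl for r in weekly_rates))
```

This is exactly the "measuring device" framing from the lessons learned: the chart itself changes nothing, it only tells the team when a deviation is worth reacting to.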
Lessons Learned:
- Measuring a process tends to change the behavior of people; measuring the right indicators tends to change the behavior in the right direction.
- Involving clients in project work normally builds a long-term relationship, with benefits for both the clients and the business.
- Understanding control charts means knowing they are primarily signals of when changes in the process are significant enough to require action.
After implementing the changes and after the results became obvious, Six Sigma gained
momentum within the bank. The start of further Six Sigma projects did not depend on a
push from senior management, but became more and more part of normal business. The
sales director showed his newly acquired commitment by proposing a Six Sigma team for
a reward-and-recognition event at company headquarters.
In addition to the increased profits, the results from this project included:
- The bank gained valuable information about the voice of the clients and their needs, and the impact of internal processes upon that.
- The team experienced the power of teamwork, communication and process analysis, not just the application of complex statistical tools.
- Additional improvement opportunities were identified during the project work, e.g., restructuring the client communication process in other business areas.
The Define Phase outlined the scope of the project and the finance group
projected a savings of more than $45,000 per month ($540,000 per year).
The Measure Phase established the current baseline data and identified a measurement system problem with one of the performance metrics. The measurement system was corrected, and several quick wins were identified for implementation.
The Analyze Phase identified potential root causes utilizing the operators' knowledge and then validated those that were preventing the process from achieving higher output. Three validated root causes accounted for over 90% of the capacity problem. The tools used included a simple Cause and Effect Diagram and several intermediate statistical tools, such as the t-test and ANOVA.
In the Improve Phase, the team developed solutions for each validated root cause, ranked the solutions and presented them to management, along with data-based information on each solution's impact. Management then made the decision on which solutions to implement.
Finally, in the Control Phase, the team developed a control plan to help the team
and process owner hold the gains made. The plan used Statistical Process Control
and auditing techniques to check a variety of indicators on an ongoing basis.
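The t-test mentioned in the Analyze Phase is used to check whether a candidate root cause produces a statistically significant shift in process output. Below is a minimal sketch with invented throughput figures (not the project's data), using a hand-rolled Welch t statistic so no external statistics package is needed:

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Two-sample t statistic for unequal variances (Welch's test)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    return (mean(sample_a) - mean(sample_b)) / ((va / na + vb / nb) ** 0.5)

# Hypothetical hourly output (lbs) with and without the suspected root cause present
with_cause = [610, 595, 602, 588, 600, 592]
without_cause = [648, 655, 640, 652, 645, 650]

t = welch_t(with_cause, without_cause)
# A |t| much larger than about 2 suggests the difference is statistically
# significant, i.e. the suspected cause really is holding output down
print(f"t = {t:.2f}")
```

In practice the statistic would be compared against the t distribution for a proper p-value; the sketch only shows the shape of the validation step.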
The Results of Applying the Six Sigma DMAIC Process:
The team exceeded the pounds-per-month goal by more than 3% (150,000 lbs/month).
The realization that the first category of problems was the one to be attacked (customer
focus) came spontaneously.
Then prioritization was done to select the most important problem using the weighted
voting system followed by a quick discussion to produce a consensus. The theme (CTQs)
selected was "Consistency of Quality and Timeliness".
1.2) The problem area: Within the theme, the management intuitively recommended a particular customer line. When asked to collect data for the different customer lines and present it, to their surprise they found that another major line had a bigger problem. This was the line selected. The realization of the importance of data-based decision making had begun!
1.3) Definition of the problem: Data (including errors) was collected for 30 days. During
this exercise it was realized that different auditors were classifying the same error in two
different ways, leading to measurement system discrepancies. This led to a
reclassification of the errors, and training of the auditors.
From the data then collected and analysed the problem was defined as follows:
Customer requirement: <50 ppm errors
Current process average errors: 510 ppm
Variability (sigma): 710 ppm
(Average + 3 sigma): 2640 ppm
Note: Errors were collected before rework to ensure that the root causes would be
exposed.
Problem definition: Reduce error density to assure 3-sigma quality under 50 ppm, down from the current 2640 ppm (i.e. a reduction of about 98%).
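As a quick check of the arithmetic behind the figures above:

```python
avg_ppm = 510     # current process average error density
sigma_ppm = 710   # process standard deviation (variability)
target_ppm = 50   # customer requirement at the 3-sigma limit

baseline_ppm = avg_ppm + 3 * sigma_ppm        # 510 + 3 * 710 = 2640 ppm
reduction_needed = 1 - target_ppm / baseline_ppm

print(baseline_ppm)               # 2640
print(f"{reduction_needed:.0%}")  # 98%
```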
2. Finding The Vital Few To Attack
The errors collected were categorized using a Pareto diagram. Prioritization was required
at three levels:
Level 1: Four categories, C1 to C4 - one category (C1) constituted 85% of the errors
Level 2: C1 split into four categories, C11 to C14 - one category (C11) constituted 98% of the errors
Level 3: C11 split into four categories, C111 to C114 - one category (C111) constituted 85% of the errors
Category C111 was attacked as it constituted approximately 65% of the total problem.
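The three-level drill-down above is a standard nested Pareto analysis. A minimal sketch of the mechanics, using hypothetical error counts (the case reports only the percentages, not the raw counts):

```python
def pareto_shares(counts):
    """Return (category, share-of-total) pairs sorted largest first."""
    total = sum(counts.values())
    return sorted(((cat, n / total) for cat, n in counts.items()),
                  key=lambda pair: pair[1], reverse=True)

# Hypothetical level-1 counts in which C1 dominates, as in the case study
level1 = {"C1": 850, "C2": 80, "C3": 45, "C4": 25}
for category, share in pareto_shares(level1):
    print(f"{category}: {share:.0%}")
```

The same function is then applied again to the sub-categories of the dominant category (C1 into C11..C14, then C11 into C111..C114) until the vital few are isolated.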
3. Idea Formulation For Countermeasures
Seven error types were found in C111, falling into two broad categories. Each was examined to determine why it could have occurred, and possible countermeasures were brainstormed. The measures most likely to "kill the problems" were selected for trial implementation.
4. Idea Testing And Modification
The selected countermeasures were analyzed and tested for each error type, and the successful ones were short-listed for implementation.
5. Implementation Of Countermeasures
Training instructions were prepared for the new procedures and all the operators were
trained. Implementation of all the countermeasures was done across the system from a
particular date.
6. Confirming The Results
The team was trained in control charts, and X-bar and sigma charts were introduced to monitor the results. A dramatic reduction occurred from the day of implementation, and the first three weeks confirmed that a drop of 90% in error density had been achieved, from 2640 ppm to around 300 ppm.
Tremendous enthusiasm was generated in the team, as the results of this project far exceeded their expectations.
7. Maintenance Of Improvement - Continuous Small Improvements
Standard operating procedures (SOPs) were drawn up for the process changes. A special session was held with the operating personnel, emphasizing regular review and the killing of any abnormal peaks that might appear in the control chart. An SOP covered the frequency of review meetings for each level of supervision and management, and a review format was introduced. The line supervisor who was part of the team became the enthusiastic owner of quality and the control chart, as well as the leader of the team charged with maintaining quality and continuously improving it. The slogan "If you do not improve, you deteriorate" was introduced.
This effort gradually brought the (average + 3 sigma) error density down further, from 300 ppm to <50 ppm.
The Quality Improvement (QI) Story
A QI Story was prepared for presentation to senior management detailing the
improvements that occurred:
Tangible
- Customer delight: The customer consistently reported 100% quality in his sampling over six months; he could not find errors at such a low density.
- Productivity and cost: Inspection and rework were reduced to almost zero, with 99.7% first-pass efficiency, and sampling sizes were reduced. These changes resulted in savings of US $50,000 per annum at Indian wage levels (equivalent to about US $300,000 per annum at US wage levels).
- Volume increase: The customer increased volumes by approximately 50%, and the extra production went through without increased manpower.
- Turnaround of the documents improved dramatically, owing to the absence of rework, and began meeting customer requirements.
Intangible
- Senior management time saved
- Motivation of the operations personnel very high
Future plans for improving the turnaround by 50% using just-in-time methods are now being implemented.
Conclusion: Six Sigma - Techniques and Mind-set
The case emphasizes that the implementation of Six Sigma techniques must be accompanied by building a culture and mind-set of continuous improvement and change in all employees. In the author's view and experience, it is the creation of synergy between people and techniques that ensures maximum and continuing benefits from a Six Sigma/TQM initiative.
Six Sigma case study : Syndicate Bank saves its way to success
A centralised banking system has helped Syndicate Bank save crores of rupees
"WE were looking for a solution to improve the productivity of the bank and its
customers. It was never about an IT process," declares B S Murthy, general manager, IT,
Syndicate Bank.
The bank had two alternatives:
- Implement core banking by carving out a small area in existing branches while keeping the existing branch automation system intact, so that only new customers would get into the core banking set-up.
- Go in for a wholly-centralised system.
Turnkey project
KPMG interacted with the bank's business council at this stage. The decision was taken
to go in for a core banking system (CBS) where the bank would totally migrate to the
new system as it was felt that splitting into two systems would pose too many accounting
issues.
"We had 72 vendors bidding for the contract," says Murthy. That number came down to a
dozen, and from that to six, then three. A detailed evaluation procedure was laid down
before selecting Flexcube from i-flex.
"From the start we were clear that only one consortium would be responsible for the
entire project." The IBM-i-flex consortium was picked for the CBS deployment. "It was a
turnkey project, including core banking, ATM, telebanking, Internet banking and cash
management. We worked with i-flex, HMA, Servion and CashTech on this project," says
Sriram. At the peak of the project, between IBM and i-flex, there were 25 project
executives interacting with the bank.
Multicity pilots
The project began in August 2001. The first branch went live on December 15, 2001.
"The first branch went live in three-and-a-half months. We had to go live in that time with
all the basics. We identified seven critical bits of customisation and went live with i-flex
core banking. Time was the biggest challenge, as it normally takes six to eight months,"
says S Sriram, associate partner at IBM Business Consulting Services.
Pilots were undertaken at six branches in six weeks by January 2002. The first was in
Mumbai, which accounted for two of the six pilots; Delhi accounted for another two
while Bangalore and Manipal had one each. "Branches from different cities were picked
out deliberately to ensure uniform technology adoption across the country, while WAN
issues were ironed out in the pilot itself. Normally, pilots are undertaken in a single city,"
says R Narasimhan, senior relationship manager, i-flex solutions.
After this came the launches of new delivery channels. ATMs were launched in February
2002. The telebanking launch went live in July 2002, followed by Internet banking in
January/February 2003.
The core team consisted of 20 to 30 people. It was trained by i-flex and IBM; the training included product training and IBM AIX training. The bank organised training on Oracle. The core team then began training end-users, so the acceptance level was higher. Training in setting up servers and other similar tasks is being given to bank employees; Syndicate Bank is keeping all these functions in-house. There are plans to
employees; Syndicate Bank is keeping all these functions in-house. There are plans to
decentralise the helpdesk operations by setting up a helpdesk in Delhi, in addition to the
Bangalore operation. "The biggest challenge is to retrain 3,000 officers. We have a 45-seat set-up in Manipal that is dedicated to CBS training. We run 42 programmes in a year, accounting for 1,800 officers," adds Murthy. There is a second training set-up in Delhi.
Single window
For customers, the big boost has been the introduction of ATMs. Murthy cites the
example of an ATM that was installed at the Central Railway Workshop in Matunga,
Mumbai. 4,000-plus employees at the railway workshop now use this ATM, and thanks to
it they no longer spend an hour or two visiting the nearest branch.
The bank also captures a greater amount of information about its customers today. "Data
enrichment has been done to enable cross-selling. We collect more details about a
customer than we used to," says S R Vijayakumar, the bank's deputy general manager, IT.
Existing business processes were revamped through a BPR (business process re-engineering) exercise that involved doing away with the 'Maker Checker' concept that was part and parcel of the old branch automation set-up.
The bank is now in a position to offer a single-window facility to its customers. Staffing
requirement went down by as much as 40 percent, driving down space requirements. The
deployment of a CBS has helped the bank launch a number of new products. Its global
debit card has been a smash hit, with the bank signing up 1,50,000 customers in 11
weeks. Other new products and features introduced include any-branch banking and
telebanking.
The bank's helpdesk generates a branch- and zone-wise report of the number of new
accounts at all the branches every fortnight. This is then faxed to zonal heads. "Earlier,
there was a lag in this process, it used to take a month for the information to reach the
head office for consolidation. Today we are in a position to send the July statement on
August 3," says Murthy.
The system also returns a list of non-performing assets (NPAs), and the credit department is the first to know of these. This makes monitoring and response a lot faster.
Six Sigma case study : A Black Belt Case Study for Bank Deposits
In early 2000, dot-coms were all the rage. Any idea even remotely related to the ability to
transact online was immediately funded. Consequently, many decisions were made
quickly and without supporting data. And many of these decisions were made in error -but could Six Sigma have made a difference? This case study will review how a Black
Belt entered a dot-com transactional business, reviewed a process, and came to his own
conclusions about process performance.
Case Study: Online Banking
The Black Belt began working at an online bank, and his first project involved the process of
how deposits were made to this bank. Since it was an "online" bank, there were no
branches for customers to use. Instead, deposits were mailed using the United States
Postal Service (USPS). Savings resulting from the lack of branches and tellers were
passed along to the customer in the form of higher rates, free services, etc.
Customer focus groups and surveys indicated that the process of making a deposit is of
critical importance to a customer. The process from the customer's viewpoint is very
straightforward -- they sign a check, fill out a deposit slip, and mail both to the bank.
Deposits were the second largest driver of inquiries to the customer call center (13% of all calls). Customers expressed frustration with mailing delays and couldn't understand why their checks took so long to post to their accounts.
The Deposit Process
The bank's mission was to receive the deposits as quickly as possible and begin the
deposit and check clearing cycle. When the bank originally set up the processes, a
decision was made to establish 'local' deposit locations around the United States. These
local deposit locations received the deposits and overnight express reshipped them to a
central processing location daily.
This local receipt and express reshipment to a central location was done for two main
reasons:
1. A deposit being mailed to a local location would take less time than mailing to a
centralized, national location.
The express reshipment process was manual. Manual processes that are not reinforced daily and that do not have adequate control plans tend to break down. That is exactly what occurred with the local deposit locations. Some locations wouldn't receive deposits on a daily basis. When deposits were received, they sometimes wouldn't be express reshipped that night because the process was not ingrained.
For deposits that were received during the week, the express reshipment process
functioned properly. On the weekend, however, express reshipment wasn't
possible, so deposits arriving on Saturday were not express reshipped until
Monday evening.
Because of USPS processes, some deposit mailings to 'local' deposit locations took as long as three days. Tack on a weekend hold, as described above, and you can see how a deposit mailed to a 'local' deposit location could take longer than five days just to be received by the bank.
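Putting those pieces together, a back-of-the-envelope check of the worst case (assumed figures: three USPS transit days, a Saturday arrival held until Monday evening, then overnight express):

```python
# Assumed worst-case components of the 'local' deposit route, in days
usps_transit = 3        # slowest observed USPS delivery to a local location
weekend_hold = 2        # deposit arrives Saturday; next reshipment Monday evening
overnight_express = 1   # express reshipment to the central processing site

total = usps_transit + weekend_hold + overnight_express
print(total)  # 6 days before the central site even receives the deposit
```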
Additional Findings
An additional analysis of deposits made to a 'local' deposit location with express
reshipment to a national location versus mailing directly to a centralized, national
location yielded the following results:
The 'local' process operates at a 2.1 sigma level, while the centralized, national process operates at a 2.5 sigma level.
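A process sigma level is conventionally tied to its defect rate through the normal distribution with the customary 1.5-sigma long-term shift. The case does not give the defect counts behind the 2.1 and 2.5 figures, so the sketch below simply applies that textbook convention:

```python
from statistics import NormalDist

def dpmo_at(sigma_level):
    """Defects per million opportunities at a given short-term sigma level,
    assuming the conventional 1.5-sigma long-term shift."""
    return (1 - NormalDist().cdf(sigma_level - 1.5)) * 1_000_000

def sigma_level_of(dpmo):
    """Inverse conversion: sigma level implied by a DPMO figure."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# Comparing the two deposit processes from the case
print(round(dpmo_at(2.1)))   # roughly 274,000 defective opportunities per million
print(round(dpmo_at(2.5)))   # roughly 159,000 defective opportunities per million
```

Under this convention, the 0.4-sigma gap between the two routes corresponds to roughly 115,000 fewer defects per million opportunities for the centralized process.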
Conclusions
It didn't take further data collection to convince the leaders of the business to modify their deposit process and move to a centralized, national process. The facts spoke for themselves. Cost savings from printing envelopes with a single address (instead of numerous local addresses), reduced processing overhead, fewer customer inquiry calls and investigations, and a more stable process added up to $4MM per year. Not bad for a six-month Black Belt project.
PROJECT CONCLUSION
Six Sigma is a philosophy, the application of which results in breakthrough improvements. The use of Six Sigma helps us control input and process variations, which yields a predictable product. The basic processes involved in Six Sigma are measure, analyze, improve and control; applying them leads to solutions that are highly predictable. Six Sigma is therefore a tool of quality assurance to the customers. It is not only a tool for providing great quality to customers but also a tool for cost reduction, meeting customer specifications and achieving business targets. Six Sigma helps us develop and deliver near-perfect products and services.
Six Sigma has been used in many manufacturing and business organizations for manufacturing defect reduction, cycle time reduction, cost reduction, inventory reduction and labor reduction. Besides all this, it also leads to increased utilization of resources, improved product sales, capacity improvements and delivery improvements. Six Sigma is thus a very efficient tool for developing a product while minimizing its defects. Its implementation not only yields better quality products and greater customer satisfaction but can also lead to increased employment in a company.
The use of Six Sigma can also lead to the development of new products, besides increasing the quality of existing ones. Six Sigma design techniques provide greater flexibility for product development and faster turnaround at key manufacturing facilities. A company can therefore very well use Six Sigma to improve its services for customers: it helps the company deliver error-free services by doing the job right the first time, every time, and provides the company with better effectiveness and efficiency.
Through the various case studies we came to know that the implementation of Six Sigma enhances productivity and thus results in large savings for the company. Hence, the use of Six Sigma results in increased profit for a company and provides much better financial returns. It is thus a comprehensive management system which helps in strengthening the position of a company in the market and enhancing its profitability. It is an innovative improvement concept which consistently tracks and compares performance against customer requirements and an ambitious target of practically perfect quality. Six Sigma thus generates sustained success for a company. It creates the skills and culture for the constant renewal of an organization, sets a performance goal for the organization, and provides a methodology for achieving it. Six Sigma accelerates the rate of improvement, provides value to the customers, and makes monitoring and response much faster.
REFERENCES
The following books and references helped us during the development of the project.
Breyfogle, Forrest W. Implementing Six Sigma. New York: John Wiley & Sons, 1999. (Complete reference for advanced Six Sigma statistical tools; uses Minitab for examples.)
Juran, J.M. and Frank M. Gryna. Quality Planning and Analysis. New York: McGraw-Hill, 1993. (Excellent concise summary of statistical tools for analyzing data.)
Harry, Mikel J. The Vision of Six Sigma. Phoenix: Sigma Publishing Company, 1994. (Presentation slides used for training by Mikel Harry.)
Kiemele, Mark J. Basic Statistics. Colorado Springs: Air Academy Press, 1993. (Excellent general statistics text with a large number of case studies.)
Womack, James P. and Daniel T. Jones. Lean Thinking. New York: Simon & Schuster, 1996. (Case studies for lean principles and techniques.)