Journal of Management & Organization
http://journals.cambridge.org/JMO
Journal of Management & Organization / Volume 21 / Issue 02 / March 2015, pp 237 - 262
DOI: 10.1017/jmo.2014.77, Published online: 14 January 2015
Abstract
It is often claimed that 50–90% of strategic initiatives fail. Although these claims have had a
significant impact on management theory and practice, they are controversial. We aim to clarify
why this is the case. Towards this end, an extensive review of the literature is presented, assessed,
compared and discussed. We conclude that while it is widely acknowledged that the implementation of
a new strategy can be a difficult task, the true rate of implementation failure remains to be determined.
Most of the estimates presented in the literature are based on evidence that is outdated, fragmentary,
fragile or just absent. Careful consideration is advised before using current estimates to justify changes in theory and practice. A set of guiding principles is presented to assist researchers in producing better estimates of the rates of failure.
INTRODUCTION
The Business Policy field was founded at the start of the 20th century (Rumelt, Schendel, & Teece, 1994; Hambrick & Chen, 2008) and strategic management was formally born in the 1960s
(Amitabh, 2010), when Chandler (1962), Ansoff (1965) and Learned, Christensen, Andrews, & Guth
(1965) published their pioneering books. Since then, strategic management has gone through several stages
(Ansoff, Declerck, & Hayes, 1976; O’Shannassy, 2001), taken many forms (Mintzberg, Ahlstrand, &
Lampel, 1998) and changed profoundly. One of the most challenging and unresolved problems in this area is
the ‘apparently high’ percentage of organisational strategies that fail, with some authors estimating a rate of
failure between 50 and 90% (e.g., Kiechel, 1982, 1984; Gray, 1986; Nutt, 1999; Kaplan & Norton, 2001;
Sirkin, Keenan, & Jackson, 2005). By failure we mean that either a new strategy was formulated but not implemented, or that it was implemented but with poor results. This is a simple definition but still consistent
with the three features of a successful implementation as defined by Miller (1997): (1) completion of
everything intended to be implemented within the expected time period; (2) achievement of the performance
intended; and (3) acceptability of the method of implementation and outcomes within the organisation. It is
also consistent with the planned and emergent strategy modes. In both strategy modes, strategy may or may
not be completed, may achieve different degrees of performance and its acceptability may also vary.
CEFAGE, Évora, Portugal; and Faculty of Economics, University of Algarve, Faro, Portugal
Corresponding author: [email protected]

The difficulty of successfully implementing new business strategies has long been recognised in the literature (e.g., Alexander, 1985; Wernham, 1985; Ansoff & McDonnell, 1990), and a 1989 Booz Allen study (cited by Zairi, 1995) concluded that most managers believe that the difficulty of implementing strategy surpasses that of formulating it. As an example, the study found that 73% of managers believed that implementation is more difficult than formulation; 72% that it takes more time; and 82% that it is the part of the strategic planning process over which managers have least control.
In order to understand the reasons behind failure and improve the success rate of implementation,
several researchers have provided comprehensive sets of implementation difficulties (Alexander, 1985;
Wernham, 1985; Ansoff & McDonnell, 1990; O’Toole, 1995; Beer & Eisenstat, 2000; Cândido &
Morris, 2000; Hafsi, 2001; Miller, Wilson, & Hickson, 2004; Sirkin, Keenan, & Jackson, 2005;
Hrebiniak, 2006; Gandolfi & Hansson, 2010, 2011; Cândido & Santos, 2011). Many researchers –
some of whom following on from the inspiring work of Lewin (1947/1952) – have also proposed
integrated frameworks for strategy formulation and successful implementation (e.g., Ansoff &
McDonnell, 1990; Gioia & Chittipeddi, 1991; Baden-Fuller & Stopford, 1994; Kotter, 1995;
Hussey, 1996; Galpin, 1997; Johnson & Scholes, 1999; Calori, Baden-Fuller, & Hunt, 2000;
Cândido & Morris, 2001). Some others have adopted a different approach and decided to empirically
test the impact of these frameworks and of their success factors (e.g., Pinto & Prescott, 1990; Miller,
1997; Bauer, Falshaw, & Oakland, 2005; Bockmühl, König, Enders, Hungenberg, & Puck, 2011).
Several major debates in the literature (Eisenhardt & Zbaracki, 1992) have also contributed to the
advancement of possible solutions to the implementation problem, namely those around the rationality
of the strategy formation process (Fredrickson & Mitchell, 1984; Fredrickson & Iaquinto, 1989; Dean
& Sharfman, 1993; Papadakis, Lioukas, & Chambers, 1998); the accidental, evolutionary or natural
selection approaches of strategy (Alchian, 1950; Cohen, March, & Olsen, 1972; Nelson & Winter,
1974; Hannan & Freeman, 1977; Aldrich, 1979; March, 1981; Van de Ven & Poole, 1995); the rate,
rhythm or pattern of organisational change (Dunphy & Stace, 1988; Weick & Quinn, 1999); the
incremental or emergent additions to intended strategy (Mintzberg & Waters, 1985; Mintzberg, 1987;
Quinn, 1989); the idiosyncratic nature of each individual strategic decision (Mintzberg, Raisinghani,
& Théorêt, 1976; French, Kouzmin, & Kelly, 2011); the impact of top management team compo-
sition and relationships between members (Hambrick & Mason, 1984; Naranjo-Gil, Hartmann, &
Maas, 2008; O’Shannassy, 2010); the alternative management styles and strategic change methods
(Hart, 1992; Stace & Dunphy, 1996; Johnson & Scholes, 1999; Balogun & Hailey, 2008); the
distinction and relationships between strategy process, content and context (Pettigrew, 1987; Barnett
& Carroll, 1995); and also the ‘less rational’: political, cultural, behavioural, learned and even symbolic
aspects of effective strategic change (Cyert & March, 1964; Carnall, 1986; DeGeus, 1988; Senge,
1990; Gioia & Chittipeddi, 1991; March, 1997; Nonaka, 2007; Goss, 2008).
Although remarkable progress has been made in the strategic management field, the problem of
strategy implementation failure persists, and it is still an important and ongoing concern for researchers
and practitioners (Mockler, 1995; Barney, 2001; Hickson, Miller, & Wilson, 2003).
One of the most important challenges in this area is probably to discover how to ensure successful
implementation. A useful first step in this direction is to assess what the real scale of the problem is.
This assessment is important for three main reasons. The first is that currently both researchers and practitioners seem to assume that the rates of failure are very high. Considering that
some of the high estimates have been used to guide some of the research and practice on strategic
management, an assessment of the extent to which they provide an accurate and up-to-date account of
the problem of strategy implementation failure is required. This is particularly relevant as some of the
estimates presented in the literature have played an important role on the adoption or abandonment of
some management tools by practitioners, and on the choice of topics researched by academics.
Therefore, a rigorous assessment of the extent of the problem can assist decision makers to make better
informed decisions on strategies to adopt and on topics to research.
A second reason in favour of this assessment is that it will allow us to determine whether the failure rates
estimated over the years show any particular pattern or trend. This can be an important finding as it might
indicate important changes in the way strategies have been implemented over the years, changes in the
nature of the strategies or changes in the way implementation success has been measured. Therefore,
identification of clear patterns or trends in the results can open several avenues for research. In particular, it
can be an important catalyst for research on the reasons behind the patterns observed.
The third main reason is that the percentage of strategies that fail is a controversial issue, as no one seems to know what the real rate of failure is. By reviewing and discussing relevant literature,
this research provides a clear and comprehensive understanding of the nature of this problem, so that
the factors contributing to it can be identified and properly addressed. In doing so, this research
exposes the need for, and lays the foundations of, a clear protocol to guide researchers in the process of estimating more rigorously the rates of failure in strategy implementation. The development of this
protocol is fundamental to assist managers and researchers in making better judgments of the value of
strategy types, implementation approaches and management instruments.
This paper therefore aims to contribute to the discussion on the estimation of strategy implementation
failure rates. In particular, we aim to show that the current state of affairs in the field of strategic
management does not allow a single robust estimate of the failure rate of strategy implementation to be
provided. In line with this objective, we also aim to suggest a template for a protocol that can help
researchers develop better measures of strategy implementation failure rates. To this purpose, an extensive
review of the literature on strategy implementation failure rates is presented and scrutinised.
In pursuit of this research agenda, the remainder of this paper is organised into several sections. It
starts by discussing the research methodology and the process we have followed to address the
objectives of this paper. It then addresses the issue of what the rate of strategy implementation failure
is. A discussion of the literature dealing with this issue ensues and evidence is presented that supports
the conclusions we have reached. The paper concludes by deriving implications for the literature and
practice on strategy implementation.
websites of major consulting companies and on several national library on-line catalogues (England,
United States, Ireland, Scotland, Canada, Australia and Portugal), which allowed the identification of
some additional and relevant studies. Unfortunately, however, some of these studies were not available
for consultation and it was not possible for us to gain access to either a hard or an electronic copy.
Interestingly, many of these unavailable studies were authored by consulting companies (Arthur D.
Little, 1992; A.T. Kearney, 1992; Prospectus Strategy Consultants, 1996) and were abundantly
quoted, even by reputed academic researchers. Finally, we also contacted by e-mail the consulting companies, the individual authors of the reports (when their names were publicly available) and the authors who have quoted those studies. In total, more than 45 e-mails were sent. In spite of all the efforts made to
obtain copies of the studies, most of these efforts proved unfruitful. Many of the companies and authors
contacted replied, but we did not succeed in obtaining the required information either because the studies
were no longer available (e.g., A.T. Kearney, A.D.L., Prospectus) or because the companies were unable to
assist individuals with specific research requests (e.g., B.C.G., McKinsey).
Therefore, the literature reviewed in this paper includes all the academic studies that have met the
search criteria above and the consultancy studies that were relevant and available for consultation. The
latter account for 45% of the studies analysed. The results of this search are presented and discussed in
the next section.
Strategy implementation

TABLE 1.

Study: Kiechel (1982, 1984)
Method: Interviews carried out in the period of 1979–1984 with theoreticians, corporate executives and consultants from most of the major consulting firms. Complementary analysis of case studies and of the history of the strategic management field. No research instrument explained and no other information on methodology presented
Variable: Perception of the percentage of companies that can implement a strategy successfully
Rate of failure: 90%
Observations: Kiechel (1982, 1984) is cited by researchers such as Mintzberg (1994) and Kaplan & Norton (2001). We have searched the websites of some of the companies interviewed by Kiechel (A.D.L., B.C.G., McKinsey, Bain and others), looking for any studies that the consultants interviewed might have been quoting, and have also sent e-mails to these companies asking for a copy(b)

Study: Gray (1986)(a) and Judson (1991)(a) (Gray-Judson-Howard Inc.)
Method: Questionnaire applied to chief executive officers, corporate planning directors and business unit heads, all with substantial experience in strategic planning in American multi-business corporations of the service and manufacturing industries and in American Government Agencies. Population and sampling method not clearly specified. Sample of 300 respondents. Further details obtained through 14 Executive Seminars of a day-and-a-half with the participation of 216 of those respondents. Descriptive statistics only
Variable: Perceived failure of strategic planning systems, because of difficulties in the implementation phase
Rate of failure: 51–90%
Observations: Based on Gray's (1986) results, we have calculated the rate of failure: 87% × 59% = 51%. Judson (1991) quotes a rate of failure of around 90%, which was based on Gray's (1986) 87% estimate

Study: Nutt (1987)
Method: Case studies of 68 strategic planning projects in hospitals and other third-sector organisations from Canada and United States. Sample not random. Data collected from interviews with key executives and company documents. Additional interviews were made to secure an agreement between interviewees' recall of events, existing documents and a written description of the process prepared by the researcher. In case of lack of agreement the […]
Variable: Strategic projects abandoned, rejected or shelved
Rate of failure: 30%
Observations: According to the author, the sample is not random and a 'positive bias may be present because organisations that participate voluntarily are likely to share information about practices and ideas they believe to be high quality'. This positive bias may have resulted in an underestimated rate of failure

[…] on methodology available. Descriptive statistics only

Study: Unknown, cited by Sirkin, Keenan, & Jackson (2005)(a) (BCG)
Method: na
Variable: na
Rate of failure: 67%
Observations: Sirkin, Keenan, & Jackson (2005) do not provide sufficient information to identify the original empirical study or studies. We asked all of the authors by e-mail to indicate the complete references for the studies(b)

Study: McKinsey (2006)(a)
Method: On-line survey. Sample size 796 from a worldwide panel of executives responsible for finance or strategy in organisations from a wide range of industries, each with a revenue of at least 500 million USD. No other information available on sampling and methodology. Descriptives only
Variable: Perceived effectiveness of the strategic plan
Rate of failure: 28%

Study: Jørgensen et al. (2008)(a) (IBM Global Business Services)
Method: Survey of 1,532 practitioners – project leaders, sponsors, project managers and change managers – from companies of 15 different nations in the world and in 21 different industries. Sample included practitioners from companies with more than 100,000 employees (14%), between […] Middle East, Africa, Latin America and the Caribbean region. No other information on methodology available. Descriptive statistics only
Variable: Number of projects that did not miss any of the objectives (time, budget and quality goals)
Rate of failure: 59%
Observations: According to the study, 59% of the projects 'failed to fully meet their objectives: 44% missed at least one (time, budget or quality goals), while a full 15% either missed all goals or were stopped by management'

TABLE 2.

Study: Golembiewski et al. (1981, 1982)(a) and Golembiewski (1990)
Method: Analysis of 574 case studies from the period 1945–1980, covering the private (53%) and the public sector (47%), as well as United States (83%) and non-US settings (17%). Published and unpublished data sources were both […]
Variable: Probability of successful intervention based on researchers' codification (multiple variables)
Rate of failure: 14% (global)–30%
Observations: Golembiewski (1990) and Golembiewski et al. (1981, 1982) focus on the rate of success of Organisational Development interventions. They estimated two rates of success, one using a global assessment (86% success) and another […]

[…] Descriptives only. Given the sample size, rigorous conclusions were not reached

Study: Park (1991)
Method: Analysis of 151 case studies from the period of 1978–1988. Published and unpublished sources of data. Internal documents of 45 organisations and interviews with 50 practitioners and researchers. All cases coded in terms of 30 outcomes variables. Interrater reliability of 97%, assessed for 20% of the cases. Descriptives and statistical tests
Variable: (No. of positive outcomes – no. of negative outcomes)/total no. of variables examined ≥ 50%
Rate of failure: 40%
Observations: Park (1991) focused on quality circles programmes. We calculated the failure rate using the data provided in the paper as the average for the public and the private organisations

Study: Unknown, cited by Jantz & Kendall (1991)(a) (Arthur D. Little)
Method: na
Variable: na
Rate of failure: 90%
Observations: The rate of failure refers to the proportion of new consumer products that fail. Jantz and Kendall (1991) do not provide sufficient information to identify the original empirical study. We were unable to establish contact with Jantz or Kendall(b)

Study: A.T. Kearney (1992)(a)
Method: Survey. Sample of over 100 British firms (according to The Economist, 1992)
Variable: na
Rate of failure: 80%(c)
Observations: This study has been abundantly cited, for example, The Economist (1992), Wilkinson, Redman, & Snape (1994) and Soltani et al. (2005). Unfortunately, we could not find a copy. We searched A.T. […]

Study: […]
Method: […] success rate and factors contributing to effective mergers and post-merger integration. Critical review of the consulting companies' studies and comparison with academic research. Descriptives only
Observations: […] levels of the studies by consulting companies, the 'broad pattern of results reported … does not differ much from that found in the finance/business academic field'. In fact, '[f]ailure rates for mergers in the range of 35% to 60% are common in academic studies' (Pautler, 2003)

Study: Taylor & Wright (2003)
Method: Longitudinal study over a period of 5 years of a cohort of 109 TQM firms in Northern Ireland. Postal questionnaire mailed to CEOs or Managing Directors. Further details obtained through 25 follow-up interviews. Stratified sample representative in terms of sector and size of firm. Descriptives and statistical tests associated with research hypotheses
Variable: Perceived TQM outcome/success
Rate of failure: 41%
Observations: The rate of failure corresponds to the proportion of unsuccessful and of discontinued TQM programmes

Study: Hackett Group (2004a, 2004b)(a)
Method: Survey of more than 2,400 client organisations from several countries, including 93% of the Dow Jones Industrials, 80% of the Fortune 100 and 90% of the Dow Jones Global Titans Index. No other information on […]
Variable: Perceived maturity degree reached by the initiative
Rate of failure: 73%
Observations: The Hackett Group focused on the development of a mature balanced scorecard. 'Mature' meaning that it does not have too many metrics, does not focus heavily on historical finance data and has enough forward-looking […]

Study: Lawson, Stratton, & Hatch (2006, 2008) (sponsored by professional and consulting organisations)(a)
Method: International on-line survey conducted in 44 countries and in eight different languages. Methods used to solicit survey participation were different across countries, but everyone could participate. Sample size of 382 companies, divisions and subsidiaries. Of these 382, 193 (51%) […]
Variable: Perceived benefits to the organisation
Rate of failure: 37%
Observations: Lawson, Stratton & Hatch (2006, 2008) focused on the implementation of the balanced scorecard

[…] that most UK organisations were in the early stages of developing a total approach to quality, that is, in the beginning of implementation.
DCs = developed countries; IVJs = international joint ventures; LDCs = less-developed countries; na = information not available; TQM = Total Quality Management.
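The 51% lower bound attributed to Gray (1986) and Judson (1991) in Table 1 is obtained by combining two survey percentages multiplicatively (87% × 59% = 51%). A minimal sketch of that arithmetic follows; the function name and rounding convention are ours for illustration, not from the original studies.

```python
# Sketch of the failure-rate arithmetic reported for Gray (1986) in Table 1:
# two survey proportions are combined multiplicatively into one estimate.
# The function name and rounding are illustrative assumptions.

def combined_failure_rate(p_first: float, p_second: float) -> float:
    """Multiply two proportions to obtain a combined failure rate."""
    return p_first * p_second

rate = combined_failure_rate(0.87, 0.59)  # 0.5133
print(f"{rate:.0%}")  # prints "51%"
```

Note that this multiplicative composition is only valid under the assumption that the two percentages refer to nested subgroups of the same sample, which is how the calculation in Table 1 treats Gray's figures.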
Carlos J F Cândido and Sérgio P Santos
Strategy Consultants, 1996; Hackett Group, 2004a, 2004b; Dion, Allday, Lafforet, Derain, & Lahiri,
2007). Although we were unable to assess the scientific rigour of some of these studies, as it was not possible to obtain details regarding the robustness of the research methodologies used and the results achieved, it has long been recognised that some overestimation may have been committed by consulting firms (Powell, 1995). Overestimated failure rates can be used to the advantage of consulting
firms, namely as a marketing strategy to convince customers of the importance of adopting their
services (Powell, 1995). Second, the results in the tables seem to suggest a downward trend in the estimates of failure, indicating that the percentage of strategic initiatives that fail has decreased over
time (see Figure 1), a likely result of the scientific progress made in this field over the past two decades
and its inclusion in business education programmes. In particular, the identification of obstacles to
strategy implementation and a better understanding of the ways they interact with each other, made by
both researchers and practitioners (e.g., Alexander, 1985; Ansoff & McDonnell, 1990; Kotter, 1995;
Beer & Eisenstat, 2000; Kaplan & Norton, 2001) might have played an important role in improving
the rates of failure over the years. Therefore, although some of the higher estimates could have been
appropriate and reflect the true dimension of the problem one or two decades ago, they are likely to be
outdated nowadays. This is also the case because time since adoption of a new strategy contributes to a
better internalisation of the elements of that strategy and consequently to a better performance (Powell,
1995; Prajogo & Brown, 2006). Considering that some strategies and some management tools have
been in practice for a long time, it is likely that familiarity with these strategies and tools has increased
leading to the accumulation of knowledge and, consequently, to more successful implementations
(Taylor & Wright, 2003). Several other explanations can be offered for the downward trend in the estimates of failure. For example, companies may follow successful early adopters, benefiting from
their experience, thus resulting in the improvement of failure rates. Companies may also have become
more aware of the need to carefully customise new strategies or management tools to their char-
acteristics and to the contexts in which they operate, instead of blindly adopting general undiffer-
entiated strategies and tools. Independently or in combination each of these factors might help explain
the apparent improvement in failure rates.
It seems therefore reasonable to assume that the current rates of failure are well below some of the
estimates often quoted in the literature. However, if this is the case, what is then the real percentage of
strategies that fail? Although there have been several studies on this issue in the past two decades, our
view is that the current state of affairs does not allow a robust estimate to be provided. Several reasons
can be advanced for this.
First, the studies discussing the success/failure rate of strategy implementation vary considerably in
the amount of effort put into the estimation of the rate. In some of these studies, the estimation
of the rate of failure/success was their main objective (e.g., Golembiewski, 1990; Park, 1991;
Wilkinson, Redman, & Snape, 1994; Pautler, 2003; Makino, Chan, Isobe, & Beamish, 2007). In
other studies, this objective was part of a broader research agenda (e.g., Beamish, 1985; Voss, 1988,
1992; Taylor, 1997; Nutt, 1999; Walsh, Hughes, & Maddox, 2002; Taylor & Wright, 2003;
McKinsey, 2006), while in others the rates of success/failure were presented as complementary
information in an introduction or as an aside (e.g., Gray, 1986; Harrigan, 1988a; Hall, Rosenthal, &
Wade, 1993; Mohrman, Tenkasi, Lawler, & Ledford, 1995; Lewy & Mée, 1998a, 1998b; Sila, 2007).
The effort put into the estimation of the rate in each study is reflected in the complexity of the computational method used. In some studies, the computation is very simple
(e.g., Beamish, 1985; Harrigan, 1988a; Sila, 2007), while in others it is much more complex and
demanding (e.g., Golembiewski, 1990; Park, 1991).
Second, these studies are not easily comparable because the criteria used to define success/failure are very different and can, consequently, account for some of the differences between estimates. It is
possible to distinguish between ‘technical success’ and ‘competitive success’ (Voss, 1992), between
‘success as process ease’ and ‘success as process outcomes’ (Bauer, Falshaw, & Oakland, 2005) and,
similarly, between ‘implementation success’ and ‘organisational success’ (Hussey, 1996; Mellahi &
Wilkinson, 2004). Some of the higher estimated failure rates may reflect a stricter definition of success adopted by researchers.
Estimates of technical success and success as process ease may be higher than estimates of success as
process outcomes or organisational competitive success in the marketplace, since more internal and
external contingencies can affect the latter types of success. In Tables 1 and 2, we have reported mainly
failure rates from a ‘competitive success’ or an ‘organisational success’ perspective. Even so, the studies in the
tables are not easily comparable because in some cases researchers relied on managements’ perceptions to
derive an estimate of success/failure (e.g., Beamish, 1985; Gray, 1986; Voss, 1988, 1992; Taylor & Wright,
2003), whereas in others, they used more objective measurements (e.g., Golembiewski, Proehl, & Sink,
1981, 1982; Golembiewski, 1990; Hall, Rosenthal, & Wade, 1993; Pautler, 2003; Makino et al., 2007).
Furthermore, some studies have used a single criterion to define success/failure (e.g., Gray, 1986; Walsh,
Hughes, & Maddox, 2002; Sila, 2007), whereas others have used multiple criteria (e.g., Golembiewski,
1990; Park, 1991; Wilkinson, Redman, & Snape, 1994; Mohrman et al., 1995).
Third, different studies have used different research strategies to estimate the rate of success/failure of
strategy implementation. Some researchers have adopted a case study approach (e.g., Voss, 1988, 1992;
Hall, Rosenthal, & Wade, 1993; Lewy & Mée, 1998a, 1998b; Nutt, 1999). Others have employed a
survey method (e.g., Beamish, 1985; Wilkinson, Redman, & Snape, 1994; Mohrman et al., 1995;
Walsh, Hughes, & Maddox, 2002; McKinsey, 2006; Makino et al., 2007; Sila, 2007), while still
others have used a combination of methods (e.g., Gray, 1986; Harrigan, 1988a; Charan & Colvin,
1999; Taylor & Wright, 2003). It is well known that, while some research strategies allow statistical generalisations to be made, others, such as case-based research, allow only analytical generalisations.
Fourth, the unit of analysis varies considerably from one study to another. Some researchers have
considered as their unit of analysis a single project, such as developing a new product or launching
quality circles, which may be seen as part of wider strategic initiatives (e.g., Nutt, 1987, 1999; Voss,
1988, 1992; Park, 1991; Lewy & Mée, 1998a, 1998b; Hackett Group, 2004a, 2004b; Lawson,
Stratton, & Hatch, 2006, 2008). Other researchers have focused on business-wide strategic initiatives,
which may in turn be decomposed into several smaller projects (e.g., Kiechel, 1982, 1984; Harrigan,
1988a, 1988b, 1988c; Mohrman et al., 1995; Walsh, Hughes, & Maddox, 2002; Pautler, 2003;
McKinsey, 2006; Sila, 2007).
Fifth, some studies prove very difficult to obtain/access, in particular those undertaken by some
management consulting firms such as A.T. Kearney, Arthur D. Little, McKinsey, Prospectus and Booz
Allen Hamilton. Therefore, any conclusions drawn from the estimates they have produced without a proper understanding of the context, methodology and results obtained might lack legitimacy and
scientific rigour. In spite of this, it is common to find researchers (e.g., Holder & Walker, 1993;
Mintzberg, 1994: 25, 284; Smith, Tranfield, Foster, & Whittle, 1994; Zairi, 1995; Dow, Samson, &
Ford, 1999; Korukonda, Watson, & Rajkumar, 1999; Kaplan & Norton, 2001: 1; Walsh, Hughes, &
Maddox, 2002; Sterling, 2003) that quote the results of these studies not because they have read the
original work but because these estimates have been quoted by other researchers or in well-known
journals such as The Economist or The Wall Street Journal. Unfortunately, this has led some of these studies to be widely misquoted and misunderstood (Taylor, 1997).
Finally, it is not always easy to distinguish what is fact and what is fiction in some of the estimates
offered in the literature. In particular, there appear to be no scientific grounds behind some of the
estimates. For example, Mintzberg (1994: 25, 284), Kaplan & Norton (2001: 1), Burnes (2004,
2005), Raps (2005) and Sila (2007) quote several sources for the rates of failure they mention in their
papers (e.g., Kiechel, 1982, 1984; Judson, 1991; Dooyoung, Kalinowski, & El-Enein, 1998; Beer &
Nohria, 2000; Waclawski, 2002; Sirkin, Keenan, & Jackson, 2005). However, a detailed analysis
of these sources shows that they did not carry out an estimation of the quoted rates of failure. They
claim their estimates were based on ‘Interviews’, ‘Studies’, ‘Experience’, ‘The Literature’ or ‘Popular
Management Press’, rather than on solid empirical evidence. On other occasions, the sources of the
estimates are incorrectly interpreted (e.g., Kaplan & Norton, 2001, in interpreting the findings of
Charan & Colvin, 1999). We also found evidence of studies incorrectly identifying their sources (e.g.,
Dyason & Kaye, 1997) and studies not identifying their sources at all (e.g., Jantz & Kendall, 1991;
Neely & Bourne, 2000; Becer, Hage, McKenna, & Wilczynski, 2007).
Unless these factors are accounted for, any attempts to present estimates for the real success/failure
rates of strategy implementation are doomed to fail or are of little practical value.
Independently of the ‘real’ success/failure rate, and despite success rates that seem to have improved
over time, it is reasonable to conclude that the number of strategic initiatives that fail is still con-
siderably higher than would be desirable. This suggests that organisations either need better imple-
mentation guidelines or need to make better use of the existing ones. The need for better
implementation processes has been widely acknowledged by researchers (e.g., Dean & Bowen, 1994;
Mockler, 1995; Barney, 2001; Hickson, Miller, & Wilson, 2003) and research on how to avoid
implementation obstacles and improve implementation has been underway for many years (e.g.,
Stanislao & Stanislao, 1983; Alexander, 1985; Ansoff & McDonnell, 1990; Kotter, 1995; Beer &
Eisenstat, 2000; Miller, Wilson, & Hickson, 2004; Stadler & Hinterhuber, 2005). It is, therefore,
imperative to assess the extent to which these guidelines account for some of the improvements
achieved as well as to understand the reasons why so many initiatives still fail.
Although efforts should be made to reduce failure rates, it is important to emphasise that failure can
be seen as an important part of the strategic learning process within organisations (e.g., Mintzberg,
1987; Krogh & Vicari, 1993; Sitkin, Sutcliffe, & Schroeder, 1994; Edmondson, 2011). Unintended
past mistakes and deliberate strategic experiments can both generate useful lessons (Wilkinson &
Mellahi, 2005), which may prove highly advantageous in the marketplace (Krogh & Vicari, 1993).
CONCLUSION
Business strategy implementation has long attracted the interest of researchers and practitioners.
Although it is often quoted that 50–90% of strategic initiatives fail (e.g., Mintzberg, 1994: 25, 284;
Kaplan & Norton, 2001: 1), an exhaustive analysis of the literature on strategy formulation and
implementation suggests that some of the evidence supporting these figures is outdated, fragmentary,
lacks scientific rigour or is simply absent. Much of the uncertainty surrounding this issue also stems
from the mixed results obtained by different studies. These findings are important to the
field of strategy and change management in two different ways. First, they add to the discussion of the
appropriateness of the failure rates proposed by some studies, which has attracted interest in recent
years but on which much remains to be done. To the best of our knowledge, only two studies
explicitly address this issue: a paper by Cândido and Santos (2011), which focuses on total quality
management failure rates, and a paper by Hughes (2011), which questions the assertion that ‘70 per
cent of all organisational change initiatives really fail’. Our research, however, departs from these
previous studies in important ways. While the former focused on the implementation of a specific
business strategy (i.e., total quality management) and the latter confined its analysis to five selected
papers, none of which presented evidence to support the claim they made, our study is much broader
in focus and more comprehensive in its analysis. We scrutinise the implementation of both general
and specific business strategies and carry out an extensive review of all the studies that discuss
strategy implementation failure rates. In so doing, we have found that the range of variation in the
estimates is remarkable, spanning from a failure rate as low as 7% to one as high as 90%. Several
factors can help explain why there is such variation in the estimates produced, including
possible overestimation, exposure of organisations to different contextual and environmental factors
and differences in the concepts used to define success/failure and in the samples and methodologies
adopted. These differences can be attributed to several factors, one of the most important being the
lack of a comprehensive review of the relevant literature by some of the studies. This has prevented the
authors of these studies from becoming aware of the state of the art on the topic and, consequently,
adopting concepts and methods consistent with previous research. Another important explanation for
the differences mentioned above relates to the fact that the research objectives vary considerably
between studies. Some studies have established the estimation of the rate of failure as their main goal,
whereas in others, the estimation of this rate has played a less important role. This, in turn, has
affected the sophistication of the methodology and of the criteria adopted for the calculations.
A third factor explaining the differences in the criteria and methods used to estimate
failure rates relates to the use that is intended for these rates. While academic researchers are likely to be
more interested in the study of a particular type of implementation approach/tactic, strategy, or even
management instrument (such as balanced scorecard or total quality management), practitioners are
likely to be more interested in the promotion of a specific kind of consulting service. Finally, the fact
that the literature does not offer a clear research protocol to be followed when the objective is to
estimate the rate of failure in the implementation of strategic initiatives also plays a fundamental role
in explaining the differences between studies. Given the exceptionally broad range of estimates
produced as a result of the factors mentioned above, quoting them in generic terms may have little
more than academic value. This conclusion should also be seen as a warning against the use of the current
higher estimates of rates of failure (70–90%) to justify any course of action, whether in research or in
management practice.
Another important contribution of this research to the literature is that it exposes the need for and
lays the foundations of a protocol that guides researchers in the process of estimating strategy
implementation failure rates. This feature distinguishes our work from the two studies previously
discussed. In what follows, we propose a template for such a protocol, aimed at enhancing the
comparability between estimates and increasing their predictive capability. This protocol should be
regarded, however, as a starting point for discussion rather than as a complete proposal. When the
objective is to estimate strategy implementation failure rates, the protocol comprises five principal
aspects, which we discuss in turn.
First, it is important to accurately characterise the context of the study. In particular, it is
fundamental that relevant organisational factors (e.g., firm size, sector of operation, ownership,
management style) and environmental variables (e.g., economic, social and cultural context) that
might impact on the degree of success or failure of a strategy are clearly identified and discussed.
It is well known that such contingency factors can affect the success or failure of a strategic
initiative; knowing them is therefore important to enhance comparability between estimates and to
design tailor-made implementation guidelines.
Second, once the context has been established, the actual types of the business strategies being
assessed must be carefully detailed. Considering that the process of implementing different strategies
can have quite different outcomes within the same organisational and/or environmental context, it is
critical to clarify which type of strategy is being analysed. Besides this, it is important to clarify whether
the study is assessing the success/failure of modifications of existing strategies or the implementation of
whole new strategies and whether it is focused on transactional or transformational changes.
Third, it is fundamental to establish a clear and consistent definition of ‘failure’ or of ‘success’.
Although a universally acceptable definition of strategy implementation failure is not compulsory, a
clear definition is nonetheless important for methodological consistency as it will ensure a common
understanding of what is being assessed and enhance comparability between studies. As part of this
definition, it is fundamental to specify the intended outcomes of the implementation process, the
measurable indicators of these outcomes and the specific target levels to be attained for these
indicators in order for an implementation to qualify as a success (or as a failure).
Fourth, the research methodology used to estimate the rate of failure has to be clearly discussed.
Considering that a crucial aspect in estimating the degree of success/failure of strategy implementation
is the ability to identify and quantify the outcomes of the process, and that different research strategies
have often distinctly different methods for data collection and analysis, it is important that these
methods and their assumptions are properly discussed. Information regarding the reliability and
validity of the measurement instruments must also be provided to allow for an independent assessment
of their methodological rigour and consistency.
Finally, as in any carefully conducted research, weaknesses of the analysis that may limit the ability to
generalise its conclusions to other contexts must be identified and characterised. The identification of these
weaknesses and the provision of suggestions on how to address them is a key step towards improvement.
Adherence to this protocol is imperative for a better understanding of the reasons behind the
different estimates produced and for deriving more robust estimates of the rates of strategy
implementation failure. Only in this way will we be able to identify the real scale of the problem and
plan appropriate corrective actions. Unless it is properly understood whether some strategic
initiatives are more difficult to implement than others, whether there are sectors or areas of activity
where strategies are more difficult to implement, and whether there are cultural issues and other
contextual factors that explain the differences in the estimates produced, the mere quotation of these
estimates will be of little practical value.
It is important to acknowledge, however, that considerable progress has been made on this topic in
the last two decades, and that the lower failure rates recently estimated by some researchers might be a
consequence of these advances. Nevertheless, a number of issues still require further understanding to
better guide research and practice on this topic.
First, it is important to understand whether the rates of failure are context-dependent. The fact that
the estimates produced are so different might suggest context-dependence, indicating that
implementation should be tailored in accordance with the characteristics of the organisations and/or
of their environment.
Second, it is important to understand whether the apparent improvement in success rates is in fact a
verified tendency and the extent to which each of the possible explanations advanced here has
contributed to this improvement (e.g., scientific progress in the fields of strategy implementation and
change management, better management education programmes, time since adoption of a strategy
and familiarity with it, and accumulation of knowledge, particularly from the experience of early
adopters and customisation of general strategies). The identification of best practices resulting from
this line of research might also play an important role in further promoting successful implementation.
ACKNOWLEDGEMENT
The authors thank two anonymous referees for their insightful comments and helpful suggestions.
The authors are also pleased to acknowledge the financial support from Fundação para a Ciência e a
Tecnologia (SFRH/BSAB/863/2008), FEDER/COMPETE (grant PEst-C/EGE/UI4007/2011),
Faculdade de Economia, Universidade do Algarve, and Newport Business School, University of Wales.
References
Alchian, A. A. (1950). Uncertainty, evolution, and economic theory. Journal of Political Economy, 58(3), 211–221.
Aldrich, H. E. (1979). Organizations and environments. New Jersey: Prentice-Hall.
Alexander, L. D. (1985). Successfully implementing strategic decisions. Long Range Planning, 18(3), 91–97.
Amitabh, M. (2010). Research in strategy-structure-performance construct: Review of trends, paradigms and methodologies.
Journal of Management and Organization, 16(5), 744–763.
Ansoff, H. I. (1965). Corporate strategy. New York: McGraw Hill.
Ansoff, H. I., Declerck, R. P., & Hayes, R. L. (1976). From strategic planning to strategic management. New York:
John Wiley & Sons.
Ansoff, H. I., & McDonnell, E. (1990). Implanting strategic management. New York: Prentice Hall International.
Arthur D. Little (1992). Executive caravan TQM survey summary. Cambridge, MA: Arthur D. Little Corporation.
Baden-Fuller, C., & Stopford, J. M. (1994). Rejuvenating the mature business: The competitive challenge. Boston, MA:
Harvard Business School Press.
Balogun, J., & Hailey, V. H. (2008). Exploring strategic change. Harlow: Pearson.
Barnett, W. P., & Carroll, G. R. (1995). Modeling internal organizational change. Annual Review of Sociology, 21(1),
217–236.
Barney, J. B. (2001). Is the resource-based ‘view’ a useful perspective for strategic management research? Yes. Academy of
Management Review, 26(1), 41–56.
Bauer, J., Falshaw, R., & Oakland, J. S. (2005). Implementing business excellence. Total Quality Management, 16(4), 543–553.
Beamish, P. W. (1985). The characteristics of joint ventures in developed and developing countries. Columbia Journal of
World Business, 20(2), 13–19.
Becer, E., Hage, B., McKenna, M., & Wilczynski, H. (2007). Performance-improvement initiatives – Three best practices
for project success. New York: Booz Allen Hamilton. Retrieved from www.boozallen.com.
Beer, M., & Eisenstat, R. A. (2000). The silent killers of strategy implementation and learning. Sloan Management
Review, 41(4), 29–40.
Beer, M., & Nohria, N. (2000). Cracking the code of change. Harvard Business Review, 78(3), 133–141.
Bockmühl, S., König, A., Enders, A., Hungenberg, H., & Puck, J. (2011). Intensity, timeliness, and success of
incumbent response to technological discontinuities: A synthesis and empirical investigation. Review of Managerial
Science, 5(4), 265–289.
Burnes, B. (2004). Kurt Lewin and the planned approach to change: A re-appraisal. Journal of Management Studies,
41(6), 977–1002.
Burnes, B. (2005). Complexity theories and organizational change. International Journal of Management Reviews,
7(2), 73–90.
Calori, R., Baden-Fuller, C., & Hunt, B. (2000). Managing change at Novotel: Back to the future. Long Range Planning,
33(6), 779–804.
Cândido, C. J. F., & Morris, D. S. (2000). Charting service quality gaps. Total Quality Management, 11(4–6), 463–472.
Cândido, C. J. F., & Morris, D. S. (2001). The implications of service quality gaps for strategy implementation. Total
Quality Management, 12(7/8), 825–833.
Cândido, C. J. F., & Santos, S. P. (2011). Is TQM more difficult to implement than other transformational strategies?
Total Quality Management, 22(11), 1139–1164.
Carnall, C. A. (1986). Managing strategic change: An integrated approach. Long Range Planning, 19(6), 105–115.
Chandler, A. D. (1962). Strategy and structure: Chapters in the history of the American industrial enterprise. Cambridge:
The MIT Press.
Charan, R., & Colvin, G. (1999). Why CEOs fail. Fortune, 139(12), 68–78.
Cohen, M. D., March, J. G., & Olsen, J. P. (1972). A garbage can model of organisational choice. Administrative
Science Quarterly, 17(1), 1–25.
Corboy, M., & Corrbui, D. (1999). The seven deadly sins of strategy implementation. Management Accounting, 77(10),
29–30.
Cyert, R. M., & March, J. G. (1964). The behavioral theory of the firm: A behavioral science – Economics amalgam.
In W. W. Cooper, H. J. Leavitt, & M. W. Shelly (Eds.), New perspectives in organizational research. New York:
John Wiley & Sons, 289–384.
Dean, J. W., & Bowen, D. E. (1994). Management theory and total quality management: Improving research and
practice through theory development. Academy of Management Review, 19(3), 392–418.
Dean, J. W., & Sharfman, M. P. (1993). Procedural rationality in the strategic decision-making process. Journal of
Management Studies, 30(4), 587–610.
DeGeus, A. P. (1988). Planning as learning. Harvard Business Review, 66(2), 70–74.
Dion, C., Allday, D., Lafforet, C., Derain, D., & Lahiri, G. (2007). Dangerous liaisons, mergers and acquisitions:
The integration game, Report by Hay Group, Philadelphia, USA, pp. 1–16. Retrieved from www.haygroup.com.
Dooyoung, S., Kalinowski, J. G., & El-Enein, G. A. (1998). Critical implementation issues in total quality
management. Advanced Management Journal, 63(1), 10–14.
Dow, D., Samson, D., & Ford, S. (1999). Exploding the myth: Do all quality management practices contribute to
superior quality performance? Production and Operations Management, 8(1), 1–27.
Doyle, M., Claydon, T., & Buchanan, D. (2000). Mixed results, lousy process: The management experience of
organizational change. British Journal of Management, 11(3), S59–S80.
Dunphy, D. C., & Stace, D. A. (1988). Transformational and coercive strategies for planned organizational change:
Beyond the O. D. model. Organization Studies, 9(3), 317–334.
Dyason, M. D., & Kaye, M. M. (1997). Achieving real business advantage through the simultaneous development of
managers and business excellence. Total Quality Management, 8(2/3), 145–151.
Economist Intelligence Unit (2013). Why good strategies fail: Lessons from the C-Suite. London: Economist Intelligence
Unit Limited.
The Economist (1992). The cracks in quality. The Economist, 18, 69–70.
Edmondson, A. C. (2011). Strategies for learning from failure. Harvard Business Review, 89(4), 48–55.
Eisenhardt, K. M., & Zbaracki, M. J. (1992). Strategic decision making. Strategic Management Journal, 13(8), 17–37.
Franken, A., Edwards, C., & Lambert, R. (2009). Executing strategic change: Understanding the critical management
elements that lead to success. California Management Review, 51(3), 49–73.
Fredrickson, J. W., & Iaquinto, A. L. (1989). Inertia and creeping rationality in strategic decision processes. Academy of
Management Journal, 32(3), 516–542.
Fredrickson, J. W., & Mitchell, T. R. (1984). Strategic decision processes: Comprehensiveness and performance in an
industry with an unstable environment. Academy of Management Journal, 27(2), 399–423.
French, S. N. J., Kouzmin, A., & Kelly, S. J. (2011). Questioning the epistemic virtue of strategy: The emperor has no
clothes. Journal of Management and Organization, 17(4), 434–447.
Galpin, T. J. (1997). Making strategy work – Building sustainable growth capability. San Francisco: Jossey-Bass
Publishers.
Gandolfi, F., & Hansson, M. (2010). Reduction-in-force (RIF) – New developments and a brief historical analysis of a
business strategy. Journal of Management and Organization, 16(5), 727–743.
Gandolfi, F., & Hansson, M. (2011). Causes and consequences of downsizing: Towards an integrative framework.
Journal of Management and Organization, 17(4), 498–521.
Gioia, D. A., & Chittipeddi, K. (1991). Sensemaking and sensegiving in strategic change initiation. Strategic
Management Journal, 12(6), 433–448.
Golembiewski, R. T. (1990). The irony of ironies: Silence about success rates. In R. T. Golembiewski (Ed.), Ironies in
organizational development. NJ, USA: Transaction Publications, 11–29.
Golembiewski, R. T., Proehl, C. W., & Sink, D. (1981). Success of OD applications in the public sector: Toting up the
score for a decade, more or less. Public Administration Review, 41(6), 679–682.
Golembiewski, R. T., Proehl, C. W., & Sink, D. (1982). Estimating the success of OD applications. Training and
Development Journal, 36(4), 86–95.
Goss, D. (2008). Enterprise ritual: A theory of entrepreneurial emotion and exchange. British Journal of Management,
19(2), 120–137.
Gray, D. H. (1986). Uses and misuses of strategic planning. Harvard Business Review, 64(1), 89–97.
Hackett Group (2004a). Balanced scorecards: Are their 15 minutes of fame over? Miami: The Hackett Group. Retrieved
from www.thehackettgroup.com.
Hackett Group (2004b). Most executives are unable to take balanced scorecards from concept to reality, press release,
The Hackett Group, Miami, October, pp. 1–4.
Hafsi, T. (2001). Fundamental dynamics in complex organizational change: A longitudinal inquiry into Hydro-
Québec’s management. Long Range Planning, 34(5), 557–583.
Hall, G., Rosenthal, J., & Wade, J. (1993). How to make reengineering really work. Harvard Business Review, 71(6),
119–131.
Hambrick, D. C., & Chen, M. (2008). New academic fields as admittance-seeking social movements: The case of
strategic management. Academy of Management Review, 33(1), 32–54.
Hambrick, D. C., & Mason, P. A. (1984). Upper echelons: The organization as a reflection of its top managers.
Academy of Management Review, 9(2), 193–206.
Hannan, M. T., & Freeman, J. (1977). The population ecology of organizations. American Journal of Sociology, 82(5),
929–964.
Harrigan, K. R. (1988a). Strategic alliances and partner asymmetries. Management International Review, 28(4), 53–72.
Harrigan, K. R. (1988b). Joint ventures and competitive strategy. Strategic Management Journal, 9(2), 141–158.
Harrigan, K. R. (1988c). Joint ventures: A mechanism for creating strategic change. In A. M. Pettigrew (Ed.),
The management of strategic change. New York: Basil Blackwell, 195–230.
Hart, S. L. (1992). An integrative framework for strategy-making processes. Academy of Management Review, 17(2),
327–351.
Hickson, D. J., Miller, S. J., & Wilson, D. C. (2003). Planned or prioritized? Two options for managing the
implementation of strategic decisions? Journal of Management Studies, 40(7), 1803–1836.
Holder, T., & Walker, L. (1993). TQM implementation. Journal of European Industrial Training, 17(7), 18–21.
Hrebiniak, L. G. (2006). Obstacles to effective strategy implementation. Organizational Dynamics, 35(1), 12–31.
Hughes, M. (2011). Do 70 per cent of all organizational change initiatives really fail? Journal of Change Management,
11(4), 451–464.
Hussey, D. (1996). A framework for implementation. In D. Hussey (Ed.), The implementation challenge. Chichester,
England: John Wiley & Sons, 1–14.
Jantz, C. J., & Kendall, D. A. (1991). Consumer-driven innovative product development. Prism, 1, 24–29. Retrieved
from www.adlittle.com.
Jørgensen, H. H., Owen, L., & Neus, A. (2008). Making change work. Somers: IBM.
Johnson, G., & Scholes, K. (1999). Exploring corporate strategy: Text and cases. New York: Prentice Hall.
Judson, A. S. (1991). Invest in a high-yield strategic plan. The Journal of Business Strategy, 12(4), 34–39.
Kaplan, R. S., & Norton, D. P. (2001). The strategy-focused organization – How balanced scorecard companies thrive in the
new business environment. Boston, MA: Harvard Business School Press.
Kearney, A. T. (1992). Total quality: Time to take off the rose tinted spectacles. Kempston: IFS Publications.
Kiechel, W. (1982). Corporate strategists under fire. Fortune, 106(13), 34–39.
Neely, A., & Bourne, M. (2000). Why measurement initiatives fail. Measuring Business Excellence, 4(4), 3–6.
Nelson, R. R., & Winter, S. G. (1974). Neoclassical vs. evolutionary theories of economic growth: Critique and
prospectus. The Economic Journal, 84(336), 886–905.
Nonaka, I. (2007). The knowledge-creating company. Harvard Business Review, 85(7/8), 162–171.
Nutt, P. C. (1987). Identifying and appraising how managers install strategy. Strategic Management Journal, 8(1), 1–14.
Nutt, P. C. (1999). Surprising but true: Half the decisions in organizations fail. Academy of Management Executive,
13(4), 75–90.
O’Brien, C., & Voss, C. A. (1992). In search of quality – An assessment of 42 British Organizations using the criteria of
the Baldrige Award, Operations Management Paper 92/02, London: London Business School.
O’Shannassy, T. (2001). Lessons from the evolution of the strategy paradigm. Journal of Management and Organization,
7(1), 25–37.
O’Shannassy, T. (2010). Board and CEO practice in modern strategy-making: How is strategy developed, who is the
boss and in what circumstances. Journal of Management and Organization, 16(2), 280–298.
O’Toole, J. (1995). Leading change: Overcoming the ideology of comfort and the tyranny of custom. San Francisco:
Jossey-Bass.
Papadakis, V. M., Lioukas, S., & Chambers, D. (1998). Strategic decision-making process: The role of management
and context. Strategic Management Journal, 19(2), 115–147.
Park, S. (1991). Estimating success rates of quality circle programs: Public and private experiences. Public Administration
Quarterly, 15(1), 133–146.
Pautler, P. A. (2003). The effects of mergers and post-merger integration: A review of business consulting literature,
draft paper, Federal Trade Commission, Bureau of Economics, pp. 1–41. Retrieved from www.ftc.gov/be/rt/
businesreviewpaper.pdf.
Pettigrew, A. M. (1987). Context and action in the transformation of the firm. Journal of Management Studies, 24(6),
649–670.
Pinto, J. K., & Prescott, J. E. (1990). Planning and tactical factors in the project implementation process. Journal of
Management Studies, 27(3), 305–327.
Powell, T. C. (1995). Total quality management as competitive advantage: A review and empirical study. Strategic
Management Journal, 16(1), 15–37.
Prajogo, D. I., & Brown, A. (2006). Approaches to adopting quality in SMEs and the impact on quality management
practices and performance. Total Quality Management, 17(5), 555–566.
Project Management Institute (2014). The high cost of low performance. Newtown Square: Project Management
Institute.
Prospectus Strategy Consultants (1996). Profiting from increased consumer sophistication – A survey of retail financial
services in Ireland and Great Britain. Dublin, Ireland: Prospectus Strategy Consultants.
Quinn, J. B. (1989). Strategic change: ‘logical incrementalism’. Sloan Management Review, 30(4), 45–60.
Raps, A. (2005). Strategy implementation – An insurmountable obstacle? Handbook of Business Strategy, 141–146.
Rumelt, R. P., Schendel, D. E., & Teece, D. J. (1994). Fundamental issues in strategy. In R. P. Rumelt, D. E. Schendel,
& D. J. Teece (Eds.), Fundamental issues in strategy – A research agenda. Boston: Harvard Business School Press, 9–47.
Senge, P. (1990). The leader’s new work: Building learning organizations. Sloan Management Review, 32(1), 7–23.
Sila, I. (2007). Examining the effects of contextual factors on TQM and performance through the lens of organisational
theories: An empirical study. Journal of Operations Management, 25(1), 83–109.
Sirkin, H. L., Keenan, P., & Jackson, A. (2005). The hard side of change management. Harvard Business Review,
83(10), 109–118.
Sitkin, S. B., Sutcliffe, K. M., & Schroeder, R. G. (1994). Distinguishing control from learning in total quality
management: A contingency perspective. Academy of Management Review, 19(3), 537–564.
Smith, M. (2005). The balanced scorecard. Financial Management, 27–28.
Smith, S., Tranfield, D., Foster, M., & Whittle, S. (1994). Strategies for managing the TQ agenda. International
Journal of Operations & Production Management, 14(1), 75–88.
Soltani, E., Lai, P., & Gharneh, N. S. (2005). Breaking through barriers to TQM effectiveness: Lack of commitment of
upper-level management. Total Quality Management, 16(8/9), 1009–1021.
Stace, D. A., & Dunphy, D. C. (1996). Translating business strategies into action: Managing strategic change.
In D. Hussey (Ed.), The implementation challenge. Chichester: John Wiley & Sons, 69–86.
Stanislao, J., & Stanislao, B. C. (1983). Dealing with resistance to change. Business Horizons, 26(4), 74–78.
Stadler, C., & Hinterhuber, H. H. (2005). Shell, Siemens and DaimlerChrysler: Leading change in companies with
strong values. Long Range Planning, 38, 467–484.
Sterling, J. (2003). Translating strategy into effective implementation: Dispelling the myths and highlighting
what works. Strategy & Leadership, 31(3), 27–34.
Taylor, W. A. (1997). Leadership challenges for smaller organisations: Self-perceptions of TQM implementation.
Omega – The International Journal of Management Science, 25(5), 567–579.
Taylor, W. A., & Wright, G. H. (2003). A longitudinal study of TQM implementation: Factors influencing success
and failure. Omega – The International Journal of Management Science, 31(2), 97–111.
Van de Ven, A. H., & Poole, M. S. (1995). Explaining development and change in organizations. Academy of
Management Review, 20(3), 510–540.
Voss, C. A. (1988). Success and failure in advanced manufacturing technology. International Journal of Technology
Management, 3(3), 285–297.
Voss, C. A. (1992). Successful innovation and implementation of new processes. Business Strategy Review, 3(1), 29–44.
Waclawski, J. (2002). Large-scale organizational change and performance: An empirical examination. Human Resource
Development Quarterly, 13(3), 289–305.
Walsh, A., Hughes, H., & Maddox, D. P. (2002). Total quality management continuous improvement: Is the
philosophy a reality? Journal of European Industrial Training, 26(6), 299–307.
Weick, K. E., & Quinn, R. E. (1999). Organizational change and development. Annual Review of Psychology, 50(1),
361–386.
Wernham, R. (1985). Obstacles to strategy implementation in a nationalized industry. Journal of Management Studies,
22(6), 632–648.
Wilkinson, A., & Mellahi, K. (2005). Organizational failure. Long Range Planning, 38(3), 233–238.
Wilkinson, A., Redman, T., & Snape, E. (1994). The problems with quality management – The view of managers:
Findings from an institute of management survey. Total Quality Management, 5(6), 397–406.
Woodley, P. M. (2006). Culture management through the balanced scorecard: A case study, unpublished PhD thesis.
Defence College of Management and Technology, Cranfield University.
Zairi, M. (1995). Strategic planning through quality policy deployment: A benchmarking approach. In G. K. Kanji
(Ed.), Total quality management: Proceedings of the first world congress. London: Chapman & Hall, 207–215.