Understanding project evaluation – a review and reconceptualization
Omid Haass, School of Property, Construction and Project Management, RMIT University, Melbourne, Australia, and Gustavo Guzman
International Journal of Managing Projects in Business, Vol. 13 No. 3, 2020, pp. 573-599. DOI 10.1108/IJMPB-10-2018-0217
https://www.emerald.com/insight/1753-8378.htm
Received 10 October 2018; Revised 19 October 2019; Accepted 26 November 2019
Abstract
Purpose – The purpose of this paper is to understand the underlying logics applied by different project
evaluation approaches and to propose an alternative research agenda.
Design/methodology/approach – This paper explores the project evaluation literature via conducting a
qualitative research applying systematic literature review and thematic analysis.
Findings – The project evaluation literature has mainly concentrated on the objective aspects of project
evaluation and overlooked the subjective aspects that reflect the temporal, dynamic, complex and subjective
nature of today’s projects. The authors propose a meta-framework that helps project practitioners to select an
appropriate project evaluation criterion for their projects by considering the strengths and limitations of their
preferred project evaluation model as well as making project evaluators aware of the underlying logics
associated with diverse project evaluation approaches.
Research limitations/implications – This study suggests new conceptual approaches to deal with
some of the major challenges in the project evaluation field. Practice-based views, narrative analysis and
actor-network theory are likely to be useful tools to better understand and cope with the projects’ uncertainty
and complexity.
Practical implications – The findings of this research assist project management practitioners and
particularly project evaluators to enhance their understanding of the subjectivity, complexity and dynamics
of current projects. To increase the reflexivity and resilience of project evaluation practice, this study also
proposes new directions to apply different criteria, sub-criteria and indicators to the evaluation practice.
Originality/value – The originality of this study lies in transcending the conventional objective and
rational approaches prevailing in current project evaluation practices. It proposes a research agenda that
paves the way to address the shortcomings of conventional project evaluation practice.
Keywords Project management, Systematic literature review, Epistemology, Project evaluation,
Evaluation criteria, Project appraisal, Project assessment
Paper type Literature review
Introduction
Project evaluation is a multi-layered affair. Because projects vary in size, industrial sector,
availability of resources and specific goals, they must be adapted to the uniqueness of their
context, especially with changes occurring over time. Project evaluation also plays diverse roles:
it can be useful to demonstrate project transparency and accountability, and it allows project
lessons learned to be shared, constructing knowledge
and expertise (Arrow et al., 2003; Andersen et al., 2002) that can be incorporated into policy
and practice (Rolstadås et al., 2014; Sato and Chagas, 2014). Project evaluation can also
provide a solid foundation for examining our prior assumptions and constraints to review
whether they are still reliable (Davis, 2014; Hanisch and Wald, 2014). But the “official
narrative” of project evaluation can also be used to confer either status or stigma, legitimize
particular behaviors or courses of action, justify large or risky projects or distance the future
from the past (McLeod et al., 2012; Snowden and Boone, 2007).
As a result of this multiplicity of project situations and the diversity of roles project
evaluation can play, there is a constellation of ways of evaluating projects. This brings a set
of issues regarding how projects are really evaluated. There is a lack of consensus in the
literature regarding how to evaluate projects (Anzoise and Sardo, 2016; Turner and Zolin,
2012); the project evaluation field is fractioned, with multiple approaches attempting to show
diverse angles of the phenomena to different audiences; most project evaluation approaches
consider only time, cost and quality – called the golden triangle in project management
(Anzoise and Sardo, 2016) – while neglecting “soft” criteria such as long-term goals and impact
on society and the environment (for exceptions see Ngacho and Das, 2014; Ika et al., 2012).
The latter means the project evaluation literature has mainly concentrated on the more
objective aspects of project evaluation and overlooked the subjective aspects that reflect the
temporal, dynamic, complex and subjective nature of today’s projects.
Notably, our investigation shows that the project evaluation literature is biased toward the
application of objective/tangible criteria to evaluate projects. That is, while project
evaluation criteria based on objective measures are widely used and uncontroversial,
evaluation criteria looking at more subjective indicators are less commonly applied and still
controversial. Several authors have expressed concern about the predominance of objectivist
project management approaches (e.g. Ika, 2009; Söderlund, 2004). Because of the increasing
complexity of projects due to uncertainty, ambiguity and known and unknown risks
(Whitty and Maylor, 2009), new perspectives that account for both objective and subjective
aspects of the project evaluation phenomena are needed (Hanisch and Wald, 2014; Turner
et al., 2009) since evaluation is, partly, based on informed judgment. Cicmil and Hodgson
(2006), for example, called for the development of new conceptual project management
trajectories that look at project management as a non-neutral, socially constructed phenomenon
constituted by interactions among people, objects, materials and unexpected events, all
permeated by power relations (Linde and Linderoth, 2006).
In this paper we develop a meta-framework that helps to grasp the underlying strengths
and weaknesses of diverse project evaluation approaches. Based on this examination, we
propose a novel research agenda that can help to address some of the shortcomings
pinpointed in our analysis.
We argue for the unfeasibility of a general framework for project evaluation. Instead, we
contend for an adaptive approach that simultaneously considers objective and subjective criteria
as well as timing. That is, project evaluation needs to be customized. Every single project needs
to be treated differently and appropriately, based on an ongoing conversation between the different
stakeholders involved in the evaluation process and how they make sense of the project’s outcomes
and results, which can differ at different times.
Our investigation reveals that the project evaluation literature has overlooked the
necessity to view project evaluation as a socially constructed endeavor, in which
evaluators and those who are evaluated interact with each other on an ongoing basis to
make sense of the evaluation process and its outcomes. Specifically, we propose an adaptive
meta-framework that can be used to gauge and fine-tune the evaluation criteria selected for
specific projects. In this sense, the proposed meta-framework functions as a sorting device
that can help project practitioners to recognize the strengths and weaknesses of their
preferred project evaluation approach.
The proposed framework can be useful for practitioners as it helps to uncover the degree of
subjectivity embedded in the evaluation criteria used and the timing aspect associated with the
specific situation, and to select balanced criteria that account simultaneously for both the objective
and subjective dimensions of projects. This means that the framework also advances project
evaluation theory, since it goes beyond the development of ad hoc criteria for specific projects
and their association with the simplistic examination of success and failure by looking at the extent
to which criteria have been met.
This paper is structured as follows. In the second section the methodology used in this study
is described in detail. The results of the literature review and the proposed conceptual framework
for the evaluation of large projects are presented in the third section. The fourth and fifth sections
discuss the results and provide conclusions and directions for future research, respectively.
Methodological considerations
In this study we conducted a systematic literature review to explore project evaluation
constructs. To build categories that explain the wide diversity of project evaluation criteria and
perspectives used in the literature, we applied a thematic analysis to 138 articles. Following an
inductive qualitative process, the authors independently selected criteria and classified them into
groups. Then, both authors jointly agreed on a final classification of project evaluation criteria.
These methodological processes avoided bias and minimized the chances of using pre-existing
frameworks to interpret current project evaluation practices (Krippendorff, 2004).
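The article reports only that the two coders classified criteria independently and then reconciled their groupings; it does not report an agreement statistic or any software. Purely as an illustrative sketch of how such a two-coder check could be run (the paper identifiers and labels below are hypothetical), a minimal agreement calculation might look like this:

# Illustrative only: toy agreement check between two coders' criterion labels.
# The study describes independent coding followed by joint reconciliation;
# it does not report this computation. All identifiers and labels are hypothetical.
coder_a = {"paper_01": "efficiency", "paper_02": "impact", "paper_03": "sustainability"}
coder_b = {"paper_01": "efficiency", "paper_02": "business success", "paper_03": "sustainability"}

shared = coder_a.keys() & coder_b.keys()
agreed = sum(coder_a[p] == coder_b[p] for p in shared)
print(f"Simple agreement: {agreed}/{len(shared)} = {agreed / len(shared):.2f}")

# Items the coders labeled differently would be settled in the joint discussion
# the authors describe before fixing the final classification.
disagreements = [p for p in shared if coder_a[p] != coder_b[p]]
print("To reconcile:", disagreements)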
This paper focuses on articles published in major project management journals. The literature
search was performed using a number of keywords from the relevant project evaluation labels
and definitions to obtain relevant samples of research. The search terms were then inserted
into nine search engines. This first step identified 957 research papers. By improving the
search analysis process and criteria, the number of findings was reduced systematically, as
demonstrated in Figure 1. The relevant research background selection process relied on reading
and understanding the literature including the research title, abstract, introduction and
conclusion, together with a set of selection criteria. These selection criteria are listed as follows:
• the paper’s topic clearly falls within the project management context;
• the paper used a minimum of one keyword; and
• the paper focused on the evaluation concept in project environments.
The second phase involved an investigation of the reference lists of the refined articles
(see Figure 1), as some of the most important works in the project evaluation field (which had
to be included in the review because of the significance of the contributions) had appeared
either in books or as articles. The 19 works added by this strategy were included in the sample
(see Figure 1). In the end, a total of 72 papers were included in the literature review.
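As a rough illustration of the kind of screening logic described above (the keywords, record fields and example records here are hypothetical, not the study’s actual search strings), the title-and-abstract filtering step against the three selection criteria could be sketched as:

# Illustrative sketch of the screening step described in the text.
# Keywords and records are hypothetical; the study's actual search terms are not listed here.
KEYWORDS = {"project evaluation", "project appraisal", "project assessment", "project success"}

records = [
    {"title": "Evaluating project success in construction",
     "abstract": "We study project evaluation criteria for large builds..."},
    {"title": "Supply chain optimisation",
     "abstract": "A linear programming model for logistics networks..."},
]

def is_relevant(record):
    text = (record["title"] + " " + record["abstract"]).lower()
    in_pm_context = "project" in text                                  # criterion 1: project management context
    has_keyword = any(k in text for k in KEYWORDS)                     # criterion 2: at least one keyword
    about_evaluation = any(w in text for w in ("evaluat", "apprais", "assess"))  # criterion 3: evaluation focus
    return in_pm_context and has_keyword and about_evaluation

shortlist = [r for r in records if is_relevant(r)]
print(len(shortlist), "record(s) retained for full-text review")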
[Figure 2. Reframing project evaluation criteria. The figure plots the epistemology of evaluation (subjective to objective) against time (ex ante, interim, ex post); n gives the frequency of each criterion in the literature. Reported frequencies (ex ante/interim/ex post): sustainability 12/8/15; impact 20/20/58; business success 10/7/21; efficiency and effectiveness 55/97/137.]
Project efficiency and effectiveness. While we have grouped these criteria for the sake of
simplicity, they are different. Project efficiency is a measure of how economic resources are
converted into the desired results (Ngacho and Das, 2014; Xu and Yeh, 2014; Dvir et al., 2006)
and indicates whether or not the project met its schedule, budget and quality metrics.
Tangible indicators are used to measure project efficiency. Project efficiency is, however, a
limited criterion, as there are other factors that contribute to the overall outcome of the project
(Serrador and Turner, 2015). Project effectiveness is the extent to which the project’s
objectives, set out in the project plan, were achieved (Hanisch and Wald, 2014; Rolstadås
et al., 2014; McLeod et al., 2012). It also seeks to determine the factors that influence the
achievement or non-achievement of the objectives of the projects (Davis, 2014; Mir and
Pinnington, 2014). The most commonly used indicator in the literature for measuring project
effectiveness is “how well the project’s product satisfies users’ need” (Ika et al., 2012). These
two criteria are also proposed by the OECD[1] to evaluate international development programs
(Chianca, 2008). Thus, project efficiency and effectiveness criteria focus on measuring
objective outcomes and processes.
There are two project evaluation approaches that rely on efficiency and effectiveness
criteria – tactical (or operational) evaluation (Im et al., 2015) and goal-oriented evaluation
(Marsh, 1978). Tactical (or operational) evaluation focuses on measurement of objective
operational factors and/or outcomes of the project or program. Criteria for evaluating
operational indicators are the most well developed and applied by practitioners (see Figure 2)
and, can be applied ex ante, interim or ex post (Eder et al., 2006). In goal-oriented evaluation,
like in tactical evaluation, the specification and measurement of project’s goals is the central
aspect of evaluation (Marsh, 1978). It is based on the Logical Framework Approach which, in turn,
is grounded in the principles of planning and control. That is, embedded within this model
are assumptions about clear cause-and-effect relationships; linearity of events; and the
pre-determination of the project’s goals, processes and outputs. Thus, it is possible to say that
both tactical and goal-oriented evaluation approaches fall within the objectivist approach of
project evaluation (see Figure 2).
Objectivist approaches of project evaluation are suitable for projects which perform in
stable contexts/situations with stable goals, inputs, processes, stakeholders, resources
available and outputs (e.g. Crawford and Bryce, 2003). Their objective character,
nevertheless, brings some shortcomings. First, to be measurable, factors/outcomes to be
measured need to be narrowly defined. This is problematic because project outcomes are
the result of combinations of multiple variables. Second, operational criteria are unable to
consider aspects of the project which are difficult to measure or for which there is no
available hard data. Third, tactical and goal-oriented approaches overlook subjective
aspects that affect both the evaluation process and evaluation outcomes. Fourth,
dependency on project objectives is a concern, since project objectives can be as diverse as the
project’s stakeholders (Baccarini, 1999). In short, objectivist project evaluation approaches do not seem to address
projects in which non-linear relations emerge; no strong connections between effects and
causes can be made; goals/outputs keep changing throughout the project; and there is no
agreement among key stakeholders on the project’s goals and objectives (Cicmil et al., 2006).
Business success is concerned with the wider organization (the firm, consortium or
government) sponsoring the project and its long-term viability (Ika et al., 2012; Müller and
Turner, 2007; Shenhar and Dvir, 2007). This criterion considers the accomplishment of
strategic objectives and benefits, as well as the impacts on markets and competitors,
business development or expansion, and the ability to react to future opportunities or
challenges (Mir and Pinnington, 2014; McLeod et al., 2012; Shao et al., 2012). It also
encompasses project team gains – learning, motivation and lessons learnt (Davis, 2014;
Maylor et al., 2006). We consider this criterion to be at a medium level of subjectivity, as it usually
uses a mix of both tangible and intangible data, and stakeholders are likely to agree on the
financial bottom line as a definition of business success. While interim and ex post financial
benefits can be measured by objective data such as costs and income, it is hard to quantify
ex ante and interim strategic benefits, since they might be either long term or unknown at the
time of project development. Conversely, strategic benefits can be accurately measured,
but only in the long term (ex post).
Project impact refers to the direct or indirect, primary and secondary long-term effects
produced by a project, intentionally or unintentionally (Ika et al., 2012). This criterion
assesses whether the outcomes achieved address the needs, problems and issues of key
stakeholders including project investors, project sponsors, contractors, customers and
project team (Ngacho and Das, 2014; Ika et al., 2012; Shenhar and Dvir, 2007). That is, this
criterion goes beyond the project’s set goals. Because of this, project impact criteria
rely more on subjective than objective measures. For example, achievement of the organization’s
strategic goals or impact on the project team can hardly be measured in a quantitative
and specific way.
Project sustainability is related to the extent to which the project’s outcome maximizes
inter-generational welfare and maintenance of the environment (Müller and Turner, 2007),
rather than solely maintenance of the economy’s productive base (Ngacho and Das, 2014;
Ika et al., 2012). This criterion is also subjective, as there are almost as many definitions of the
concept of sustainability as there are stakeholders, and sustainability outcomes are usually long-term
affairs. We have conceptualized sustainability as an independent criterion because, being
indirect and long term in nature, it is not covered under any of the other criteria (Ngacho and
Das, 2014). The sustainability criterion is especially relevant in development projects, which
deal with a range of social, economic, environmental, cultural and political concerns, usually
occurring during the project execution, and after the project completion (Ika et al., 2012;
Tatikonda and Rosenthal, 2000). Table II summarizes the above criteria.
Timing of project evaluation in the project life cycle: the examination of criteria used in
the literature with respect to time led us to categorize the evaluation criteria used by the literature
into three different categories: ex ante, interim and ex post.
Ex ante evaluation evaluates a project prior to its implementation. It is conducted by
project investors or on their behalf, to ensure that the project is feasible and will provide
returns on investment. Criteria in this category usually follow a clock-time logic. Ex ante
evaluation usually follows a combination of objective and subjective criteria. While it is
based on past hard data (economic indicators and cost of labor), it also uses subjective views
to justify taken-for-granted contextual assumptions embedded in objective criteria, such as
the famous ceteris paribus principle that orthodox economists apply and the view that the
future is a repetition of the past. Of course, the latter is hardly true, since projects are unique
(Makarova and Sokolova, 2014).
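For example (an illustrative calculation only, using invented figures; discounted cash-flow indicators such as NPV appear among the ex ante financial criteria in Table III), an ex ante appraisal of this kind quietly assumes that the forecast cash flows and discount rate will hold, i.e. that the future will resemble the past:

# Illustrative ex ante appraisal sketch: net present value of a hypothetical project.
# All figures are invented; this is not a calculation from the article.
cash_flows = [-1_000_000, 250_000, 300_000, 350_000, 400_000]  # year-0 outlay, then forecast yearly returns
discount_rate = 0.08  # assumed cost of capital

npv = sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cash_flows))
print(f"NPV at {discount_rate:.0%}: {npv:,.0f}")  # a positive NPV would support a go-ahead decision ex ante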
Interim project evaluation reflects the status and progress of the project against its plan.
At this stage, actual data and information regarding the project’s performance and its results
are collected and analyzed during implementation. As in the previous category, clock
time is used here. This type of evaluation works well when focusing on criteria that deal
with highly tangible outcomes that can be assessed through application of different project
management techniques such as Earned Value Analysis.
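Earned Value Analysis, mentioned above, illustrates this reliance on tangible interim data; a minimal sketch with invented figures (not taken from the article) is:

# Minimal Earned Value Analysis sketch with invented interim figures.
planned_value = 500_000   # PV: budgeted cost of work scheduled to date
earned_value  = 450_000   # EV: budgeted cost of work actually performed
actual_cost   = 480_000   # AC: actual cost of the work performed

cost_variance     = earned_value - actual_cost    # CV < 0 means over budget
schedule_variance = earned_value - planned_value  # SV < 0 means behind schedule
cpi = earned_value / actual_cost                  # cost performance index
spi = earned_value / planned_value                # schedule performance index
print(f"CV={cost_variance:,}  SV={schedule_variance:,}  CPI={cpi:.2f}  SPI={spi:.2f}")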
Ex post evaluation mostly focuses on the long-term outcomes of the project. Because it is
a medium and/or long-term exercise, most ex post evaluation criteria follow a social time
perspective. A key characteristic of these criteria is their high level of subjectivity. There is
neither an agreed definition of the criteria nor a calculative device to measure them. For instance, the
concept of sustainability is defined differently in diverse industrial sectors, by practitioners,
academics and government organizations (Ngacho and Das, 2014; Turner and Zolin, 2012).
Table II. Project evaluation criteria: a synthesis from the literature

Efficiency (time, cost, quality)
Sub-criteria: project achievements; product achievements; financial performance; project management performance.
Authors: Sheffield and Lemétayer (2013), Muller et al. (2012), Ika (2009), Söderlund (2004), Chua et al. (1999), Atkinson (1999), Davis (2014), Naoum (1994), Brown and Adams (2000), Cheung et al. (2000), Wang and Huang (2006), Diallo and Thuillier (2004), Mishra et al. (2011), Beringer et al. (2013), Milosevic and Patanakul (2005), Andersen et al. (2002), Müller and Turner (2007), Hanisch and Wald (2014), Ngacho and Das (2014), Wohlin and Andrews (2001), Lim and Mohamed (1999), Prakash and Nandhini (2015), Maloney (1990), Doloi et al. (2011), Freeman and Beale (1992), Tayler (1992), Parfitt and Sanvido (1993), Riggs et al. (1992), Bubshait and Almohawis (1994), Chan et al. (2002), Westerveld (2003), Liu and Walker (1998), Serrador and Pinto (2015), Serrador and Turner (2015), Chang et al. (2013), Mazur et al. (2014), McLeod et al. (2012), Ika et al. (2012), Bryde and Robinson (2005), Mir and Pinnington (2014), Dvir et al. (2003), Turner and Zolin (2012), Pinto and Slevin (1988), Shenhar et al. (1997), Xu and Yeh (2014), Polydoropoulou and Roumboutsos (2009).

Effectiveness (how well the project’s product satisfies users’ need)
Sub-criteria: project achievements; product achievements; financial performance; project management performance.
Authors: Muller et al. (2012), Müller and Turner (2007), Hanisch and Wald (2014), Tatikonda and Rosenthal (2000), Alderman and Ivory (2011), Mollaoglu-Korkmaz et al. (2011), Xu and Yeh (2014), Ika et al. (2012), McLeod et al. (2012), Andersen et al. (2002), Turner et al. (2009), Serrador and Turner (2015), Mazur et al. (2014), Dvir et al. (2003), Chang et al. (2013), Sheffield and Lemétayer (2013), Khan et al. (2013), Rolstadås et al. (2014), Parfitt and Sanvido (1993), Chan et al. (2002), Polydoropoulou and Roumboutsos (2009), Wohlin and Andrews (2001), Freeman and Beale (1992), Tayler (1992), Riggs et al. (1992).

Business success (increased market share, increased parent company’s profit)
Sub-criteria: financial benefits; strategic benefits.
Authors: Turner and Zolin (2012), Freeman and Beale (1992), Tayler (1992), Parfitt and Sanvido (1993), Milosevic and Patanakul (2005), Shenhar and Dvir (2007), Chan et al. (2002), Jugdev and Muller (2005), Shenhar et al. (1997), Dvir et al. (2003), McLeod et al. (2012), Mir and Pinnington (2014), Hanisch and Wald (2014), Muller et al. (2012), Müller and Turner (2007), Ika (2009), Chang et al. (2013), Shao et al. (2012), Heravi and Ilbeigi (2012), Frinsdorf et al. (2014), Henriksen and Røstad (2010), Chen (2015), Menches and Hanna (2006), Albert et al. (2017), Lenfle (2012).

Impact (stakeholder satisfaction, project team empowerment)
Sub-criteria: stakeholder impact; environmental and social impact; organizational impact.
Authors: Muller et al. (2012), Naoum (1994), Müller and Turner (2007), Liu and Walker (1998), Freeman and Beale (1992), Chan et al. (2002), Shenhar and Dvir (2007), Serrador and Pinto (2015), Westerveld (2003), Ika (2009), Lim and Mohamed (1999), Parfitt and Sanvido (1993), Bryde and Robinson (2005), Khan et al. (2013), Pinto and Slevin (1988), Milosevic and Patanakul (2005), Chang et al. (2013), Sheffield and Lemétayer (2013), Prakash and Nandhini (2015), Mir and Pinnington (2014), Shenhar et al. (1997), Turner and Zolin (2012), McLeod et al. (2012), Dvir et al. (2003), Polydoropoulou and Roumboutsos (2009), Dendena and Corsi (2015), Ngacho and Das (2014), Ika et al. (2012), Delarue and Cochet (2013), Pisarski et al. (2011), Turner et al. (2009), Serrador and Turner (2015), Diallo and Thuillier (2004), Andersen et al. (2002), Wang and Huang (2006), Jugdev and Muller (2005), Bubshait and Almohawis (1994), Kumaraswamy and Thorpe (1996), Shao et al. (2012).

Sustainability (social concerns, environmental concerns)
Sub-criteria: environmental sustainability; social sustainability; economic sustainability.
Authors: Kumaraswamy and Thorpe (1996), Liu and Walker (1998), Chan et al. (2002), Ngacho and Das (2014), Müller and Turner (2007), Arrow et al. (2003), Ika et al. (2012), Turner and Zolin (2012), Mishra et al. (2011), Mollaoglu-Korkmaz et al. (2011), Bamberger (1989), Henriksen and Røstad (2010), Cha and Kim (2011), Masrom et al. (2015), Holvoet and Renard (2003), Zvingule et al. (2013), Bueno Cadena and Vassallo Magro (2015), Samset and Christensen (2017), Van Wee (2012).
Differently, ex post evaluation is based on factual results at the end of the project, but
there is no agreed definition of what to measure or how to measure it. Furthermore, it occurs
under the umbrella of social time as the time to perform an ex post evaluation is socially
negotiated among key stakeholders. Timing of ex post evaluation is customized and
depends on stakeholders’ intentions, aims and resources to influence the timing and terms of
reference of ex post evaluation (Muller et al., 2012; Müller and Turner, 2007; Hanisch and
Wald, 2014).
In the case of ex-post evaluation of sustainability criterion, for example, actual data from
a concluded project are used to examine the project’s benefits (or harms) to the environment.
This criterion can be considered subjective because the definition of sustainability varies
significantly from author to author; the scope of available data can also vary significantly
from project to project; and it uses social time, since it is performed whenever the main
stakeholders agree. It is important to note that earlier or later timing of a sustainability evaluation
might affect the evaluation outcomes, since data at different points in time will picture diverse
outcomes for diverse stakeholders (Turner and Zolin, 2012; Ngacho and Das, 2014).
Table III maps project evaluation criteria highlighting timing of project evaluation.
Returning to Table III and taking a landscape view, it is possible to observe that project
evaluation practitioners have overemphasized their evaluation efforts in the efficiency and
efficacy category before (ex ante), during (interim) and after (ex post) the project. This is hardly a surprise,
since objectivist criteria have been the landmark of project evaluation. The other aspect that
needs to be highlighted is the little emphasis of evaluation efforts on subjective criteria,
especially in the least tangible of groups: sustainability and impact across time – before, during
and after the project. This denotes that long-term evaluation of the project’s impacts on society and
the environment is overlooked by most projects, despite the increasing governmental and
societal demands for considering environmental impacts while managing projects (e.g. ISO 14000
in the construction industry). The most important feature of the framework, however, is related to
its utility for helping project evaluators and practitioners to select the appropriate project
evaluation criteria for their projects.

Table III. Mapping the project evaluation criteria across two dimensions: temporality and epistemology

Subjective – Sustainability
Ex ante: environmental sustainability (Holvoet and Renard, 2003; Zvingule et al., 2013; Bueno Cadena and Vassallo Magro, 2015); social sustainability – the positive effects persist after the conclusion of the project, ethics, health and safety (Samset and Christensen, 2017; Van Wee, 2012; Bueno Cadena and Vassallo Magro, 2015); economic sustainability – resource utilization (Bueno Cadena and Vassallo Magro, 2015).
Interim: environmental sustainability (Bamberger, 1989; Henriksen and Røstad, 2010; Masrom et al., 2015; Cha and Kim, 2011); social sustainability – public welfare (Bamberger, 1989; Henriksen and Røstad, 2010; Masrom et al., 2015); economic sustainability (Bamberger, 1989; Henriksen and Røstad, 2010).
Ex post: environmental sustainability (Kumaraswamy and Thorpe, 1996; Liu and Walker, 1998; Chan et al., 2002; Ngacho and Das, 2014; Müller and Turner, 2007; Arrow et al., 2003; Ika et al., 2012); social sustainability – social costs, social benefits, ethics (Turner and Zolin, 2012; Ngacho and Das, 2014; Müller and Turner, 2007; Arrow et al., 2003; Ika et al., 2012; Mishra et al., 2011); economic sustainability – sustainable project outcomes (Mollaoglu-Korkmaz et al., 2011; Ika et al., 2012).

Subjective – Impact
Ex ante: stakeholders impact – quality of stakeholder participation in the planning process (openness, equity, dialogue), customer satisfaction, end user satisfaction, benefits for external and internal users (Van Buuren and Nooteboom, 2009; Raschke and Sen, 2013; Jukić et al., 2013); organizational impact – top management, subjective, risk, technological and behavioral impact, integration of the project achievements with existing knowledge, proximity and productivity effects, investment and land use impacts and employment effects (Rosacker and Olson, 2008; Oviedo-García, 2016; Laird and Venables, 2017); environmental and social impact – what other positive or negative effects may occur because of the project, project impact, effect.
Interim: stakeholders impact – customer satisfaction (Kärnä and Junnonen, 2017; Willar, 2017; Albert et al., 2017; Zhang and Fan, 2013; Masrom et al., 2015; Tohumcu and Karasakal, 2010; Almahmoud et al., 2012; Heravi and Ilbeigi, 2012); environmental and social impact (Bamberger, 1989; Eriksen and Lensink, 2015; Veillard et al., 2013); organizational impact – staff skills, cooperation, learning and innovation, relationship, benefit, knowledge accumulation, education, international collaboration (Kärnä and Junnonen, 2016, 2017; Willar, 2017; Masrom et al., 2015; Park et al., 2013).
Ex post: stakeholders impact – client, customer, project team and end user satisfaction (Muller et al., 2012; Naoum, 1994; Müller and Turner, 2007; Liu and Walker, 1998; Freeman and Beale, 1992; Chan et al., 2002; Shenhar and Dvir, 2007; Kumaraswamy and Thorpe, 1996; Serrador and Pinto, 2015; Westerveld, 2003; Ika, 2009); environmental and social impact (Dendena and Corsi, 2015; Ngacho and Das, 2014; Prakash and Nandhini, 2015; Turner and Zolin, 2012; Parfitt and Sanvido, 1993; Ika et al., 2012; Delarue and Cochet, 2013; Pisarski et al., 2011); organizational impact – project team empowerment, learning, good relationship with stakeholders, esthetics, organization’s understanding of project management as a strategic asset, health and safety impact.

Subjective – Effectiveness
Ex ante: project achievements – expectations are fulfilled, operational effectiveness (increased revenue and market share), project relevance, validity, fairness, reliability, the need for the project (Samset and Christensen, 2017; Raschke and Sen, 2013; Holvoet and Renard, 2003; Zvingule et al., 2013; Van Wee and Roeser, 2013); product achievements – appropriateness, level of attainability/achievement, requirement, technical specifications, quality of the solutions (usefulness, applicability, technical and business quality, merit) (Makarova and Sokolova, 2014; Rosacker and Olson, 2008; Van Wee and Roeser, 2013; Van Buuren and Nooteboom, 2009).
Interim: project achievements – meeting customer needs, achievement of the project’s goals, value added, credibility, feasibility, relevance (Zhang and Fan, 2013; Bamberger, 1989; Makarova and Sokolova, 2014; Shek and Yu, 2012; Lauras et al., 2010; Ramezani and Lu, 2014; Kärnä and Junnonen, 2016, 2017; Heravi and Ilbeigi, 2012; Bower and Finegan, 2009; Cheng et al., 2013; Bhalla et al., 2013; Baccarini, 1999); product achievements – products compatible with international players, functionality, usability, reliability, flexibility, technical performance, simplicity of the design, appropriateness (Bamberger, 1989; Makarova and Sokolova, 2014; Shek and Yu, 2012; Lauras et al., 2010; de Oliveira Lacerda et al., 2011; Olsson and Bull-Berg, 2015; Tohumcu and Karasakal, 2010).
Ex post: project achievements – meeting the explicit and implicit objectives of the project set by (possibly multiple, different) stakeholders, meeting user requirements, creating value for the stakeholders, problem solving (Muller et al., 2012; Müller and Turner, 2007; Hanisch and Wald, 2014; Tatikonda and Rosenthal, 2000; Alderman and Ivory, 2011; Mollaoglu-Korkmaz et al., 2011; Pisarski et al., 2011; Xu and Yeh, 2014; Ika et al., 2012; McLeod et al., 2012; Andersen et al., 2002; Turner et al., 2009; Serrador and Turner, 2015; Mazur et al., 2014; Dvir et al., 2003; Chang et al., 2013); product achievements – addresses a need, product is used, functionality, maintainability, reliability, availability, safety, meeting technical performance specifications (Sheffield and Lemétayer, 2013; McLeod et al., 2012; Andersen et al., 2002; Khan et al., 2013; Rolstadås et al., 2014; Parfitt and Sanvido, 1993; Chan et al., 2002; Polydoropoulou and Roumboutsos, 2009).

Objective – Efficiency
Ex ante: financial performance – net present value (NPV), internal rate of return (IRR), return on investment (ROI), payback, budget constraint, probability assessment, cost-benefit analysis (CBA), net present worth (NPW), construction price level, domestic economic conditions, money market conditions, unemployment level, capital market conditions, population growth and global economic climate, the costs of planning, development, implementation and operation (project life cycle costing) and risks, energy efficiency, benefits, cost, ranking, sensitivity analysis, realistic analysis (Rosacker and Olson, 2008; Makarova and Sokolova, 2014; Arrow et al., 2003; Van Wee and Roeser, 2013; Van Wee, 2012; Holvoet and Renard, 2003; Zvingule et al., 2013); project management performance – the uses of resources and time are reasonable, project life cost, time of project delivery, cost, benefits, risk, value, business case, operational efficiency (cost, time, product/service quality), internal quality assurance, accountability, supervision, procedural quality of the planning process (transparency, timeliness), risk analysis (Samset and Christensen, 2017; Raschke and Sen, 2013; Holvoet and Renard, 2003; Zvingule et al., 2013).
Interim: financial performance – life cycle costs, cost (de Oliveira Lacerda et al., 2011; Polydoropoulou and Roumboutsos, 2009; Almahmoud et al., 2012; Hwang et al., 2013; Najmi et al., 2009; Heravi and Ilbeigi, 2012; Albert et al., 2017; Cheng et al., 2012; Zhang and Fan, 2013; Masrom et al., 2015; Cha and Kim, 2011; Tohumcu and Karasakal, 2010; Ikpe et al., 2014); project management performance – project efficiency, time/schedule performance, quality, meeting technical specifications, scope, payment, legal issues, delivery method, selection methods, stakeholder commitments, changes and rework, project management, safety, productivity, performance, value for money, project management leadership, staff, policy and strategy, partnership and resources, project life cycle management processes and project management key performance indicators, risk and security, communication between team members, human resource management, subcontractor management, overseas dependence (Shenhar et al., 1997; Bamberger, 1989; Lenfle, 2012; Makarova and Sokolova, 2014; Shek and Yu, 2012; Lauras et al., 2010; Olsson and Bull-Berg, 2015; Baccarini, 1999; Cao and Hoffman, 2011; Locatelli et al., 2014; Polydoropoulou and Roumboutsos, 2009; Albert et al., 2017).
Ex post: financial performance – meeting cost/budget goals (Sheffield and Lemétayer, 2013; Muller et al., 2012; Ika, 2009; Söderlund, 2004; Chua et al., 1999; Atkinson, 1999; Davis, 2014; Naoum, 1994; Brown and Adams, 2000; Cheung et al., 2000; Wang and Huang, 2006; Diallo and Thuillier, 2004; Mishra et al., 2011; Beringer et al., 2013; Milosevic and Patanakul, 2005; Andersen et al., 2002; Müller and Turner, 2007; Hanisch and Wald, 2014; Ngacho and Das, 2014); project management performance – meeting time, quality and scope goals, productivity, efficient use of (project) resources, (project) management success, project efficiency, project manager’s efficiency, safety, site disputes, project peer rating, achieved level of quality targets, operational project life and quality improvements, features, performance, risk, completed work (Sheffield and Lemétayer, 2013; Muller et al., 2012; Ika, 2009; Söderlund, 2004; Chua et al., 1999; Atkinson, 1999; Davis, 2014; Naoum, 1994; Brown and Adams, 2000; Cheung et al., 2000; Wang and Huang, 2006; Diallo and Thuillier, 2004; Mishra et al., 2011; Beringer et al., 2013; Milosevic and Patanakul, 2005; Andersen et al., 2002; Müller and Turner, 2007; Hanisch and Wald, 2014).
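To illustrate how this mapping could function as the sorting device described earlier, the sketch below encodes a simplified subset of Table III as a lookup; the encoding and criterion phrasings are ours, for illustration only, and do not represent software proposed by the article:

# Simplified, illustrative encoding of Table III as a lookup:
# (subjectivity level, timing) -> example criteria. Structure and wording are illustrative only.
criteria_map = {
    ("objective", "ex ante"):  ["financial performance (NPV, IRR, CBA)", "planning process quality"],
    ("objective", "interim"):  ["schedule/cost performance", "earned value indicators"],
    ("objective", "ex post"):  ["meeting time, cost, quality and scope goals"],
    ("subjective", "ex ante"): ["quality of stakeholder participation", "expected social effects"],
    ("subjective", "interim"): ["customer satisfaction", "staff learning and cooperation"],
    ("subjective", "ex post"): ["long-term impact", "environmental and social sustainability"],
}

def suggest_criteria(subjectivity, timing):
    """Return example criteria for a given subjectivity level and evaluation timing."""
    return criteria_map.get((subjectivity, timing), [])

# A practitioner evaluating a completed project on its long-term effects:
print(suggest_criteria("subjective", "ex post"))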
How, then, can the use of such a wide range of project evaluation criteria by
practitioners be interpreted? Next, we elaborate on why projects are evaluated in some ways and not in
others.
Discussion
The examination of the criteria used to evaluate projects indicates that a wide variety of criteria,
following diverse world-view assumptions and focusing on particular time frames, does exist.
On one side, there seems to be agreement in the project evaluation literature regarding the use of
objective criteria to evaluate project efficiency, efficacy and partly, impact. On the other side,
there seems to be no agreement on how to evaluate subjective aspects, including what
indicators to use, how and when. This issue is connected to the predominant world-view used to
evaluate projects – positivistic. Shenhar and Dvir (2007) for example, proposed a conceptual
project evaluation framework combining objective and subjective criteria and, considering
short, medium and long-term impacts. Their framework links project outcomes with
competitive advantage, and includes: efficiency; impact on customers; business success; and
preparing for the future. While these approaches are multidimensional and constitute important
advances in the project evaluation literature, they still neither explicitly consider nor address
most of the challenges of evaluating subjective aspects of projects, and they adopt a positivist bias.
This shows the fragmented character of the project evaluation field and the unfeasibility
of a unified framework that accounts for most of the challenges of project evaluation processes.
Instead, it is necessary to recognize the pluralistic voices emerging from project evaluation
practice that continuously (re)construct meaning of diverse forms of evaluation. This means
that evaluation is per se a reflexive practice that fosters and supports adaptive changes, as
well as learning through the continuous generation of feedback loops among the evaluator,
the management team and, other stakeholders (Anzoise and Sardo, 2016; Todorović et al.,
2015). Reflexivity is the “systematic exploration of the unthought categories of thought,
which delimit the thinkable and predetermine the thought” (Lessard, 2007, p. 1760).
Within this line of thinking it is necessary to question the underlying uses of project
evaluation approaches. While we have already pinpointed the important role played by the
context in which a project performs in determining the suitable project evaluation approach to
apply, there is still a need to understand why projects are evaluated in some ways and not in
other ways.
In order to elaborate on this crucial point, we use Barbara Czarniawska’s (2014) observation
regarding the logic of statements made by practitioners when “organizing” projects. Drawing
on Czarniawska (2014), it is possible to suggest that projects are evaluated along three
logics: the logic of theory, in which rational arguments are deployed to sustain, for example,
the feasibility of a project (ex ante). The logic of practice, in which the situation drives actions
deployed. It is concrete (situated in time and space) and usually applies post-fact narrative for
evaluating projects. The logic of representation, as the name suggests, aims at constructing an
official representation of the outcomes of the project. It combines formal rationality (logic of
theory) with concrete examples (logic of practice) in order to construct an image that presents
project outcomes in an acceptable way to the established institutional order – that is,
acceptable for key stakeholders. A narrative methodological strategy (Czarniawska, 2004) is
likely to assist the project evaluation team to perceive project outcomes from different
perspectives and to make sense of contradictory outcomes and interpretations that are likely to
emerge over time. In short, the logic of representation “demands a kind of imitation of the logic
of theory, legitimated by the claim that it originated in the logic of practice” (Czarniawska,
2014, p. 11).
We can then say that projects performing in stable context situations are evaluated
adhering to objectivist views (e.g. tactical and goal-oriented evaluation) and follow a logic
of theory to produce their arguments. Differently, projects performing in dynamic context
situations are usually evaluated following subjectivist views (e.g. strategic and goal-free
evaluation) and follow either a logic of practice or a logic of representation to back
their arguments.
In order to shed further light on the world of project evaluation, in the following
paragraphs we highlight three aspects that help to understand the character and challenges
of project evaluation practices. We also outline avenues to address some of those challenges.
Conclusion
In this study, the project evaluation literature was reviewed and reframed considering two
dimensions that permeate the wide range of criteria used to evaluate projects – the world
views associated with project evaluation criteria and the timing of criteria application. This
enabled the examination of the logics underlying diverse project evaluation approaches, as
well as the highlighting of their strengths and challenges.
We categorized project evaluation criteria considering the degree of subjectivity of the
criteria and their timing. Five groups were derived based on the degree of subjectivity: efficiency,
effectiveness, business success, impact and sustainability. The examination of evaluation
criteria considering time led us to note that project evaluation practices are used in three
main time frames: ex ante, interim and ex post evaluation.
This examination revealed that both project management theory and practice suffer
from a lack of frameworks that consider the emerging and evolving temporality,
dynamism, subjectivity and complexity of projects. Current project evaluation approaches
are biased toward objectivist short-term evaluation of projects, overlooking the long-term impacts
of projects on society and the environment. The latter require subjective lenses to describe and
understand them.
Furthermore, there are aspects of the evaluation process that have been overlooked by
the literature. The role of non-human actors, such as machines, materials, conceptual tools
and their interactions with humans, within the project environment is a significant gap in
the current project evaluation literature. We suggest new conceptual approaches to
deal with some of the major challenges in the project evaluation field. Practice-based views,
narrative analysis and ANT are likely to be useful tools to better understand and cope with
the uncertainty and complexity of today’s projects.
This study contributes to advance theory on project evaluation. We have highlighted the
strengths and challenges of project evaluation. We concluded that current frameworks
developed for project evaluation fail to address the increasing complexity, temporality,
subjectivity and dynamism of today’s project environments. Moreover, the value of this research
lies in looking at evaluation practice beyond the dominant subjective-objective duality and
interpreting this process through a social construction lens. This new approach helps researchers
to better understand how projects are really being evaluated, via ongoing sense-making and
sense-giving processes between key actors, particularly evaluators and those who are evaluated
within this process.
Finally, it is important to return to the conspicuous question – why do projects fail? A
fundamental prior question that needs to be addressed is whether or not the
conceptual tools used to evaluate projects are adequate for the complexity surrounding
the contemporary evaluation of projects. It is then necessary to ask: how do we customize
project evaluation tools to align with the type of project we are evaluating? What are
the implicit assumptions (logics) that underpin the project evaluation tool at hand? How do
those taken-for-granted assumptions influence the outcomes of the evaluation process? Only
after the development of more sophisticated concepts and tools to evaluate projects will it be
possible to better understand why projects fail. This task is work in progress.
Note
1. Organisation for Economic Co-operation and Development.
References
Albert, M., Balve, P. and Spang, K. (2017), “Evaluation of project success: a structured literature
review”, International Journal of Managing Projects in Business, Vol. 10 No. 4, pp. 796-821.
Alderman, N. and Ivory, C. (2011), “Translation and convergence in projects: an organizational
perspective on project success”, Project Management Journal, Vol. 42 No. 5, pp. 17-30.
Almahmoud, E.S., Doloi, H.K. and Panuwatwanich, K. (2012), “Linking project health to project
performance indicators: multiple case studies of construction projects in Saudi Arabia”,
International Journal of Project Management, Vol. 30 No. 3, pp. 296-307.
Ancona, D.G., Goodman, P.S., Lawrence, B.S. and Tushman, M.L. (2001), “Time: a new research lens”,
Academy of Management Review, Vol. 26 No. 4, pp. 645-663.
Andersen, E.S., Dyrhaug, Q.X. and Jessen, S.A. (2002), “Evaluation of Chinese projects and comparison
with Norwegian projects”, International Journal of Project Management, Vol. 20 No. 8,
pp. 601-609.
Anzoise, V. and Sardo, S. (2016), “Dynamic systems and the role of evaluation: the case of the green
communities project”, Evaluation and Program Planning, Vol. 54 No. 1, pp. 162-172.
Arrow, K.J., Dasgupta, P. and Mäler, K.-G. (2003), “Evaluating projects and assessing sustainable
development in imperfect economies”, Environmental and Resource Economics, Vol. 26 No. 4,
pp. 647-685.
Atkinson, R. (1999), “Project management: cost, time and quality, two best guesses and a phenomenon,
its time to accept other success criteria”, International Journal of Project Management, Vol. 17
No. 6, pp. 337-342.
Baccarini, D. (1999), “The logical framework method for defining project success”, Project Management
Journal, Vol. 30 No. 4, pp. 25-32.
Bakker, R.M. (2010), “Taking stock of temporary organizational forms: a systematic review and
research agenda”, International Journal of Management Reviews, Vol. 12 No. 4, pp. 466-486.
Bamberger, M. (1989), “The monitoring and evaluation of public sector programs in Asia: why are
development programs monitored but not evaluated?”, Evaluation Review, Vol. 13 No. 3,
pp. 223-242.
Beringer, C., Jonas, D. and Kock, A. (2013), “Behavior of internal stakeholders in project portfolio
management and its impact on success”, International Journal of Project Management, Vol. 31
No. 6, pp. 830-846.
Bhalla, K., Li, Q., Duan, L., Wang, Y., Bishai, D. and Hyder, A.A. (2013), “The prevalence of speeding
and drink driving in two cities in China: a mid project evaluation of ongoing road safety
interventions”, Injury, Vol. 44 No. 4, pp. S49-S56.
Boltanski, L. and Thévenot, L. (2000), “The reality of moral expectations: a sociology of situated
judgement”, Philosophical Explorations, Vol. 3 No. 3, pp. 208-231.
Bower, D.C. and Finegan, A.D. (2009), “New approaches in project performance evaluation techniques”,
International Journal of Managing Projects in Business, Vol. 2 No. 3, pp. 435-444.
Brown, A. and Adams, J. (2000), “Measuring the effect of project management on construction outputs:
a new approach”, International Journal of Project Management, Vol. 18 No. 5, pp. 327-335.
Bryde, D.J. and Robinson, L. (2005), “Client versus contractor perspectives on project success criteria”,
International Journal of Project Management, Vol. 23 No. 8, pp. 622-629.
Bubshait, A.A. and Almohawis, S.A. (1994), “Evaluating the general conditions of a construction
contract”, International Journal of Project Management, Vol. 12 No. 3, pp. 133-136.
Bueno Cadena, P.C. and Vassallo Magro, J.M. (2015), “Setting the weights of sustainability criteria for
the appraisal of transport projects”, Transport, Vol. 30 No. 3, pp. 298-306.
Burrell, G. and Morgan, G. (2019), Sociological Paradigms and Organisational Analysis: Elements of the
Sociology of Corporate Life, Routledge, London.
Cao, Q. and Hoffman, J.J. (2011), “A case study approach for developing a project performance
evaluation system”, International Journal of Project Management, Vol. 29 No. 2, pp. 155-164.
Cetina, K.K., Schatzki, T.R. and Von Savigny, E. (Eds) (2005), The Practice Turn in Contemporary
Theory, Routledge, London.
Cha, H.S. and Kim, C.K. (2011), “Quantitative approach for project performance measurement on
building construction in South Korea”, KSCE Journal of Civil Engineering, Vol. 15 No. 8,
pp. 1319-1328.
Chan, A.P., Scott, D. and Lam, E.W. (2002), “Framework of success criteria for design/build projects”,
Journal of Management in Engineering, Vol. 18 No. 3, pp. 120-128.
Chang, A., Chih, Y.Y., Chew, E. and Pisarski, A. (2013), “Reconceptualising mega project success in
Australian defence: recognising the importance of value co-creation”, International Journal of
Project Management, Vol. 31 No. 8, pp. 1139-1153.
Chen, H.L. (2015), “Performance measurement and the prediction of capital project failure”,
International Journal of Project Management, Vol. 33 No. 6, pp. 1393-1404.
Cheng, M.Y., Huang, C.C. and Roy, A.F.V. (2013), “Predicting project success in construction using an
evolutionary Gaussian process inference model”, Journal of Civil Engineering and Management,
Vol. 19 No. S1, pp. S202-S211.
Cheng, M.Y., Tsai, H.C. and Sudjono, E. (2012), “Evolutionary fuzzy hybrid neural network for dynamic
project success assessment in construction industry”, Automation in Construction, Vol. 21 No. 1,
pp. 46-51.
Cheung, S.O., Tam, C.M., Ndekugri, I. and Harris, F.C. (2000), “Factors affecting clients’ project dispute
resolution satisfaction in Hong Kong”, Construction Management & Economics, Vol. 18 No. 3,
pp. 281-294.
Chianca, T. (2008), “The OECD/DAC criteria for international development evaluations: an assessment
and ideas for improvement”, Journal of Multidisciplinary Evaluation, Vol. 5 No. 9, pp. 41-51.
Chua, D.K.H., Kog, Y.C. and Loh, P.K. (1999), “Critical success factors for different project objectives”,
Journal of Construction Engineering and Management, Vol. 125 No. 3, pp. 142-150.
Cicmil, S. and Hodgson, D. (2006), “New possibilities for project management theory: a critical
engagement”, Project Management Journal, Vol. 37 No. 3, pp. 111-122.
Cicmil, S., Williams, T., Thomas, J. and Hodgson, D. (2006), “Rethinking project management: researching
the actuality of projects”, International Journal of Project Management, Vol. 24 No. 8, pp. 675-686.
Clegg, S. (2009), “Managing power in organizations: the hidden history of its constitution”, in Clegg, S.R.
and Haugaard, M. (Eds), The Sage Handbook of Power, Sage, Los Angeles, CA, pp. 310-331.
Corn, J.O., Byrom, E., Knestis, K., Matzen, N. and Thrift, B. (2012), “Lessons learned about collaborative
evaluation using the capacity for applying project evaluation (CAPE) framework with school
and district leaders”, Evaluation and Program Planning, Vol. 35 No. 4, pp. 535-542.
Courpasson, D., Golsorkhi, D. and Sallaz, J. (2012), “Rethinking power in organizations, institutions, and
markets: classical perspectives, current research and the future agenda”, Research in Sociology
of Organizations, Vol. 34 No. 1, pp. 1-20.
Crawford, P. and Bryce, P. (2003), “Project monitoring and evaluation: a method for enhancing the
efficiency and effectiveness of aid project implementation”, International Journal of Project
Management, Vol. 21 No. 5, pp. 363-373.
Crossan, M., Cunha, M.P.E., Vera, D. and Cunha, J. (2005), “Time and organizational improvisation”,
Academy of Management Review, Vol. 30 No. 1, pp. 129-145.
Cubí, E., Ortiz, J. and Salom, J. (2014), “Potential impact evaluation: an ex ante evaluation of the
Mediterranean buildings energy efficiency strategy”, International Journal of Sustainable
Energy, Vol. 33 No. 5, pp. 1000-1016.
Czarniawska, B. (2004), Narratives in Social Science Research, Sage, Los Angeles, CA.
Czarniawska, B. (2014), Social Science Research: From Field to Desk, Sage Publications, London.
Czarniawska, B. (2017), “Actor-network theory”, in Langley, A. and Tsoukas, H. (Eds), The Sage
Handbook of Process Organization Studies, Sage, London, pp. 160-173.
Davis, K. (2014), “Different stakeholder groups and their perceptions of project success”, International
Journal of Project Management, Vol. 32 No. 2, pp. 189-201.
de Oliveira Lacerda, R.T., Ensslin, L. and Ensslin, S.R. (2011), “A performance measurement view of IT
project management”, International Journal of Productivity and Performance Management,
Vol. 60 No. 2, pp. 132-151.
Delarue, J. and Cochet, H. (2013), “Systemic impact evaluation: a methodology for complex agricultural
development projects. The case of a contract farming project in Guinea”, The European Journal
of Development Research, Vol. 25 No. 5, pp. 778-796.
Dendena, B. and Corsi, S. (2015), “The environmental and social impact assessment: a further step
towards an integrated assessment process”, Journal of Cleaner Production, Vol. 108 No. 1,
pp. 965-977.
Diallo, A. and Thuillier, D. (2004), “The success dimensions of international development projects: the
perceptions of African project coordinators”, International Journal of Project Management,
Vol. 22 No. 1, pp. 19-31.
Doloi, H., Iyer, K.C. and Sawhney, A. (2011), “Structural equation model for assessing impacts of
contractor’s performance on project success”, International Journal of Project Management,
Vol. 29 No. 6, pp. 687-695.
Dvir, D., Raz, T. and Shenhar, A.J. (2003), “An empirical analysis of the relationship between project
planning and project success”, International Journal of Project Management, Vol. 21 No. 2, pp. 89-95.
Dvir, D., Sadeh, A. and Malach-Pines, A. (2006), “Projects and project managers: the relationship
between project manager’s personality, project, project types, and project success”, Project
Management Journal, Vol. 37 No. 5, pp. 36-48.
Eder, B., Kang, D., Mathur, R., Yu, S. and Schere, K. (2006), “An operational evaluation of the Eta–
CMAQ air quality forecast model”, Atmospheric Environment, Vol. 40 No. 26, pp. 4894-4905.
Eduardo Yamasaki Sato, C. and de Freitas Chagas, M. Jr (2014), “When do megaprojects start and
finish? Redefining project lead time for megaproject success”, International Journal of Managing
Projects in Business, Vol. 7 No. 4, pp. 624-637.
Eriksen, S. and Lensink, R. (2015), “Measuring the impact of an ongoing microcredit project: evidence
from a study in Ghana”, Journal of Development Effectiveness, Vol. 7 No. 4, pp. 519-529.
Fleming, P. and Spicer, A. (2014), “Power in management and organization science”, The Academy of
Management Annals, Vol. 8, pp. 237-298.
Freeman, M. and Beale, P. (1992), “Measuring project success”, Project Management Institute,
Newtown Square, PA.
Frinsdorf, O., Zuo, J. and Xia, B. (2014), “Critical factors for project efficiency in a defence environment”,
International Journal of Project Management, Vol. 32 No. 5, pp. 803-814.
Gherardi, S. (2012), How to Conduct a Practice-based Study: Problems and Methods, Edward Elgar,
Cheltenham.
Guba, E.G. and Lincoln, Y.S. (1989), Fourth Generation Evaluation, Sage Publications, New Bury Park, CA.
Haass, O. (2018), “Resilience and emergence in contemporary project evaluation: an epic battle between
professional and political logics”, doctoral dissertation, Griffith University, Brisbane.
Hanisch, B. and Wald, A. (2014), “Effects of complexity on the success of temporary organizations: relationship quality and transparency as substitutes for formal coordination mechanisms”, Scandinavian Journal of Management, Vol. 30 No. 2, pp. 197-213.
Henriksen, B. and Røstad, C.C. (2010), “Evaluating and prioritizing projects–setting targets”,
International Journal of Managing Projects in Business, Vol. 3 No. 2, pp. 275-291.
Heravi, G. and Ilbeigi, M. (2012), “Development of a comprehensive model for construction project
success evaluation by contractors”, Engineering, Construction and Architectural Management,
Vol. 19 No. 5, pp. 526-542.
Holvoet, N. and Renard, R. (2003), “Desk screening of development projects: is it effective?”, Evaluation,
Vol. 9 No. 2, pp. 173-191.
Huber, G.P. and Daft, R.L. (1987), “The information environments of organizations”, in Jablin, F.M.,
Putnam, L.L., Roberts, K.H. and Porter, L.W. (Eds), Handbook of Organizational Communication:
An Interdisciplinary Perspective, Sage Publications, Thousand Oaks, CA, pp. 130-164.
Hwang, B.G., Zhao, X. and Ng, S.Y. (2013), “Identifying the critical factors affecting schedule
performance of public housing projects”, Habitat International, Vol. 38 No. 1, pp. 214-221.
Ika, L.A. (2009), “Project success as a topic in project management journals”, Project Management
Journal, Vol. 40 No. 4, pp. 6-19.
Ika, L.A., Diallo, A. and Thuillier, D. (2012), “Critical success factors for World Bank projects: an
empirical investigation”, International Journal of Project Management, Vol. 30 No. 1, pp. 105-116.
Ikpe, E., Kumar, J. and Jergeas, G. (2014), “Comparison of Alberta Industrial and Pipeline projects and
US projects performance”, American Journal of Industrial and Business Management, Vol. 4
No. 9, pp. 474-481.
Im, U., Bianconi, R., Solazzo, E., Kioutsioukis, I., Badia, A., Balzarini, A., Baró, R., Bellasio, R.,
Brunner, D., Chemel, C. and Curci, G. (2015), “Evaluation of operational on-line-coupled regional
air quality models over Europe and North America in the context of AQMEII phase 2. Part I:
Ozone”, Atmospheric Environment, Vol. 115 No. 1, pp. 404-420.
Irani, Z. and Love, P.E. (2002), “Developing a frame of reference for ex-ante IT/IS investment
evaluation”, European Journal of Information Systems, Vol. 11 No. 1, pp. 74-82.
Jugdev, K. and Müller, R. (2005), “A retrospective look at our evolving understanding of project
success”, Project Management Journal, Vol. 36 No. 4, pp. 19-31.
Jukić, T., Vintar, M. and Benčina, J. (2013), “Ex-ante evaluation: towards an assessment model of its
impact on the success of e-government projects”, Information Polity, Vol. 18 No. 4, pp. 343-361.
Kärnä, S. and Junnonen, J.M. (2016), “Benchmarking construction industry, company and project
performance by participants’ evaluation”, Benchmarking: An International Journal, Vol. 23 No. 7,
pp. 2092-2108.
Kärnä, S. and Junnonen, J.M. (2017), “Designers’ performance evaluation in construction projects”,
Engineering, Construction and Architectural Management, Vol. 24 No. 1, pp. 154-169.
Khan, Z., Ludlow, D. and Caceres, S. (2013), “Evaluating a collaborative IT based research and
development project”, Evaluation and Program Planning, Vol. 40 No. 1, pp. 27-41.
Koskinen, K.U. (2013), “Observation’s role in technically complex project implementation: the social
autopoietic system view”, International Journal of Managing Projects in Business, Vol. 6 No. 2,
pp. 349-364.
Krippendorff, K. (2004), Content Analysis: An Introduction to its Methodology, Sage, London.
Kumaraswamy, M.M. and Thorpe, A. (1996), “Systematizing construction project evaluations”, Journal
of Management in Engineering, Vol. 12 No. 1, pp. 34-39.
Laird, J.J. and Venables, A.J. (2017), “Transport investment and economic performance: a framework
for project appraisal”, Transport Policy, Vol. 56 No. 1, pp. 1-11.
Latour, B. (2005), Reassembling the Social: An Introduction to Actor-Network-Theory, Oxford University
Press, Oxford.
Latour, B. (2012), We have never been Modern, Harvard University Press, Cambridge, MA.
Lauras, M., Marques, G. and Gourc, D. (2010), “Towards a multi-dimensional project performance
measurement system”, Decision Support Systems, Vol. 48 No. 2, pp. 342-353.
Lenfle, S. (2012), “Exploration, project evaluation and design theory: a rereading of the Manhattan
case”, International Journal of Managing Projects in Business, Vol. 5 No. 3, pp. 486-507.
Lessard, C. (2007), “Complexity and reflexivity: two important issues for economic evaluation in health
care”, Social Science and Medicine, Vol. 64 No. 8, pp. 1754-1765.
Lim, C.S. and Mohamed, M.Z. (1999), “Criteria of project success: an exploratory re-examination”,
International Journal of Project Management, Vol. 17 No. 4, pp. 243-248.
Linde, A. and Linderoth, H. (2006), “An actor network theory perspective on IT projects”, in Hodgson, D.
and Cicmil, S. (Eds), Making Projects Critical, Palgrave Macmillan, New York, NY, pp. 155-170.
Liu, A.M.M. and Walker, A. (1998), “Evaluation of project outcomes”, Construction Management and
Economics, Vol. 16 No. 2, pp. 209-219.
Liu, L., Borman, M. and Gao, J. (2014), “Delivering complex engineering projects: reexamining
organizational control theory”, International Journal of Project Management, Vol. 32 No. 5,
pp. 791-802.
Locatelli, G., Mancini, M. and Romano, E. (2014), “Systems engineering to improve the governance in
complex project environments”, International Journal of Project Management, Vol. 32 No. 8,
pp. 1395-1410.
Loch, C.H., DeMeyer, A. and Pich, M. (2011), Managing the Unknown: A new Approach to Managing
High Uncertainty and Risk in Projects, John Wiley & Sons, Hoboken, NJ.
McLeod, L., Doolin, B. and MacDonell, S.G. (2012), “A perspective-based understanding of project
success”, Project Management Journal, Vol. 43 No. 5, pp. 68-86.
Makarova, E. and Sokolova, A. (2014), “Foresight evaluation: lessons from project management”,
Foresight, Vol. 16 No. 1, pp. 75-91.
Maloney, C. (1990), “Environmental and project displacement of population in India. Part 1: development
and deracination”, Reports Universities Field Staff International, No. 14, San Francisco, CA.
Marsh, J. (1978), “The goal-oriented approach to evaluation: critique and case study from drug abuse
treatment”, Evaluation and Program Planning, Vol. 1 No. 1, pp. 41-49.
Masrom, M.A.N., Rahim, M.H.I.A., Mohamed, S., Chen, G.K. and Yunus, R. (2015), “Successful criteria
for large infrastructure projects in Malaysia”, Procedia Engineering, Vol. 125 No. 1, pp. 143-149.
Maylor, H., Brady, T., Cooke-Davies, T. and Hodgson, D. (2006), “From projectification to
programmification”, International Journal of Project Management, Vol. 24 No. 8, pp. 663-674.
Mazur, A., Pisarski, A., Chang, A. and Ashkanasy, N.M. (2014), “Rating defence major project success:
the role of personal attributes and stakeholder relationships”, International Journal of Project
Management, Vol. 32 No. 6, pp. 944-957.
Menches, C.L. and Hanna, A.S. (2006), “Quantitative measurement of successful performance from the
project manager’s perspective”, Journal of Construction Engineering and Management, Vol. 132
No. 12, pp. 1284-1293.
Milosevic, D. and Patanakul, P. (2005), “Standardized project management may increase development
projects success”, International Journal of Project Management, Vol. 23 No. 3, pp. 181-192.
Mir, F.A. and Pinnington, A.H. (2014), “Exploring the value of project management: linking project
management performance and project success”, International Journal of Project Management,
Vol. 32 No. 2, pp. 202-217.
Mishra, P., Dangayach, G.S. and Mittal, M.L. (2011), “An ethical approach towards sustainable project
success”, Procedia-Social and Behavioral Sciences, Vol. 25 No. 1, pp. 338-344.
Molenaar, K.R., Javernick-Will, A., Bastias, A.G., Wardwell, M.A. and Saller, K. (2012), “Construction
project peer reviews as an early indicator of project success”, Journal of Management in
Engineering, Vol. 29 No. 4, pp. 327-333.
Mollaoglu-Korkmaz, S., Swarup, L. and Riley, D. (2011), “Delivering sustainable, high-performance buildings: influence of project delivery methods on integration and project outcomes”, Journal of Management in Engineering, Vol. 29 No. 1, pp. 71-78.
Müller, R. and Turner, R. (2007), “The influence of project managers on project success criteria and
project success by type of project”, European Management Journal, Vol. 25 No. 4, pp. 298-309.
Müller, R., Geraldi, J. and Turner, J.R. (2012), “Relationships between leadership and success in different
types of project complexities”, IEEE Transactions on Engineering Management, Vol. 59 No. 1,
pp. 77-90.
Najmi, M., Ehsani, R., Sharbatoghlie, A. and Saidi-Mehrabad, M. (2009), “Developing an integrated
dynamic model for evaluating the performance of research projects based on multiple attribute
utility theory”, Journal of Modelling in Management, Vol. 4 No. 2, pp. 114-133.
Naoum, S.G. (1994), “Critical analysis of time and cost of management and traditional contracts”,
Journal of Construction Engineering and Management, Vol. 120 No. 4, pp. 687-705.
Ngacho, C. and Das, D. (2014), “A performance evaluation framework of development projects: an
empirical study of constituency development fund (CDF) construction projects in Kenya”,
International Journal of Project Management, Vol. 32 No. 3, pp. 492-507.
Nicolini, D. (2012), Practice Theory, Work, and Organization: An Introduction, Oxford University Press,
Oxford.
Olsson, N.O. and Bull-Berg, H. (2015), “Use of big data in project evaluations”, International Journal of
Managing Projects in Business, Vol. 8 No. 3, pp. 491-512.
Oviedo-García, M.Á. (2016), “Ex ante evaluation of interdisciplinary research projects: a literature
review”, Social Science Information, Vol. 55 No. 4, pp. 568-588.
Palmer, I. and Hardy, C. (1999), Thinking about Management: Implications of Organizational Debates
for Practice, Sage, London.
Parfitt, M.K. and Sanvido, V.E. (1993), “Checklist of critical success factors for building projects”,
Journal of Management in Engineering, Vol. 9 No. 3, pp. 243-249.
Parra-López, C., Groot, J.C., Carmona-Torres, C. and Rossing, W.A. (2009), “An integrated approach for
ex-ante evaluation of public policies for sustainable agriculture at landscape level”, Land Use
Policy, Vol. 26 No. 4, pp. 1020-1030.
Pinto, J.K. and Slevin, D.P. (1988), “Project success: definitions and measurement techniques”, Project
Management Journal, Vol. 19 No. 1, pp. 67-73.
Pisarski, A., Chang, A., Ashkanasy, N., Zolin, R., Mazur, A.K., Jordan, P. and Hatcher, C.A. (2011), “The
contribution of leadership attributes to large scale, complex project success”, 2011 Academy of
Management Annual Meeting Proceedings, Academy of Management, San Antonio, TX, August.
Polydoropoulou, A. and Roumboutsos, A. (2009), “Evaluating the impact of decision making during
construction on transport project outcome”, Evaluation and Program Planning, Vol. 32 No. 4,
pp. 369-380.
Prakash, K. and Nandhini, N. (2015), “Evaluation of factors affecting construction project performance
management”, Evaluation, Vol. 3 No. 4, pp. 1-5.
Ramezani, F. and Lu, J. (2014), “An intelligent group decision-support system and its application for
project performance evaluation”, Journal of Enterprise Information Management, Vol. 27 No. 3,
pp. 278-291.
Raschke, R.L. and Sen, S. (2013), “A value-based approach to the ex-ante evaluation of IT enabled
business process improvement projects”, Information & Management, Vol. 50 No. 7, pp. 446-456.
Riggs, J.L., Goodman, M., Finley, R. and Miller, T. (1992), “A decision support system for predicting
project success”, Project Management Journal, Vol. 22 No. 3, pp. 37-43.
Rolstadås, A., Tommelein, I., Morten Schiefloe, P. and Ballard, G. (2014), “Understanding project
success through analysis of project management approach”, International Journal of Managing
Projects in Business, Vol. 7 No. 4, pp. 638-660.
Rosacker, K.M. and Olson, D.L. (2008), “An empirical assessment of IT project selection and evaluation methods in state government”, Project Management Journal, Vol. 39 No. 1, pp. 49-58.
Samset, K. and Christensen, T. (2017), “Ex ante project evaluation and the complexity of early decision-
making”, Public Organization Review, Vol. 17 No. 1, pp. 1-17.
Sato, E.Y.C. and Chagas, F.C.M. (2014), “When do megaprojects start and finish? Redefining project
lead time for megaproject success”, International Journal of Managing Projects in Business,
Vol. 7 No. 4, pp. 624-637.
Serrador, P. and Pinto, J.K. (2015), “Does Agile work? – A quantitative analysis of agile project success”,
International Journal of Project Management, Vol. 33 No. 5, pp. 1040-1051.
Serrador, P. and Turner, R. (2015), “The relationship between project success and project efficiency”,
Project Management Journal, Vol. 46 No. 1, pp. 30-39.
Shao, J., Müller, R. and Turner, J.R. (2012), “Measuring program success”, Project Management Journal,
Vol. 43 No. 1, pp. 37-49.
Sheffield, J. and Lemétayer, J. (2013), “Factors associated with the software development agility of
successful projects”, International Journal of Project Management, Vol. 31 No. 3, pp. 459-472.
Shek, D.T. and Yu, L. (2012), “Longitudinal impact of the project PATHS on adolescent risk behavior:
what happened after five years?”, The Scientific World Journal, Vol. 2012 No. 1, pp. 1-13.
Shenhar, A.J. and Dvir, D. (2007), Reinventing Project Management: The Diamond Approach to
Successful Growth and Innovation, Harvard Business Review Press, Boston, MA.
Shenhar, A.J., Levy, O. and Dvir, D. (1997), “Mapping the dimensions of project success”, Project
Management Journal, Vol. 28 No. 2, pp. 5-13.
Snowden, D.J. and Boone, M.E. (2007), “A leader’s framework for decision making”, Harvard Business
Review, Vol. 85 No. 11, pp. 68-77.
Söderlund, J. (2004), “Building theories of project management: past research, questions for the future”,
International Journal of Project Management, Vol. 22 No. 3, pp. 183-191.
Spender, J.C. and Grant, R.M. (1996), “Knowledge and the firm: overview”, Strategic Management
Journal, Vol. 17 No. S2, pp. 5-9.
Tatikonda, M.V. and Rosenthal, S.R. (2000), “Technology novelty, project complexity, and product
development project execution success: a deeper look at task uncertainty in product innovation”,
IEEE Transactions on Engineering Management, Vol. 47 No. 1, pp. 74-87.
Tayler, C.J. (1992), “Ethyl Benzene project: the client’s perspective”, International Journal of Project
Management, Vol. 10 No. 3, pp. 175-178.
Todorović, M.L., Petrović, D.Č., Mihić, M.M., Obradović, V.L. and Bushuyev, S.D. (2015), “Project
success analysis framework: a knowledge-based approach in project management”,
International Journal of Project Management, Vol. 33 No. 4, pp. 772-783.
Tohumcu, Z. and Karasakal, E. (2010), “R&D project performance evaluation with multiple and
interdependent criteria”, IEEE Transactions on Engineering Management, Vol. 57 No. 4, pp. 620-633.
Turner, J.R. and Müller, R. (2006), Choosing Appropriate Project Managers: Matching their Leadership
Style to the Type of Project, Project Management Institute, Newtown Square, PA.
Turner, R. and Zolin, R. (2012), “Forecasting success on large projects: developing reliable scales to
predict multiple perspectives by multiple stakeholders over multiple time frames”, Project
Management Journal, Vol. 43 No. 5, pp. 87-99.
Turner, R., Zolin, R. and Remington, K. (2009), “Monitoring the performance of complex projects from
multiple perspectives over multiple time frames”, Proceedings of the 9th International Research
Network of Project Management Conference, Berlin, December.
Van Buuren, A. and Nooteboom, S. (2009), “Evaluating strategic environmental assessment in the
Netherlands: content, process and procedure as indissoluble criteria for effectiveness”, Impact
Assessment and Project Appraisal, Vol. 27 No. 2, pp. 145-154.
Van Wee, B. (2012), “How suitable is CBA for the ex-ante evaluation of transport projects and policies?
A discussion from the perspective of ethics”, Transport Policy, Vol. 19 No. 1, pp. 1-7.
Van Wee, B. and Roeser, S. (2013), “Ethical theories and the cost–benefit analysis-based ex ante evaluation of transport policies and plans”, Transport Reviews, Vol. 33 No. 6, pp. 743-760.
Veillard, J.H.M., Schiøtz, M.L., Guisset, A.L., Brown, A.D. and Klazinga, N.S. (2013), “The PATH project in eight European countries: an evaluation”, International Journal of Health Care Quality Assurance, Vol. 26 No. 8, pp. 703-713.
Wang, X. and Huang, J. (2006), “The relationships between key stakeholders’ project performance and
project success: perceptions of Chinese construction supervising engineers”, International
Journal of Project Management, Vol. 24 No. 3, pp. 253-260.
Weick, K.E. and Sutcliffe, K.M. (2007), Managing the Unexpected: Resilient Performance in an Age of
Uncertainty, John Wiley and Sons, San Francisco, CA.
Weick, K.E., Sutcliffe, K.M. and Obstfeld, D. (2005), “Organizing and the process of sensemaking”,
Organization Science, Vol. 16 No. 4, pp. 409-421.
Westerveld, E. (2003), “The Project Excellence Model®: linking success criteria and critical success
factors”, International Journal of Project Management, Vol. 21 No. 6, pp. 411-418.
Whitty, S.J. and Maylor, H. (2009), “And then came complex project management (revised)”,
International Journal of Project Management, Vol. 27 No. 3, pp. 304-310.
Willar, D. (2017), “Developing attributes for evaluating construction project-based performance”, The
TQM Journal, Vol. 29 No. 2, pp. 369-384.
Wohlin, C. and Andrews, A.A. (2001), “Assessing project success using subjective evaluation factors”,
Software Quality Journal, Vol. 9 No. 1, pp. 43-70.
Xu, Y. and Yeh, C.H. (2014), “A performance-based approach to project assignment and performance
evaluation”, International Journal of Project Management, Vol. 32 No. 2, pp. 218-228.
Zhang, L. and Fan, W. (2013), “Improving performance of construction projects: a project manager’s
emotional intelligence approach”, Engineering, Construction and Architectural Management,
Vol. 20 No. 2, pp. 195-207.
Zvingule, L., Kalnins, S.N., Blumberga, D., Gusca, J., Bogdanova, M. and Muizniece, I. (2013), “Improved project management via advancement in evaluation methodology of regional cooperation environmental projects”, Environmental and Climate Technologies (Scientific Journal of Riga Technical University), Vol. 11 No. 1, pp. 57-67.
Further reading
Antoniadis, D.N., Edum-Fotwe, F.T. and Thorpe, A. (2011), “Socio-organo complexity and project
performance”, International Journal of Project Management, Vol. 29 No. 7, pp. 808-816.
Bredillet, C.N. (2010), “Blowing hot and cold on project management”, Project Management Journal,
Vol. 41 No. 3, pp. 4-20.
Cooke-Davies, T. (2002), “The ‘real’ success factors on projects”, International Journal of Project
Management, Vol. 20 No. 3, pp. 185-190.
Davies, A. and Mackenzie, I. (2014), “Project complexity and systems integration: constructing the
London 2012 Olympics and Paralympics games”, International Journal of Project Management,
Vol. 32 No. 5, pp. 773-790.
Davies, A., Gann, D. and Douglas, T. (2009), “Innovation in megaprojects: systems integration at
London Heathrow Terminal 5”, California Management Review, Vol. 51 No. 2, pp. 101-125.
Hällgren, M. and Wilson, T.L. (2008), “The nature and management of crises in construction projects:
projects-as-practice observations”, International Journal of Project Management, Vol. 26 No. 8,
pp. 830-838.
Hanisch, B. and Wald, A. (2011), “A project management research framework integrating multiple
theoretical perspectives and influencing factors”, Project Management Journal, Vol. 42 No. 3,
pp. 4-22.
Jaafari, A. (2003), “Project management in the age of complexity and change”, Project Management
Journal, Vol. 34 No. 4, pp. 47-58.
Jepsen, A.L. and Eskerod, P. (2009), “Stakeholder analysis in projects: challenges in using current guidelines in the real world”, International Journal of Project Management, Vol. 27 No. 4,
pp. 335-343.
Lebcir, M. and Choudrie, J. (2011), “A dynamic model of the effects of project complexity on time to
complete construction projects”, International Journal of Innovation, Management and
Technology, Vol. 2 No. 6, pp. 477-483.
Maamar, Z., Mostefaoui, S.K. and Yahyaoui, H. (2005), “Toward an agent-based and context-oriented
approach for web services composition”, IEEE Transactions on Knowledge and Data
Engineering, Vol. 17 No. 5, pp. 686-697.
PMI (2013), A Guide to the Project Management Body of Knowledge (PMBOK® Guide), 5th ed., Project
Management Institute Incorporated, Newtown Square, PA.
Pollack, J. (2007), “The changing paradigms of project management”, International Journal of Project
Management, Vol. 25 No. 3, pp. 266-274.
Power, M. (1997), The Audit Society: Rituals of Verification, Oxford University Press, Oxford.
Radcliff, D.H. (2003), “Project evaluation and remediation”, Cost Engineering, Vol. 45 No. 1, pp. 18-23.
Saynisch, M. (2010a), “Beyond frontiers of traditional project management: an approach to
evolutionary, self-organizational principles and the complexity theory – results of the research
program”, Project Management Journal, Vol. 41 No. 2, pp. 21-37.
Saynisch, M. (2010b), “Mastering complexity and changes in projects, economy, and society via project
management second order (PM-2)”, Project Management Journal, Vol. 41 No. 5, pp. 4-20.
Schatzki, T. (2001), “Practice mind-ed orders”, in Schatzki, T., Knorr-Cetina, K. and von Savigny, E.
(Eds), The Practice Turn in Contemporary Theory, Routledge, London, pp. 42-55.
Shenhar, A.J. (2001), “One size does not fit all projects: exploring classical contingency domains”,
Management Science, Vol. 47 No. 3, pp. 394-414.
Shenhar, A.J. and Dvir, D. (1996), “Toward a typological theory of project management”, Research
Policy, Vol. 25 No. 4, pp. 607-632.
Singh, H. and Singh, A. (2002), “Principles of complexity and chaos theory in project execution: a new
approach to management”, Cost Engineering, Vol. 44 No. 12, pp. 23-33.
Söderholm, A. (2008), “Project management of unexpected events”, International Journal of Project
Management, Vol. 26 No. 1, pp. 80-86.
Thompson, R.J. (1991), “Facilitating commitment, consensus, credibility, and visibility through
collaborative foreign assistance project evaluations”, Evaluation and Program Planning, Vol. 14
No. 4, pp. 341-350.
Turner, J.R. (2009), The Handbook of Project-based Management, 3rd ed., McGraw-Hill Publishing,
Glasgow.
Vidal, L.-A., Marle, F. and Bocquet, J.-C. (2011a), “Measuring project complexity using the analytic
hierarchy process”, International Journal of Project Management, Vol. 29 No. 6, pp. 718-727.
Vidal, L.-A., Marle, F. and Bocquet, J.-C. (2011b), “Using a Delphi process and the analytic hierarchy
process (AHP) to evaluate the complexity of projects”, Expert Systems with Applications, Vol. 38
No. 5, pp. 5388-5405.
Williams, T. (1999), “The need for new paradigms for complex projects”, International Journal of
Project Management, Vol. 17 No. 5, pp. 269-273.
Williams, T. (2002), Modelling Complex Projects, John Wiley & Sons, Chichester.
Williams, T. (2005), “Assessing and moving on from the dominant project management discourse in the
light of project overruns”, IEEE Transactions on Engineering Management, Vol. 52 No. 4,
pp. 497-508.
Williams, T., Jonny Klakegg, O., Walker, D.H.T., Andersen, B. and Morten Magnussen, O. (2012),
“Identifying and acting on early warning signs in complex projects”, Project Management
Journal, Vol. 43 No. 2, pp. 37-53.
Xia, B. and Chan, A.P.C. (2012), “Measuring complexity for building projects: a Delphi study”, Engineering, Construction and Architectural Management, Vol. 19 No. 1, pp. 7-24.
Xia, W. and Lee, G. (2004), “Grasping the complexity of IS development projects”, Communications of
the ACM, Vol. 47 No. 5, pp. 68-74.
Xia, W. and Lee, G. (2005), “Complexity of information systems development projects:
conceptualization and measurement development”, Journal of Management Information
Systems, Vol. 22 No. 1, pp. 45-83.
Corresponding author
Omid Haass can be contacted at: [email protected]