Using The Delphi Technique in Normative Planning Research
doi:10.1068/a39267
Nick Novakowski
Department of Geography/Environmental Studies, Memorial University of Newfoundland,
Corner Brook, NL A2H 6P9, Canada; e-mail: [email protected]
Barry Wellar
Department of Geography, University of Ottawa, Ottawa, ON K1N 6N5, Canada;
e-mail: [email protected]
Received 15 August 2006; in revised form 25 September 2006; published online 3 April 2008
Introduction
Delphi is the site where the most revered oracles of ancient Greece formulated their
predictions about the future (De Boer and Hale, 2000). In contrast to the predictions
provided by the Pythia or priestesses, however, the research results derived from
use of the Delphi technique are driven by methodological design. While instances
of the use of the Delphi technique are evident in many disciplines, details regarding
the employment of the technique in the planning literature have been scant since the
1980s. To address this void, we briefly outline the steps of the Delphi technique, and
present methodological design findings from a real-world application of the technique
used to derive the normative characteristics of high-quality plans generated
within the framework of ecosystem-based planning. The paper concludes with lessons
learned from that real-world exercise, which will be the subject of a companion
paper.
explored the benefits and costs of the program in terms of the improved management
of hazard areas, protection of significant environmental features, higher costs for
developers, and other policy elements (De Loë and Wojtanowski, 2001).
As a research process, each Delphi category shares the same overall structure in
terms of group communication, anonymity, iteration, and central tendency. However,
due to differences in research objectives and/or application domains (identification of
preferred futures, forecasting, or policy alternatives), a choice must be made about
which type of Delphi exercise is appropriate for a particular planning research issue
or problem.
Figure 1. Stages of the Delphi process (arrows connecting the steps omitted).
Step 2. Pretest: ensure that the Delphi is the most appropriate research instrument.
Step 3. Preparation of draft background report and survey.
Step 4. Identification of potential participants: identify potential trial-run participants; identify potential Delphi panel members.
Step 5. Telephone or e-mail contact and interviews: select trial-run candidates; select Delphi panel members.
Step 6. Trial run: investigate viability of the draft background report and draft survey.
Step 8. Round 1: initial distribution of background report and survey to panel.
Step 9. Incorporation of feedback from round 1: incorporate new variables from open-ended questions; tabulate round-1 results; reword and refine the survey; stabilize the conceptual hierarchy.
Step 10. Round 2: redistribution of the survey.
Step 12. Final tabulation of responses.
Step 14. Anonymous post-Delphi survey.
Step 15. Dissemination of research results: provide results to client; send results to panel members.
Providing panel members with a diagram that details how the stages of the process
are intended to unfold proved very helpful in our Delphi exercise (see figure 1).
The draft survey (which contains the actual propositions or statements being
investigated) accompanying the draft background report can later be treated as
either a full-blown round or as
an information-seeking round called a round 0 (Helmer, 1983). A round 0 is employed
to expose the Delphi subject matter to expert brainstorming and discourse. After panel
selection in the next step, a decision about whether or not to use a round 0 is required.
Due to the complexity of some planning issues and the large number of details that
can be involved in each round, a round 0 (which does not count as a formal round
since responses are not counted or tabulated) can constitute an extra time imposition
on panel members and therefore contribute to panel fatigue. This is a particularly
important consideration if panelists are not being paid. If the exercise is
straightforward, and/or the panelists are being paid, then a round 0 is in order. Conversely,
if the exercise is likely to be time-consuming and arduous, then a round 0 should likely
be eliminated. Alternative means of achieving the outcome of a round 1 without
engaging the panel include the use of expert interviews, focus groups, walk-through
and think-aloud exercises, and directed or targeted keyword-based literature searches
that contribute to expanding the survey instrument to include the full envelope of
policy items, problem solutions, or design features that need to be explored (Turoff,
personal communication, 28 August, 2006; Wellar and Vandermeulen, 2000).
Meanwhile, the wording of the survey questions or propositions follows standard
procedural guidelines for survey design. Variables are presented in plain language with
minimal complexity so that ambiguity is minimized. Brevity is important in order
to optimize clarity and speed. On the one hand, if the wording of the questions or
propositions is too concise, then excessive freestyle interpretation by the panelists can
result. On the other hand, if the questions or propositions are too lengthy, they may
require the assimilation and/or interpretation of too many dimensions, and this can lead
to confusion (Linstone and Turoff, 1975b, page 232). This concern is particularly relevant
in two situations: when selecting an international panel, as cultural uses of professional
terms and the idiomatic use of language may differ; and when designing a Delphi exercise
that involves participants with very different backgrounds, such as elected officials,
professional staff, and `regular citizens' (Wellar and Vandermeulen, 2000).
The challenge, therefore, is finding the compromise between all that could be said
and that which needs to be said to ensure that participants have a shared
understanding of the core meaning of the variables involved. Preparation of the draft background
report and initial survey brings the researcher to the point where state-of-the-art issues
have been identified and consideration of panel member selection can begin.
Simultaneously, since a heterogeneous panel is to be selected, the removal of ambiguity in
the draft background report and survey is crucial as the Delphi team will perceive the
subject matter from various perspectives.
Step 4: identification of potential participants in the Delphi panel
The first question concerns the size of the panel, which in turn will affect the
stringency of the selection criteria. Panel size can vary widely. In a general sense, Turoff
(personal communication, 1 June, 2006) writes that,
``The use of from three to five experts usually resulted in overlapping explanations.
Therefore most of us practising the technique take the topic and ask how many
different types of experts do we need to examine it from all relevant perspectives?
Multiply this by five and you have the total number that should be in the panel and
after you invite them if you have at least three in each category that have agreed
you might go with that.''
A critical consideration arising from Turoff's observation is the establishment of ``all the
relevant perspectives'', which in turn indicates the scope of the Delphi and the size of
the panel. More specifically, for planning-related issues or missions, the literature suggests
that a ``typical Delphi panel has about 8 to 12 members'' (Cavalli-Sforza and Ortolano,
1984, page 325). A similar guideline is suggested by Richey et al (1985, page 142), who state
that ``a small panel (e.g., eight) would be sufficient to develop appropriate consensus views.''
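Turoff's rule of thumb quoted above amounts to a small calculation. The following Python sketch illustrates it; the function names, and the use of three confirmed experts per category as the fallback threshold, are our rendering of his remarks rather than an established formula:

```python
def panel_size(expert_categories, invite_factor=5):
    """Turoff's heuristic: decide how many distinct types of expert the
    topic requires, then invite about five of each."""
    return len(expert_categories) * invite_factor

def panel_is_viable(confirmed_by_category, min_per_category=3):
    """Proceed once at least three invitees in every category have agreed."""
    return all(n >= min_per_category for n in confirmed_by_category.values())

# Hypothetical mix of perspectives for a transportation-planning Delphi
categories = ["planners", "engineers", "public health", "elected officials"]
invites = panel_size(categories)  # 4 perspectives x 5 invitations = 20
viable = panel_is_viable({"planners": 3, "engineers": 4,
                          "public health": 3, "elected officials": 5})
```

The heuristic makes explicit that coverage of perspectives, not raw headcount, drives panel size.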
As demonstrated by Wellar's work on the Walking Security Index project, however,
the numbers for a Delphi transportation planning project can justifiably reach well
into double figures. Based on his client-driven study, the mix of potential panelists
could include elected officials, community association leaders, an array of citizens
(seniors, adults, youth, children, pedestrians, cyclists, and so on), and professionals
from planning, engineering, public health, and law enforcement (Wellar, 1997; Wellar
and Vandermeulen, 2000). The research undertaken by Wellar underscores Turoff's
suggestion that the first priority is to ensure that all the perspectives on the issue are
addressed through panel composition; the second priority is panel size.
The actual identification of potential participants for the trial run and final run
requires consideration of the term `expert': what are the attributes that characterize
an expert? The legal aspects of the term `expert' can be employed as a starting point.
The legal view of what constitutes an `expert' represents the `stock-in-trade'
of expert witnesses used in planning-related jurisprudence. In a legal context, opinion-
based evidence (as opposed to observation-based evidence) is normally admissible in
court only when it is provided by an expert. We hasten to add, however, that in a policy
Delphi `ordinary citizens' (that is, voters) could be regarded as experts on municipal
issues. The point being emphasized is that there is room and a need to be open-minded
about the knowledge or expertise pertinent to the Delphi exercise under consideration.
With the preceding remarks as context, the following criteria can be used to guide
the selection process for the expert panel: an advanced degree in disciplines related
to the research domain; a relevant publication record demonstrating professional
or academic interest; extensive related work experience in the research domain;
professional affiliation (eg the Canadian Institute of Planners, the Royal Town Planning
Institute, the American Planning Association); and gender, ethnicity, life-cycle stage, or
other factors to the extent that these characteristics are relevant a priori to the research
topic. Overall, Delphi monitors ``seek to create a panel that reflects a wide range of
experience and a diversity of opinions on the subjects that are being considered''
(Masser and Foley, 1987, page 218).
At the same time, knowledgeable people rather than subject matter or methods/
techniques experts are also pertinent to normative preference probes. Planning issues
affect and are affected by the people living within the purview of planning decisions
(eg downwind from a new toxic waste incinerator, or park users facing the loss of local
green space). As Ziglio (1996, page 14) remarks, ``the definition of `experts' varies
according to the context and field of interest.'' As such, expertise can be both lay and
professional in a normative preference probe.
Step 5: telephone and e-mail contact and interviews
The original list of potential participants can be derived from the literature review,
from profiles of community activity, from membership directories, and/or from the
public record. Potential panel members can be contacted either by telephone or by
e-mail, starting with potential participants who appear to have made a significant
contribution to one or more of the selection criteria. The monitor can also ask
each contact to suggest other suitable participants, a practice known as snowballing.
Assurances about panel anonymity need to be provided and stressed in this stage.
One practice recommended by Turoff is to include a `no judgment' choice among the
options when panelists do not seem ready to address a particular variable. Turoff (personal communication,
28 August, 2006) finds this to be an excellent way to deal with highly heterogeneous
panels that have wide-ranging areas of expertise and experience. In a related vein, one
of us (Wellar) has been a member of two national funding panels in the past year
in which a rating system was used to ascertain the ability of experts on the panel
to assess research proposals, and to then assign first, second, and third reader
responsibilities for the proposals accordingly. This is a variation on the Turoff experience, and
in combination they illustrate an important avenue available to monitors wanting to
increase the validity of responses.
Step 9: incorporation of feedback from round 1
When all panel members have responded to round 1 and have returned their results to
the Delphi monitor, the survey is revised in three primary ways:
(1) a measure of central tendency is provided to identify the dominant response category
along the response continuum for each statement or proposition in the survey;
(2) new questions, propositions, or variables that have been suggested by the panelists
in response to the open-ended question(s) are added. Not only does this demonstrate to
panelists that their feedback has a substantive impact on the process, but also it
permits the panelists to test their own ideas and hypotheses, and thereby increases
their sense of ownership in the process; and,
(3) individual questions and variables can be refined (rephrased, reworded) as a result
of suggestions from the panel, but without altering the core meaning of the variable
being tested.
Since planning-based Delphi exercises are often normative preference probes, a
useful measure of central tendency to employ is the mode. Since the mode is located
in the response category (in the distribution) with the largest number of observations, it
can be regarded as the preferred parameter/statistic for identifying the panel's position
on the response continuum.
If the distribution of votes along the continuum of responses is evenly dispersed,
this may indicate a situation where there is a need for clarification or additional
information to help respondents to reach consensus (Turoff, personal communication,
28 August, 2006). Another situation could be panel polarization at either end of the
response continuum. In the latter case, soliciting additional information on the variable
from the panel members is needed in order to gain insight into the polarization.
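The tabulation logic described above, locating the mode for each survey item and flagging distributions that are evenly dispersed or polarized, can be sketched as follows. The specific numeric thresholds are illustrative assumptions, not values drawn from our exercise:

```python
from collections import Counter

def summarize_item(responses, scale=(1, 2, 3, 4, 5)):
    """Tally one survey item's ratings on an ordinal scale, report the
    modal category, and flag distributions that call for panel follow-up."""
    counts = Counter(responses)
    tally = [counts[v] for v in scale]          # Counter yields 0 for unused categories
    mode = scale[tally.index(max(tally))]       # first maximum wins on ties
    n = len(responses)
    # Even dispersion: no category dominates, suggesting clarification is needed.
    evenly_dispersed = max(tally) - min(tally) <= 1
    # Polarization: votes bunched at both extremes of the response continuum.
    polarized = tally[0] > 0 and tally[-1] > 0 and (tally[0] + tally[-1]) >= 0.7 * n
    return {"mode": mode, "tally": tally,
            "evenly_dispersed": evenly_dispersed, "polarized": polarized}
```

A polarized or evenly dispersed item would then be returned to the panel with a request for additional information, as described above.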
Step 10: redistribution of the survey
Round 2 is sent as soon as the survey has been revised and the centrality measures
have been calculated. If a response more than one interval away from the modal
response is to be chosen in round 2, then panelists can be asked to specify why. This
approach worked well in our municipal plan quality Delphi, and helped to express the
full array of perspectives involved in the discussion. Further, it promoted gaining
insight into residual dissensus by (a) exploring the differences of opinion and (b)
identifying factors that were considered by some experts but not by others (Widstrand and
Kruus, 1996, page 61). As we discovered, dissensus can arise due to differences in
perspectives, knowledge bases, interpretations of variables, theoretical views, and/or
disciplinary bias (Helmer, 1983, page 134), and regard for different factors to take into
consideration can arise for similar reasons.
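For an ordinal response scale, the round-2 rule described above (asking panelists to justify any response more than one interval from the round-1 mode) reduces to a simple distance check. This is a sketch; the names are ours:

```python
def requires_justification(response, round1_mode, max_distance=1):
    """Flag a round-2 response sitting more than one scale interval away
    from the round-1 modal response; such panelists are asked to explain."""
    return abs(response - round1_mode) > max_distance
```

For example, with a round-1 mode of 4 on a five-point scale, responses of 3, 4, or 5 pass unflagged, while a 1 or 2 triggers a request for the panelist's reasoning.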
In round 2 the members of the Delphi panel can be asked to reconsider the
variables in light of the identified measure of central tendency or anchor (in our
exercise, the mode was employed). Normally, there are three ways in which panelists
respond to the measure of central tendency:
Methodology-related research results from our specific application of the Delphi process
According to Masser and Foley (1987, page 217), the Delphi is ``the most widely used
technique for eliciting expert opinion.'' However, recent (July 2006) Internet searches
show that there is proliferating confusion about the differences between the Delphi and
other consensus-building techniques, and in particular the methodological design
aspect. In the interest of improving the methodological design considerations that
underlie and direct Delphi exercises, and contributing to a better understanding
of how Delphi differs from other associated techniques, we next present a selection of
methodological lessons learned from a completed Delphi application involving the
evaluation criteria of municipal plans.
Some of the potential pitfalls and obstacles to be overcome when implementing the
Delphi technique have been discussed previously (Linstone, 1975; Linstone and Turoff,
2002; Regier, 1986). Additional lessons learned that arose from the Delphi application
underlying this paper include the following:
1. Differentiation between the pretest and the trial run is important. The pretest
provided the opportunity to elicit which expert-based technique would work best and
actually yield the sort of results that we needed. In the pretest, cost considerations
were paramount and this precluded any technique that involved convening the
experts at a specific location. A smaller panel was desired, so a conventional rather
than conference Delphi was selected. At the same time, it is important to remember
that Delphi exercises are time consuming and tend to take longer to execute than
other expert-based techniques.
2. Doing a trial run. The trial run actually takes a full-blown draft Delphi survey and
tests it on a sample panel (as a test run to fine-tune the survey). In our case we used
a proxy panel that met all of the selection criteria except one: a publishing record on
the topic. Consequently, we engaged an entirely different set of highly qualified people
for the trial run.
3. Panel size. We concur with Cavalli-Sforza and Ortolano (1984) that using a panel size of
eight to twelve may be appropriate in many cases. The panel size of ten that we used in our
municipal plan evaluation Delphi worked well, provided a diversity of expert opinion, and
permitted us to engage the very best North American experts on the topic without being
overly onerous for a single Delphi monitor. In retrospect, however, it is apparent that using
an odd number of panelists (eg nine or eleven) would have reduced the possibility of
bimodal responses and therefore made interpretation more straightforward. In a different project,
however, panel size was a secondary consideration, with the driving priority being that
of ensuring that all perspectives that warranted consideration were represented.
4. Strict panel selection criteria. As stated, the panel selection criteria that we employed
were strict. Using strict selection criteria not only served to provide high-quality
responses, but also meant that the panelists felt validated by the experience after the
(unexpected but later agreed upon) `reveal' regarding panel identities.
5. Design to prevent intentional response polarization. As mentioned, reactions in
round 2 to the mode by panelists can result in responses that move away from the
mode in an extreme fashion in order to displace the dispersion, make a point, and/or
bring the response mode closer to the panelist's true view. This possibility can be
reduced by providing fewer response categories than the traditional Likert scale using
five or seven options.
6. Maintain panel anonymity until the post-Delphi survey is completed. During the
execution of the process, it was our experience that there was much guessing among panelists
regarding the identity of the other panelists. It is vital not to engage with any panelists
on this matter. Further, we believe it is preferable that panelists not be told that there
will be an option to reveal their identities once the process is completed.
As a general guideline, three months should be allowed for three rounds to run their course. If the survey is relatively simple
in design and e-mail can be used, then that three-month guideline can be dramatically
contracted.
13. Survey length. As survey designers recognize, the number of people who will
respond to a questionnaire is inversely related to the length of the questionnaire.
However, in our research, it was impossible to avoid the length issue since a
comprehensive inventory of applicable evaluation criteria for municipal plans was the desired
end result. In all, more than 100 variables were listed in the preliminary rounds of the
questionnaire. Nevertheless, by round 3, stable responses were achieved for nearly 30%
of the survey variables, so the survey length actually contracted over time. Although no
panel members specifically mentioned that the survey was too long, two panelists
consistently commented on the time imposition involved. These observations were
acted upon by minimizing the amount of supporting documentation that panelists
were required to read, and by identifying some survey tasks as strongly recommended
rather than mandatory.
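The stabilization of responses mentioned above can be operationalized by measuring how much an item's response distribution shifts between rounds, in the spirit of the stability criteria discussed by Dajani et al (1979). The 15% tolerance below is an illustrative assumption, not the value used in our exercise:

```python
def distribution_shift(prev_tally, curr_tally):
    """Total variation distance between two rounds' response distributions
    for one variable: 0 means identical, 1 means a complete reversal."""
    n_prev, n_curr = sum(prev_tally), sum(curr_tally)
    return sum(abs(p / n_prev - c / n_curr)
               for p, c in zip(prev_tally, curr_tally)) / 2

def is_stable(prev_tally, curr_tally, tolerance=0.15):
    """Treat a variable as stable, and retire it from later rounds,
    when few panelists changed response category between rounds."""
    return distribution_shift(prev_tally, curr_tally) <= tolerance
```

Retiring stable variables in this way is what allowed the survey length to contract between rounds.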
14. Establishing working relationships between the monitor and the panelists. Individual
attention to panel members contributes to satisfaction with the process. As well, the
establishment of solid working relationships between the monitor and panel members
can result in faster responses, and engender communications about future collaborations
on subsequent research projects.
15. Employment of a post-Delphi survey. The post-Delphi survey is intended to provide
insight into both the effectiveness and the delivery of the process, as well as to
provide panel members with an opportunity to contribute their personal opinions
regarding how the process was conducted. Panel satisfaction with results between
rounds, the communication of information pertaining to the process, facilitation of
learning, facilitation of participation, and overall performance of the Delphi monitor
are all variables that can be addressed in the post-Delphi survey. The post-Delphi
survey can accompany the final round, as was done in the case of our project.
Concluding remarks
Consensus-building techniques that employ information derived from public consulta-
tion can be used in many planning situations. However, if there is an unknown element
to be explored, then the Delphi technique warrants consideration as the decision
support instrument to use in the deliberation process. This paper contributes to the
literature on Delphi methodology by discussing our experience in specifying and
implementing a practical research design for a normative Delphi application.
Operational improvements to the Delphi process that were validated by our project
include the following: the use of a pretest stage, the use of the precautionary principle
as a stopping criterion for bimodality, the application of extremely strict panel selec-
tion criteria, and the employment of a graphic representation to visually demonstrate
to panel members how the process is intended to progress. As well, lessons learned
regarding the implementation of the technique are provided to promote and support
the increased application of the Delphi in planning.
Planning as a profession is primed for the accelerated adoption of the Delphi
technique, as the professional conduct and value of expertise components are already
entrenched. Furthermore, urban planning issues are fraught with value-based complexities,
and sometimes it is only the considered judgment of experts that can resolve the concern. While
the Pythia of Delphi provided fume-induced guidance on matters of war and agricultural
scheduling, the scientific basis of the Delphi technique is grounded in methodological
design. The continued viability of the technique in planning depends on a transparent
substantiation of research design decisions, which was the focus of this paper.
References
Ackoff R, 1953 The Design of Social Research (University of Chicago Press, Chicago, IL)
Albright R, 2002, ``What can past technology forecasts tell us about the future?'' Technological
Forecasting and Social Change 69 443 ^ 464
Cavalli-Sforza V, Ortolano L, 1984, ``Delphi forecasts of land use: transportation interactions''
Journal of Transportation Engineering 110 324 ^ 339
Chaffin W, Talley W, 1980, ``Individual stability in Delphi studies'' Technological Forecasting and
Social Change 16 67 ^ 73
Coates J, 1999, ``Boom time in forecasting'' Technological Forecasting and Social Change 62 37 ^ 40
Coates V, Faroque M, Klavans R, Lapid K, Linstone H, Pistorius C, Porter A, 2001, ``On the
future of technological forecasting'' Technological Forecasting and Social Change 67 1 ^ 17
Critcher C, Gladstone B, 1998, ``Utilizing the Delphi technique in policy discussion: a case study
of a privatized utility in Britain'' Public Administration 76 431 ^ 449
Dajani J, Sincoff M, Talley W, 1979, ``Stability and agreement criteria for the termination of Delphi
studies'' Technological Forecasting and Social Change 13 83 ^ 90
Dalkey N, 1969 The Delphi Method: An Experimental Study of Group Opinion (RM-5888-PR)
(Rand Corporation, Santa Monica, CA)
De Boer J, Hale J, 2000,``The geological origins of the Oracle at Delphi, Greece'', in The Archaeology
of Geological Catastrophes Eds W J McGuire, D R Griffiths, P L Hancock, I Stewart (Geological
Society, London) pp 399 ^ 412
De Loë R, Wojtanowski D, 2001, ``Associated benefits and costs of the Canadian Flood Damage
Reduction Program'' Applied Geography 21 1 ^ 21
Helmer O, 1983 Looking Forward: A Guide to Futures Research (Sage, Thousand Oaks, CA)
Ilbery B, Maye D, Kneafsey M, Jenkins T, Walkley C, 2004, ``Forecasting food supply chain
development in lagging rural regions: evidence from the UK'' Journal of Rural Studies 20
331 ^ 344
Jones C, 1975, ``A Delphi evaluation of agreement between organizations'', in The Delphi Method:
Techniques and Applications Eds H Linstone, M Turoff (Addison-Wesley, Don Mills, ON)
pp 160 ^ 167
Linstone H, 1975, ``Eight basic pitfalls: a checklist'', in The Delphi Method: Techniques and
Applications Eds H Linstone, M Turoff (Addison-Wesley, Don Mills, ON) pp 573 ^ 586
Linstone H, Turoff M, 1975a, ``Introduction'', in The Delphi Method: Techniques and Applications
Eds H Linstone, M Turoff (Addison-Wesley, Don Mills, ON) pp 3 ^ 12
Linstone H, Turoff M, 1975b, ``Evaluation: introduction'', in The Delphi Method: Techniques and
Applications Eds H Linstone, M Turoff (Addison-Wesley, Don Mills, ON) pp 229 ^ 235
Linstone H, Turoff M, 2002, ``The Delphi method: techniques and applications'',
http://www.is.njit.edu/pubs/delphibook/index.html
Martino J, 1999, ``Thirty years of change and stability'' Technological Forecasting and Social Change
62 13 ^ 18
Masser I, Foley P, 1987, ``Delphi revisited: expert opinion in urban analysis'' Urban Studies 24
217 ^ 225
Nelson B, 1978, ``Statistical manipulation of Delphi statements: its success and effects on
convergence and stability'' Technological Forecasting and Social Change 12 41 ^ 60
Novakowski N, 1999 Derivation of an Evaluation Instrument for Judging the Quality of Ecosystem-
based Municipal Plans unpublished PhD dissertation, Department of Geography, University
of Ottawa
Parenté R, Hiöb T, Silver R, Jenkins C, Poe M, Mullins R, 2005,``The Delphi method, impeachment
and terrorism: accuracies of short-range forecasts for volatile world events'' Technological
Forecasting and Social Change 72 401 ^ 411
Regier W, 1986, ``Directions in Delphi developments: dissertations and their quality'' Technological
Forecasting and Social Change 29 195 ^ 204
Richey J, Mar B, Horner R, 1985, ``The Delphi technique in environmental assessment:
1. Implementation and effectiveness'' Journal of Environmental Management 21 135 ^ 146
Rowe G, Wright G, 1999, ``The Delphi technique as a forecasting tool: issues and analysis''
International Journal of Forecasting 15 353 ^ 375
Scheibe M, Skutsch M, Schofer J, 1975,``Experiments in Delphi methodology'', in The Delphi Method:
Techniques and Applications Eds H Linstone, M Turoff (Addison-Wesley, Don Mills, ON)
pp 262 ^ 287
Sharma H, Gupta A, 1993, ``Present and future status of system waste: a national-level Delphi in
India'' Technological Forecasting and Social Change 44 199 ^ 218
Turoff M, 1975, ``The policy Delphi'', in The Delphi Method: Techniques and Applications
Eds H Linstone, M Turoff (Addison-Wesley, Don Mills, ON) http://is.njit.edu/turoff
Turoff M, Hiltz S, 1996, ``Computer-based Delphi processes'', in Gazing into the Oracle: The Delphi
Method and its Application to Social Policy and Public Health Eds M Adler, E Ziglio (Jessica
Kingsley, London) pp 56 ^ 85
Turoff M, Li Z, Wang Y, Cho H, 2004, ``Online collaborative learning enhancement through the
Delphi method'' Proceedings of the OZCHI 2004 Conference, 22 ^ 24 November University of
Wollongong, Australia; http://web.njit.edu/turoff/Papers/ozchi2004.htm
Wellar B, 1997,``Walking security index variables: initial specification'', Interim Report 5, University
of Ottawa and Region of Ottawa ^ Carleton, Ottawa
Wellar B, 2005 Geography and the Media: Strengthening the Relationship (Canadian Royal
Geographical Society and Canadian Council on Geographic Education, Ottawa)
Wellar B, Vandermeulen C, 2000, ``Field tests of the Driver Behaviour Index (DBI) survey forms:
initial findings from an applied project involving selected regional intersections in Ottawa-
Carleton'', in Papers and Proceedings of the Applied Geography Conferences 23 206 ^ 214
Widstrand C, Kruus P, 1996 Methods and Uses of Forecasting 2nd draft edition (Carleton
University, Ottawa)
Wilhelm W, 2001, ``Alchemy of the Oracle: the Delphi technique'' The Delta Pi Epsilon Journal
43 6 ^ 26
Ying L, Kung J, 2000, ``Forecasting up to year 2000 on Shanghai's environmental quality''
Environmental Monitoring and Assessment 63 297 ^ 312
Ziglio E, 1996, ``The Delphi method and its contribution to decision-making'', in Gazing into the
Oracle: The Delphi Method and its Application to Social Policy and Public Health Eds M Adler,
E Ziglio (Jessica Kingsley, London) pp 3 ^ 33