The Delphi method as a research tool: an example, design considerations and applications
Chitu Okoli
John Molson School of Business
Concordia University, Montréal, Canada
Suzanne D. Pawlowski
Department of Information Systems and Decision Sciences
Louisiana State University, Baton Rouge, USA
Original paper sent 21 May 2003; Request for change 12 July 2003;
Accepted with modification 8 November 2003
Section: Techniques
Abstract
The Delphi method has proven a popular tool in information systems research for identifying and prioritizing
issues for managerial decision-making. However, many past studies have not adopted a systematic approach to
conducting a Delphi study. This article provides rigorous guidelines for the process of selecting appropriate experts
for the study and gives detailed principles for making design choices during the process that ensure a valid study. A
detailed example of a study to identify key factors affecting the diffusion of e-commerce in Sub-Saharan Africa
illustrates the design choices that may be involved. We conclude with suggestions for theoretical applications.
Keywords: Delphi method; Group decision making; Research design; Strategic planning; Electronic
commerce
Corresponding author.
Mailing: 1455 boulevard de Maisonneuve Ouest, Montréal, Québec H3G 1M8, Canada
E-mail address: [email protected]; Phone: +1 (514) 848-2424
1 Introduction
The Delphi method has proven a popular tool in information systems (IS) research [4,6,13,14,16,24,25,35].
Citing a lack of a definitive method for conducting the research and a lack of statistical support for the conclusions
drawn, Schmidt [34] presented a stepwise methodology for conducting such studies. Building on the framework
that Schmidt developed, we offer two contributions towards increasing the value of Delphi studies in investigating
research questions. First, we fill in many details in the context of Schmidt's framework by providing guidelines on
how to conduct a rigorous Delphi study that identifies the most important issues of interest by soliciting qualified
experts. Second, we demonstrate how to use a Delphi survey as a research tool to serve a variety of different
purposes in the theorizing process. Increasing the rigor will increase the confidence with which researchers can use
the results in subsequent studies and managers can make decisions based on information gathered using these
methods.
Bricolage is a French term that means to use whatever resources and repertoire one has to perform whatever
task one faces [40]. Characterizations of the research process as bricolage and the researcher as bricoleur [10]
serve to remind us of the improvisation and opportunism inherent in the research process and the need to put our
research tools to multiple use. A third goal, then, is to encourage researchers to incorporate the Delphi method into
their research repertoire and to suggest some of the various ways they could apply the method in the theorizing
process.
technique. While most forecasting studies use Delphi to surface a consensus opinion, others such as the study by
Kendall et al. [15] emphasize differences of opinion in order to develop a set of alternative future scenarios.
Concept/framework development represents a second type of application of the Delphi method. These study designs
typically involve a two-step process beginning with identification/elaboration of a set of concepts followed by
classification/taxonomy development.
Table 1
Applications of the Delphi method in information systems research

Applications of the Delphi method: forecasting and issue identification/prioritization; concept/framework development.

Example studies:
- Hayne and Pollard [13]. Purpose: identify the critical issues in IS in the coming 5 years as perceived by Canadian IS executives and non-management IS personnel, and compare them to global study rankings. Participants: IS personnel.
- Kendall et al. [15]. Purpose: forecast the role of the systems analyst in the 21st century.
- Schmidt et al. [35]. Purpose: develop a ranked list of common risk factors for software projects as a foundation for theory building about IS project risk management. Participants: 3 panels of experienced software project managers from Hong Kong, Finland, and the United States.
Table 2
Comparison of traditional survey with Delphi method (characteristics of the Delphi study)

Summary of procedure: All the questionnaire design issues of a survey also apply to a Delphi study. After the researchers design the questionnaire, they select an appropriate group of experts who are qualified to answer the questions. The researchers then administer the survey and analyze the responses. Next, they design another survey based on the responses to the first one and readminister it, asking respondents to revise their original responses and/or answer other questions based on group feedback from the first survey. The researchers reiterate this process until the respondents reach a satisfactory degree of consensus. The respondents are kept anonymous to each other (though not to the researcher) throughout the process.

Representativeness of sample: The questions that a Delphi study investigates are those of high uncertainty and speculation. Thus a general population, or even a narrow subset of a general population, might not be sufficiently knowledgeable to answer the questions accurately. A Delphi study is a virtual panel of experts gathered to arrive at an answer to a difficult question. Thus, a Delphi study could be considered a type of virtual meeting or a group decision technique, though it appears to be a complicated survey.

Sample size: The Delphi group size does not depend on statistical power, but rather on group dynamics for arriving at consensus among experts. Thus, the literature recommends 10 to 18 experts on a Delphi panel.

Individual vs. group response: Studies have consistently shown that for questions requiring expert judgment, the average of individual responses is inferior to the averages produced by group decision processes; research has explicitly shown that the Delphi method bears this out.

Reliability and response revision: Pretesting is also an important reliability assurance for the Delphi method. However, test-retest reliability is not relevant, since researchers expect respondents to revise their responses.

Construct validity: In addition to what is required of a survey, the Delphi method can employ further construct validation by asking experts to validate the researchers' interpretation and categorization of the variables. The fact that Delphi is not anonymous (to the researcher) permits this validation step, unlike many surveys.

Anonymity: Respondents are always anonymous to each other, but never anonymous to the researcher. This gives the researchers more opportunity to follow up for clarifications and further qualitative data.

Non-response issues: Non-response is typically very low in Delphi surveys, since most researchers have personally obtained assurances of participation.

Attrition effects: Similar to non-response, attrition tends to be low in Delphi studies, and the researchers usually can easily ascertain the cause by talking with the dropouts.

Richness of data: In addition to the richness issues of traditional surveys, Delphi studies inherently provide richer data because of their multiple iterations and response revision due to feedback. Moreover, Delphi participants tend to be open to follow-up interviews.
3.2 Methodology
3.2.1 Selection of the Delphi methodology
Although we could conduct a traditional survey to gather input from members of the
major stakeholder groups concerning e-commerce infrastructure and practices in SSA, we judged
the Delphi method to be a stronger methodology for a rigorous query of experts and
stakeholders. Table 2 compares and contrasts the strengths and weaknesses of a Delphi study
versus the traditional survey approach as a research strategy. In light of this comparison, we selected the Delphi method for the following reasons:
1. This study is an investigation of factors that would support e-commerce in SSA. This
complex issue requires knowledge from people who understand the different economic,
social, and political issues there. Thus, a Delphi study answers the study questions more
appropriately.
2. The research questions are answered most appropriately by a panel rather than by any individual expert's responses, and Delphi is an appropriate group method. Among other high-performing
group decision analysis methods (such as nominal group technique and social judgment
analysis [32]), Delphi is desirable in that it does not require the experts to meet physically,
which could be impractical for international experts.
3. Although there may be a relatively limited number of experts with knowledge about the
research questions, the Delphi panel size requirements are modest, and it would be practical
to solicit up to four panels of 10 to 18 members each [29].
4. The Delphi study is flexible in its design, and amenable to follow-up interviews. This permits
the collection of richer data leading to a deeper understanding of the fundamental research
questions.
5. We selected the procedure for conducting Delphi studies outlined by Schmidt because it serves the dual purpose of soliciting opinions from experts and having them rank those opinions by importance.
of the different stakeholder groups. Following recommendations from Delphi literature, there
will be 10 to 18 people in each panel. Within each panel, the goal is that at least half the
members actually work in SSA. This structure will obtain a sufficient number of perspectives
from the inside, and we could perform analyses to see if there are differences in perspectives
between respondents inside and outside. Figure 1 outlines the steps of our procedure for selecting
experts.
Figure 1. Procedure for selecting experts. Among the steps: Step 3, nominate additional experts; final step, invite experts for each panel, with the panels corresponding to each discipline, inviting experts in the order of their ranking within their discipline sub-list (target panel size 10-18) and stopping solicitation when each panel size is reached.
that would be most fruitful in identifying the world-class experts on the Internet and e-commerce
in SSA. Delbecq et al. emphasized that it is important not to write down any specific names of experts at this stage; rather, one should stay at a high level, first identifying classes of experts.
Table 3
Sample Knowledge Resource Nomination Worksheet
Disciplines or Skills
1. Academic
- Journals list
2. Practitioner
- Internet Societies
- ITU sector members
and associates
3. Government official
- ITU Global Directory
4. NGO official
- Organizations list
Organizations
1. World Bank
2. United Nations Economic
Commission for Africa
3. United Nations University
4. Internet Societies in Africa
5. African governmental
ministries of
telecommunications
6. AFRIK-IT listserv
Related Literature
Academic:
1. Review of African Political
Economy
2. Journal of Management
Information Systems
3. European Journal of
Information Systems
4. Journal of Global Information
Management
5. Journal of Global Information
Technology Management
6. Electronic Journal of
Information Systems in
Developing Countries
Practitioner:
1. Communications of the ACM
2. Africa Business
3. Proceedings of ITU Telecom
countries have chapters. The Internet Society chapters are expected to be a place where
the local movers and shakers of Internet development congregate in their countries.
3. Government: We will use the International Telecommunication Union Global Directory
to locate contact persons in the national ministries of telecommunications for each Sub-Saharan country. Ministers (secretaries) of telecommunications are considered very
qualified experts, being in close contact with the most active and knowledgeable experts
on the Internet in their countries.
4. NGOs: Numerous NGOs focus on the development of the Internet and ICTs in Africa.
Two key places to begin looking for lists (other than Web searches) are the United Nations Economic Commission for Africa and the World Bank's Information for Development program. These are ideal central resources: they are active NGOs themselves, they provide grants and other funding opportunities, and they sponsor
conferences. Thus, they are in touch with the various NGOs that approach them for
resources and networking. In addition, the ISWorld Developing Countries website has an
extensive list of relevant NGOs.
Organizations. The Web, e-mail, phone, fax, or other pertinent methods will be the
means of contacting the identified organizations. The objective is to contact people in these
organizations who are experts themselves, and who can provide additional contacts within and
outside their own organizations.
Related literature. We will review the academic and practitioner literature to locate all
articles concerning the Internet or ICTs in SSA.
3.2.2.3 Step 3. First-round contactsNominations for additional experts
At this point, we will contact the identified experts and ask them to nominate others for
inclusion on the list. We will provide a brief description of the Delphi study and explain that we
have identified them as international experts on electronic commerce in SSA. Since this step
does not solicit panelists for the final study, we will not invite individuals to participate in the
study. Rather, we will tell the experts and contacts that the researchers are currently gathering
biographical information on experts. We will obtain as much biographical information as
possible about their qualifications. We will also track information about which contact refers
which other experts, to facilitate the follow-up needed later. The first round of contacts aims
primarily at extending the KRNW to ensure that it will include as many experts as can possibly
be accessed.
For the organizations and literature that the contacts provide, we will follow the
procedures detailed in Step 2 to identify specific names to place in the KRNW. Furthermore, we
will need to obtain basic biographical information for every expert on the list in order to
determine what qualifications they possess to make them experts. For example, the type of data
recorded will include the number of papers published and presentations made, the number of years of e-commerce practice in SSA, the number of years of tenure in government or NGO positions, etc.
Adequate information on each expert is needed in order to rank their expertise for the next step.
At the end of this step, we expect to have a list of about 200 experts.
3.2.2.4 Step 4. Ranking experts by qualifications
At this step, we will compare the qualifications of those on the large list of experts and
rank them in priority for invitation to the study. First, we will create the four sub-lists:
practitioners, government officials, NGO officials, and academics. Based on their qualifications, we will categorize experts into the sub-lists. Since some experts take on multiple roles, we may place an expert on more than one list. Next, each member of the research team will independently rank each sub-list according to each person's degree of qualification; ties will be permitted in these independent rankings. We will then come together to reconcile the lists and create the four sub-lists as ranked by qualifications. Again, we will permit ties, since we will invite multiple panelists.
panelists. Finally, we will invite the experts to participate in the study, stopping when we reach
the required number.
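The reconciliation of the team members' independent rankings can be sketched in code. This is only a minimal illustration: aggregating by mean rank is our assumption, since the text does not prescribe a specific reconciliation rule, and all names are hypothetical.

```python
from statistics import mean

def reconcile_rankings(rankings):
    """Combine team members' independent rankings of one expert sub-list.

    `rankings` maps each research-team member to {expert: rank}, where a
    lower rank means better qualified; ties are permitted. Aggregating by
    mean rank is an illustrative assumption, not a rule from the paper.
    """
    experts = set().union(*(r.keys() for r in rankings.values()))
    mean_rank = {e: mean(r[e] for r in rankings.values()) for e in experts}
    # Sort by mean rank; equal mean ranks are broken alphabetically only
    # for determinism -- ties are acceptable, since multiple panelists
    # will be invited in any case.
    return sorted(experts, key=lambda e: (mean_rank[e], e))
```

In use, each rater supplies a complete ranking of the sub-list, and the reconciled order drives the invitation sequence in Step 5.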
3.2.2.5 Step 5. Inviting experts to the study
Based on the rankings, we will create one panel for each of the four categories. Again,
the target panel size is 18 (10 minimum) with at least half the members having worked within
SSA. Choosing the maximum number provides a buffer in case of attrition, even though
participant drop-out tends to be very low when respondents have verbally assured their
participation (contrast this with Brancheau et al.).
We will contact each panelist and explain the subject of the study and the procedures
required for it, including the commitment required. For this study, we will ask panelists to
commit to completing up to six 15-minute questionnaires and returning them within three days of
receipt, for a total of one and a half hours over a period of one to three months. We will impose a
limit of six questionnaires so as not to tax the participants, and yet give them an honest appraisal
of their time commitment.
In this study, we will require participants to have access to e-mail, fax, or the Web for
receiving and returning questionnaires. Normally, this might be a serious biasing factor.
However, for a study employing experts on the use of electronic commerce, this is not an
unreasonable requirement. Following the recommendation of Delbecq et al., the first
questionnaire will be sent to each expert the same day they confirm their desire to participate.
For each sub-list, panelist solicitation will begin by inviting the top nine experts who work within SSA (half of our target of 18). Note that many of the experts will be based in North Africa and in South Africa, but these experts will not be included in this target quota. We will invite the experts one at a time until we reach the quota of nine. Next, we will invite the top nine remaining experts on each sub-list, whether or not they work in SSA.
The design of the five-step process will ensure the identification and invitation of the
most qualified experts available.
There are several incentives that may lead experts to participate in a Delphi study where
they might decline to participate in other studies: (1) being chosen in a diverse but selective
group; (2) the opportunity to learn from the consensus building; and (3) increasing their own
visibility in their organization and outside. These incentives can provide the strong inducements
needed to attract busy experts.
estimated that the average Delphi study could take 45 days to five months. This assumes a
scenario where the panelists are all in one country, and the researchers rely on the postal system
to deliver and return the questionnaires. However, in this case, the administration of the
questionnaires is international. Assuming that a panelist in SSA filled out and returned a
questionnaire immediately (probably an overly optimistic assumption), it would take about a
month to receive the completed questionnaire for analysis, before the next one could be sent out.
Considering that the researchers cannot send out the next questionnaire until all the results for a
panel are in, such a lag time would be unreasonably long.
We will design three versions of each questionnaire: for e-mail, fax (which would be the
most similar to a printed version), and the Web. As the questionnaires will be designed carefully following the principles of survey design (see Dillman [11]), the results are not expected to differ significantly from those obtained by mail.
Administration procedure. Administration of the questionnaires will follow the
procedure for ranking-type Delphi studies outlined by Schmidt. This will involve three general
steps: 1) brainstorming for important factors; 2) narrowing down the original list to the most
important ones; and 3) ranking the list of important factors.
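The three general steps can be summarized as a simple control flow. The sketch below is only an outline of the administration procedure; the callables stand in for the questionnaires and analyses, and none of these names come from Schmidt.

```python
def ranking_delphi(brainstorm, narrow, rank, reached_consensus):
    """High-level control flow of a ranking-type Delphi study:
    brainstorm -> narrow down -> iterate ranking until consensus.

    All four arguments are researcher-supplied callables; the names
    are illustrative placeholders, not part of Schmidt's procedure.
    """
    factors = brainstorm()        # open-ended solicitation, consolidated
    shortlist = narrow(factors)   # factors chosen by a majority of a panel
    ranking = None
    rounds = 0
    while rounds < 3:             # bounded iterations, per the design below
        ranking = rank(shortlist)
        rounds += 1
        if reached_consensus(ranking):  # e.g. Kendall's W >= 0.7
            break
    return ranking
```

The three-iteration bound mirrors the six-questionnaire promise made to panelists later in this design.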
Figure 2. Process of administering the study: Phase 1, brainstorming; Phase 2, narrowing down; Phase 3, ranking.
It is critical that the researchers carefully design the surveys to ensure that these three formats (e-mail, fax, and Web) are equivalent.
Adapted from Schmidt et al., Figure 2 outlines the process of administering the study.
3.2.3.1 Administration Phase 1: Brainstorming
Questionnaire 1: Initial collection of factors. We will send the first questionnaire on the
same day that an expert agrees to serve on a Delphi panel, using the expert's preference of e-mail,
fax, or Web. The initial questionnaire for a Delphi survey is very simple, since it consists of an
open-ended solicitation of ideas. The questionnaire will ask three basic questions, each
corresponding to one of the research questions.
To address the first research question (RQ1), the questionnaire will ask experts to list at
least six important factors (see Schmidt) affecting the establishment and growth of business use of
the Internet in the countries of Sub-Saharan Africa. This question seeks to generate a list of
infrastructure factors, which we refer to as the infrastructure list. To address the second research
question (RQ2), the questionnaire will ask experts to list at least six e-commerce applications,
practices, or features that practitioners could feasibly implement with beneficial effect within the
next ten years in SSA. This question seeks to generate a second list of e-commerce practices,
which we term the expediency list. Unlike the first two research questions, the third (RQ3) cannot be addressed appropriately by simply soliciting the experts' recommendations as items and ranking them. Recommendations are complex items that emerge as a composite and synergistic conclusion from the findings of the other questions. Thus, to answer this question at this stage, we will use a third Delphi question that is closely related to the other two: experts will be asked to
offer a brief explanation (in two or three sentences for each factor) of the importance of each
factor they have listed for the first two questions. These explanations will serve the dual purpose
of providing a qualitative empirical basis for answering the third research question and helping
us to understand and reconcile the various experts' factors. Moreover, the explanations will help
to classify the factors into categories and will provide clarification for the next questionnaire,
which renames and consolidates the factors.
We will send this questionnaire to all the experts without considering their panel at this
phase and analyze the results from all experts together. Various studies have found that in group
decision making, heterogeneous groups are more creative than homogeneous ones. The study
design purposefully groups experts based on their homogeneity. However, by dividing the study
into three phases, we will bypass this possibility of stifled creativity. The creative stage of the
Delphi study occurs when soliciting factors from participants, while the ranking and weighting
stages involve mainly judgmental opinion. This design does not use panels during the initial
brainstorming stage, thus generating two lists of factors that represent the additive creativity of
all the participants in the study, irrespective of their panels.
In analyzing the responses from the first questionnaire, we will first remove identical responses. At this time, we will record on the consolidated lists the number of panelists who initially suggested each item, and then group these factors conceptually into categories to make it
easier for panelists to comprehend each list when returned for the next step. The grouping will be
simply for presentation purposes and not for analysis, and we will base the categorizations on our
knowledge of the issues concerning e-commerce in SSA (such as political, cultural, and
economic issues).
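The consolidation step, removing identical responses and recording how many panelists suggested each item, might look like the following in outline. The lowercase/whitespace normalization shown is only a naive stand-in for the researchers' manual reconciliation of wording, and the function name is hypothetical.

```python
from collections import Counter

def consolidate(responses):
    """Collapse identical factor suggestions and count supporting panelists.

    `responses` maps each expert to the list of factors they suggested.
    Matching on lowercased, stripped text is a simplification; in the
    actual study, reconciliation relies on the experts' explanations.
    """
    counts = Counter()
    for factors in responses.values():
        # Count each expert at most once per factor, even if they
        # happened to list the same factor twice.
        for factor in {f.strip().lower() for f in factors}:
            counts[factor] += 1
    return counts
```

The resulting counts would be recorded on the consolidated lists before the factors are grouped into presentation categories.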
Questionnaire 2: Validation of categorized list of factors. Since the researchers, rather
than the experts, will perform the consolidation of the lists and grouping into categories, before
proceeding, we will send a second questionnaire to validate the consolidated lists of factors. This
questionnaire will list all the consolidated factors obtained from the first questionnaire, grouped
into categories. In addition to a brief, one-sentence explanation of each factor, an explanatory
glossary will be included to define and explain each factor, based on information provided by the
experts in the first questionnaire. Furthermore, we will give experts an exact copy of their
responses to the first questionnaire. The second questionnaire will ask experts to (a) verify that
we have correctly interpreted their responses and placed them in an appropriate category; and (b)
verify and refine the categorizations of the factors. According to Schmidt, without this step,
there is no basis to claim that a valid, consolidated list has been produced. At this time, experts
will be able to suggest additional items that they might not have considered initially. Based on
their responses, the two lists and categorizations will be further refined.
3.2.3.2 Phase 2: Narrowing down factors
The next two phases treat the experts as four distinct panels. In brief, the panels will narrow down the factors to reflect the perspectives of their constituent stakeholders (phase 2), and will then work toward consensus (phase 3). In the second phase, which narrows down the two lists of factors, our
goal will be to understand the rating of importance of the factors based on the differing
perspectives of various stakeholder groups. Certain groups of experts might assess the problems
and opportunities for e-commerce in SSA somewhat differently, and these differences might
have important implications for government policy and managerial action. Thus, the strategy is
to have groups that think similarly decide among themselves which factors are the most
important, rather than trying to reconcile significantly different perspectives.
Questionnaire 3: Choosing most important factors. We will then present the complete
consolidated lists of items to each expert within each panel. The third questionnaire will be
randomly arranged to cancel out bias in the order of listing of the items. Each panelist will be asked to select (not rank) at least ten factors on each list that they consider important to e-commerce in SSA. When all of the panelists have returned their responses, we will analyze each
panel separately to identify the factors selected by over 50% of the experts in the panel; we will
retain these factors for that panel. This process will reduce the lists to a manageable size. The
target size for ranking will be no more than 20 to 23 items.
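The pare-down rule, retaining only the factors selected by more than half of a panel's experts, is simple to state in code. This is a sketch with hypothetical names, shown per panel.

```python
from collections import Counter

def narrow_down(selections, panel_size):
    """Retain factors selected by more than 50% of a panel's experts.

    `selections` maps each panelist to the set of factors they marked
    as important (at least ten per list, per the questionnaire).
    """
    votes = Counter(f for chosen in selections.values() for f in chosen)
    # Strict majority: more than half of the whole panel must have
    # selected the factor for it to be retained for ranking.
    return {f for f, v in votes.items() if v > panel_size / 2}
```

Each panel would be analyzed separately, so the retained set can differ from panel to panel.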
3.2.3.3 Phase 3: Ranking relevant factors
The goal of the final phase is to reach a consensus in the ranking of the relevant factors
within each panel. Studies have consistently found that it is more difficult to reach consensus
with Delphi groups than with ones that involve direct interaction between participants. However,
with a panel design it is less difficult to attain consensus because the researchers deliberately
select panel members for their homogeneity.
Questionnaire 4: Ranking the chosen factors. This phase of the procedure will involve
each panel separately ranking the factors on each of their distinct pared-down lists. Each ranked
list will reflect the priority order for the specific panel. In this phase, each expert will
individually submit a rank ordering of the items: one ordering for each of the two lists, infrastructure and expediency. The questionnaire will also ask experts to submit comments
explaining or justifying their rankings.
A review of the literature did not identify such an explanation of rankings in any Delphi study, although Schmidt suggests soliciting helpful comments. Rohrbaugh compared the
Delphi methodology with a group decision method based on social judgment analysis (SJA),
which uses a formal graphical method to present the reasoning behind a panelist's decisions to
the other members of the panel. Although the results using SJA were not significantly more
accurate, the groups reached a higher degree of consensus, since the members understood one another's reasoning. Based on Rohrbaugh's study, the panels should arrive at consensus more quickly if provided with some sort of feedback about the other panelists' reasoning.
When it comes to quantitatively determining the ranks of the items in the lists, Schmidt
provided an excellent and detailed guideline of principles to follow, which we will use as the
basis of our methodology here. There are a number of different metrics for measuring nonparametric rankings [36], but Kendall's W coefficient of concordance is widely recognized as the best. The value of W ranges from 0 to 1, with 0 indicating no consensus and 1 indicating perfect consensus among the rankings. Schmidt provided a table for interpreting different values of W,
with 0.7 indicating strong agreement. After calculating the concordance within each panel, the W
value suggests how to proceed in the ranking. A W value of 0.7 or greater would indicate
satisfactory agreement, and we would consider the ranking phase completed. We could use mean
rankings for each item to compute the final ranking for a completed panel.
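For complete rankings without ties, Kendall's W reduces to a short computation following the standard formula W = 12S / (m²(n³ − n)), where S is the sum of squared deviations of the item rank sums from their mean. The sketch below omits the tie correction.

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance for m complete rankings.

    `rankings` is a list of m lists; each inner list gives the ranks
    (1..n) assigned by one panelist to the same n items, in a fixed
    item order. No correction for tied ranks is applied.
    """
    m, n = len(rankings), len(rankings[0])
    # Sum of ranks each item received across all panelists.
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean_sum = m * (n + 1) / 2
    s = sum((t - mean_sum) ** 2 for t in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))
```

A value of 1.0 means the panel agrees perfectly; in this design, a value of 0.7 or greater ends the ranking phase for that panel.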
However, if W is less than 0.7, the ranking questionnaire must be resent to the members
of that panel. Each reiteration would return the items for the panel, listed in order of mean ranks.
For each item, we will give the panelists the following information to help them revise their rankings: (1) the mean rank of the item for the panel; (2) the panelist's own ranking of the item in the previous round; (3) an indication of the current level of consensus, based on the value of W (for example, "weak agreement"); and (4) a paragraph summarizing the other panelists' comments on
why they ranked that item as they did. Based on this, we will ask the panelists to revise their
rankings for each item, again asking them to explain their rankings and revisions.
We will reiterate this ranking process until we reach one of three stopping criteria: (1) W
reaches a value of 0.7, indicating a satisfactory level of concordance. (2) We reach the third
iteration, which would be the sixth questionnaire that a panelist received for this study, per our
original promise. However, on the third iteration, we will ask panelists if they are willing to continue iterating until they reach consensus. If enough panelists agree, we will continue the process until W rises to the desired level. (3) Following Schmidt's suggestion, we will stop iterating if the mean rankings for two successive rounds are not significantly different. We could measure this difference using the McNemar test, which is typically used in a repeated-measures situation in which each subject's response is elicited twice (pre-post test) [3].
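As an illustration of this stopping rule, the McNemar statistic for paired dichotomous responses can be computed directly. The dichotomization (for example, whether a panelist placed an item in the top half of the list in round t versus round t+1) is our assumption rather than part of Schmidt's guideline; the sketch uses the usual continuity correction.

```python
def mcnemar_chi2(b, c):
    """McNemar chi-square statistic with continuity correction.

    b = number of pairs that changed from 'yes' to 'no' between rounds;
    c = number of pairs that changed from 'no' to 'yes'.
    Compare against 3.84 (chi-square, 1 df, alpha = 0.05): smaller
    values suggest the two rounds are not significantly different,
    which in this design would end the iterations.
    """
    if b + c == 0:
        return 0.0  # no discordant pairs: the rounds are identical
    return (abs(b - c) - 1) ** 2 / (b + c)
```

Only the discordant pairs enter the statistic; concordant responses (unchanged between rounds) carry no information about change.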
At the end of this ranking phase, we will have eight ranked lists (two from each of the four panels) representing the priorities that each of the panels placed on the various factors affecting the practice of e-commerce in SSA. This rigorous process assures that the factors in the
list are the most important, and that the rankings are a valid indicator of the relative importance
of the various factors. Based on these results, we will be able to reassess our theoretical
observations from the literature and offer propositions on expected relationships between the factors affecting e-commerce in SSA.
Table 4
Applications of the Delphi method in the research process
Use of the Delphi method for forecasting and issue identification/prioritization can be
valuable in the early stages, particularly in selecting the topic and defining the research
question(s). A major contribution of the MISRC/SIM studies of key managerial IS concerns published in MIS Quarterly [5], for example, was to provide IS researchers with an
understanding of the most critical issues in IS management as seen by practicing IS executives,
thus contributing to the relevance of IS research to practitioners. The ranked issues in these
studies represented broad topic areas (e.g., "Building a responsive IT infrastructure") for future
research. In Delphi studies where the questions posed to experts are narrower and more specific
in terms of subject, the resulting ranked lists can guide the framing of specific questions. For
example, the top-ranked software project risk factor identified in the study by Schmidt et al., "Lack of top management commitment", could stimulate the formation of specific research questions related to this particular issue. Another risk factor identified in the Schmidt et al. study, "Conflict between user departments", points to the body of theoretical work related to situations
of conflict that arise when groups seek to preserve their vested interests (e.g., [26,31]) as a useful
lens for investigation of this issue.
Researchers can use the Delphi method in a number of ways related directly to theory
building. First, the rankings produced by a Delphi study can be of value in the initial stages of theory
development, helping researchers to identify the variables of interest and generate propositions.
For example, the study described in the previous section would not only identify which factors
experts perceive as important for e-commerce in Africa, but also which ones are viewed as more
important than others. In any study, researchers must select a parsimonious list of relevant
variables. The experts' rankings prioritize the perceived effects of factors, and help the
researchers to select the factors with the strongest effects.
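One simple way to operationalize that selection is to aggregate the individual experts' rankings by mean rank and keep the top k factors. The snippet below is an illustrative sketch; the factor names and expert data are hypothetical, and mean-rank aggregation is only one of several defensible aggregation rules.

```python
# Aggregate several experts' rankings by mean rank (lower = more important)
# and keep the k factors with the strongest aggregate rank.
from statistics import mean

def top_factors(rankings: list[dict], k: int) -> list[str]:
    """Return the k factors with the lowest mean rank across all experts."""
    factors = rankings[0].keys()
    mean_rank = {f: mean(r[f] for r in rankings) for f in factors}
    return sorted(factors, key=mean_rank.get)[:k]

# Hypothetical rankings from three panelists over four candidate factors
experts = [
    {"infrastructure": 1, "policy": 3, "skills": 2, "financing": 4},
    {"infrastructure": 2, "policy": 1, "skills": 3, "financing": 4},
    {"infrastructure": 1, "policy": 2, "skills": 4, "financing": 3},
]
print(top_factors(experts, 2))  # → ['infrastructure', 'policy']
```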
Second, a Delphi study can increase the generalizability of the
resulting theory. Because a Delphi study solicits information from experts who have a wide
range of experience, by inquiring about their experiences and opinions researchers significantly
extend the empirical observations upon which their initial theory is based, thus strengthening
the grounding of the theory and increasing the likelihood that the resulting theory will hold
across multiple contexts and settings.
A third benefit to theory building derives from asking experts to justify their reasoning.
This is an optional feature of Delphi studies; indeed, the very first Delphi study, conducted by
Dalkey, used it. Although few recent Delphi studies have taken advantage of this option, asking
respondents to justify their responses can be a valuable aid to understanding the causal
relationships between factors, an understanding that is necessary to build theory.
Fourth, a Delphi study can contribute to construct validity. Construct validity relies on a
clear definition of the construct. Delphi study designs, such as the example study, that ask
participants to validate their initial responses, to make sure that the researchers understand
the meanings of the list items submitted could contribute towards this goal. In addition, the
framing of construct definitions in alignment with definitions in common use by practitioners
also contributes towards consistency in the understandings of the construct by participants in
future studies as well as understandability by practitioners of the resulting theory. One of the
contributions of the Delphi study by Holsapple and Joshi [14], for example, is that it provides a
common language for discourse about knowledge management.
Although theory building has not been the main focus of much Delphi research, a
carefully designed study can not only be valuable for developing theory, but can also produce
theoretical research that is relevant to practice. Delphi studies, then, can contribute directly and
immediately to both theory and practice. They build theory through the design and rigor of the
study, while practitioners immediately have available to them lists of prioritized critical factors,
generated by experts, which they can apply to their individual situations.
The discussion in this section highlights the versatility of the Delphi method as a research
tool, a tool particularly well suited to new research areas and exploratory studies. Through this
discussion and detailed example of a Delphi study design, we hope to heighten awareness of the
utility of the method for different purposes in the theory-building process. In conclusion, we
encourage researchers to consider incorporating this tool in their personal repertoire of research
methods so that it is available to them to use as needed to accomplish their research objectives.
Acknowledgements
We thank Victor Mbarika for his help in the e-commerce study that we used as an
illustrative framework for this paper. We also thank Casey Cegielski for his valuable references
that gave us a literature base to study the Delphi method.
References
[1] R.H. Ament, Comparison of Delphi forecasting studies in 1964 and 1969, Futures, March
1970, p. 43.
[2]
[3]
[4] J.C. Brancheau, B.D. Janz, J.C. Wetherbe, Key issues in information systems management:
1994-95 SIM Delphi results, MIS Quarterly 20 (2), 1996, pp. 225-242.
[5] J.C. Brancheau, J.C. Wetherbe, Key issues in information systems management, MIS
Quarterly 11 (1), 1987, pp. 23-45.
[6] C.G. Cegielski, A Model of the Factors that Affect the Integration of Emerging Information
Technology Into Corporate Strategy, unpublished Doctoral Dissertation, University of
Mississippi, 2001.
[7] M.R. Czinkota, I.A. Ronkainen, International business and trade in the next decade: Report
from a Delphi study, Journal of International Business Studies 28 (4), 1997, pp. 827-844.
[8]
[9] A.L. Delbecq, A.H. Van de Ven, D.H. Gustafson, Group Techniques for Program Planning:
A Guide to Nominal Group and Delphi Processes, Scott, Foresman and Company, Glenview,
Illinois, 1975.
[10] N. Denzin, Y. Lincoln, Entering the field of qualitative research, in: N. Denzin, Y. Lincoln
(Eds.), The Landscape of Qualitative Research, SAGE, Thousand Oaks, California, 1998,
pp. 1-34.
[11] D.A. Dillman, Mail and Internet surveys: The Tailored Design Method, John Wiley &
Sons, New York, 2000.
[12] A. Dutta, The physical infrastructure for electronic commerce in developing nations:
Historical trends and the impact of privatization, International Journal of Electronic
Commerce 2 (1), 1997, pp. 61-83.
[13] S. Hayne, C. Pollard, A comparative analysis of critical issues facing Canadian information
systems personnel: A national and global perspective, Information & Management 38 (2),
2000, pp. 73-86.
[14] C.W. Holsapple, K.D. Joshi, Knowledge manipulation activities: Results of a Delphi study,
Information & Management 39 (6), 2002, pp. 477-490.
[15] J.E. Kendall, K.E. Kendall, S. Smithson, I.O. Angell, SEER: A divergent methodology
applied to forecasting the future roles of the systems analyst, Human Systems Management
11 (3), 1992, pp. 123-135.
[34] R.C. Schmidt, Managing Delphi surveys using nonparametric statistical techniques,
Decision Sciences 28 (3), 1997, pp. 763-774.
[35] R.C. Schmidt, K. Lyytinen, M. Keil, P. Cule, Identifying software project risks: An
international Delphi study, Journal of Management Information Systems 17 (4), 2001, pp.
5-36.
[36] S. Siegel, N.J. Castellan, Jr., Nonparametric Statistics for the Behavioral Sciences,
McGraw Hill, New York, 1988.
[37] B. Travica, Diffusion of electronic commerce in developing countries: The case of Costa
Rica, Journal of Global Information Technology Management 5 (1), 2002, pp. 4-24.
[38] UNECA, The process of developing national information and communications
infrastructure (NICI) in Africa, http://www.uneca.org/adf99/nici.htm, 1999.
[39] D. Viehland, J. Hughes, The future of the wireless application protocol, Proceedings of the
Eighth Americas Conference on Information Systems, Dallas, 2002, pp. 1883-1891.
[40] K. Weick, Organizational redesign as improvisation, in: G. Huber, W. Glick (Eds.),
Organizational Change and Redesign: Ideas and Insights for Improving Performance,
Oxford University Press, New York, 1993, pp. 346-379.
[41] P. Wolcott, L. Press, W. McHenry, S.E. Goodman, W. Foster, A framework for assessing
the global diffusion of the Internet, Journal of the Association for Information Systems 2
(6), 2001.