The Post-Positivist Approach To Theory Testing Using Case Studies
Guy Gable
Queensland University of Technology
Y704, QUT Gardens Point Campus
Brisbane, QLD, Australia
[email protected]
Abstract
Extensive theory testing refers to the joint conduct of theory testing and theory extension in a
single empirical study. One way of conducting extensive theory testing is through the case
study approach. While quantitative approaches to theory testing, such as the survey method,
focus on corroborating or falsifying theory, extensive theory testing aims to both
corroborate and extend theory at the same time. By employing extensive theory testing,
researchers can simultaneously develop a deeper understanding of an existing theory
situated within a context and generate new insights from the effort to test that
theory. Thus, extensive theory testing offers a useful alternative,
complementing quantitative approaches to theory testing. Although case studies have been
used for theory testing, the rationale and procedures of extensive theory testing have
not been adequately understood. In this paper, we clarify the rationale of extensive
theory testing and propose a framework for its conduct.
Keywords: Theory testing, qualitative research method, case study research, post-
positivism.
Introduction
The importance of theory testing in information systems (IS) research cannot be overstated. Whereas
researchers build theory to identify relationships between concepts and why such relationships exist (e.g.
Barratt et al. 2011; Ketokivi and Choi 2014), theory testing corroborates or falsifies the relationships by
evaluating what works, where, for whom, and under what conditions (e.g. Acton et al. 1991; Alexander
1958; Jenson et al. 2016; Miller and Tsang 2011; Rushmer et al. 2014; Ulriksen and Dadalauri 2016).
When relationships are not supported in theory testing, researchers can also learn from the
experience of the empirical investigation (Ridder 2017).
Theory can be tested using both quantitative (e.g. survey) and qualitative (e.g. case study) approaches.
However, most theory testing studies employ quantitative methods and, as a consequence, selectively
emphasize quantitative relationships in theory (Bitektine 2008; Ketokivi and Choi 2014). This dogma has
engendered scepticism toward qualitative approaches to theory testing (Burton-Jones and Lee 2017).
Overreliance on quantitative approaches tends to encourage a narrow focus on relationships that can
be verified only with respect to quantity, magnitude, or degree (Bitektine 2008).
This ‘overreliance’ on quantitative methods for testing IS theories has faced criticisms. Lee and
Baskerville (2003) argued that quantitative approaches to theory testing often emphasize statistical
generalisations but dismiss generalisations across contexts. Similarly, Argyris (1979) and Bitektine (2008)
posited that contextual factors, such as group dynamics and inter-group relations among members within
an organisation, are difficult to ascertain using quantitative methods. Markus and Robey (1988) noted
that testing emergent relationships requires examining dynamic processes and human intentions in
natural settings.
The case study approach has been used for theory testing in the IS discipline (Benbasat et al. 1987; Cavaye
1996; Lee 1989). It offers researchers opportunities to examine the phenomenon of interest in situ (Løkke
and Sørensen 2014; Miller and Tsang 2011). Further, case studies enable researchers to establish causal
claims that otherwise might not be possible through quantitative methods (e.g. Benbasat et al. 1987;
Cavaye 1996; Lee 1989; Paré 2001; Yin 2014) and provide additional insights into theoretical
relationships specific to contexts (Maxwell 2004; Miller and Tsang 2011).
Despite the use of case studies for theory testing, few guidelines have been provided on how to test
theories using case studies (Cavaye 1996; Miller and Tsang 2011; Paré 2001). A lack of explicit
understanding of the theory testing process makes it difficult to recognize and justify its merits and
limitations (Pan and Tan 2011). Further, existing studies that employ case studies for theory testing are
often from a positivist perspective (Bhattacherjee 2012; Paré 2001; Sarker and Lee 2002). Case study
theory testing from a post-positivist perspective is less understood. Positivism assumes the existence of
objective reality and that there can be law-like generalizable theories across contexts (Guba and Lincoln
1994; Maxwell 2004). By contrast, post-positivism assumes theories are influenced by cultural and social
contexts. By adopting a post-positivist perspective, researchers are more attuned to the socio-technical
systems in which humans and information technologies interact (Markus and Robey 1988).
We propose an extensive theory testing approach using case study. The goal of this paper is to clarify the
rationale and procedures of extensive theory testing using case study. Extensive theory testing aims to
both corroborate (or falsify) and extend theory in a single empirical study. Thus, extensive theory testing
entails corroborating or falsifying theoretical relationships (Irwin 2004; Johnston et al. 1999; Miller and
Tsang 2011), identifying new concepts or theoretical relationships (Barratt et al. 2011; Colquitt and
Zapata-Phelan 2007; Tsang 2013; Zhang and Gable 2017), and uncovering potential areas for future
theory development (Crowe et al. 2011; Pereira et al. 2013). By employing extensive theory testing,
researchers can simultaneously develop deeper understandings of an existing theory situated within a
context and generate new insights based on efforts to test the theory.
Extensive theory testing employs a post-positivist lens. Post-positivism enables a researcher to
test theories in their natural context. Ontologically, post-positivism relaxes both the rigid objectivity of
positivism and the subjectivity of interpretivism. Further, post-positivism supports the determination of
causal relationships between the independent concept(s) and dependent concept(s) in a given theory and
enables a researcher to study problems by examining the causes that affect results (Denzin and Lincoln
2011; Guba and Lincoln 1994). Thus, by applying a post-positivist lens to extensive theory testing, we
recognise an objective reality, but accept that our knowledge of that reality cannot be certain given the
contextual nature of IS research. There are therefore mechanisms responsible for the validity (or lack
thereof) of a theory's proposition that exist beyond what can be observed from a positivist viewpoint
(Miller and Tsang 2011; Ryan 2016).
By synthesizing the literature (Bitektine 2008; Johnston et al. 1999; Løkke and Sørensen 2014; Yin 2014),
we outline the rationale of extensive theory testing using case study, elucidate its main procedures, and
discuss the criteria for evaluating extensive theory testing research.
Theory testing involves establishing the validity (or lack thereof) of the relationship between the
independent concepts and dependent concepts in a theory's proposition within a selected sample or
population (Argyris 1979; Dul and Hak 2007; Sarker and Lee 2002). A theory's proposition is a causal
relational statement between the independent and dependent concepts in the theory (Dul and Hak 2007).
For example, a typical proposition can be stated in the form: if there is 'A' then there must be 'B',
where A and B are the independent and dependent concepts respectively.
Qualitative theory testing using case studies involves the process of ascertaining whether empirical
evidence in a case or group of cases supports (or does not support) a theory’s proposition(s) (Dul and Hak
2007). One of the key determinants for the use of case study for theory testing is investigating phenomena
in their natural context and where the researcher has little or no control (Cavaye 1996; Shanks and Parr
2003; Yin 2014). Using case studies for theory testing starts with identifying a set of testable propositions,
and operationalising the identified propositions by translating the abstract concepts into observable
concepts (hypotheses creation) (Bhattacherjee 2012; Johnston et al. 1999). A testable proposition is a
statement that describes the relationship between observable concepts. The existence or absence of the
concepts, and the causes that affect their existence are then compared against empirical data (Dul and
Hak 2007; Johnston et al. 1999).
In qualitative theory testing using case studies, not only are theoretical conditions or outcomes
considered; causal mechanisms proposed by the theories are also examined in their different contexts (Miller
and Tsang 2011). Causal mechanisms are pathways through which an outcome is achieved. A causal
mechanism can be a sequence of events or conditions that relate the independent concept(s) to the
dependent concept(s) (Lewis-Beck et al. 2004). Causal mechanisms may differ from one context to
another. The identification of causal mechanisms is central to theory testing using case studies (Goertz
2012; Miller and Tsang 2011) because it provides a stronger argument for the relationship between the
independent and dependent concepts. This is also important given the post-positivist lens adopted in this
paper (the theory testing process is discussed later in this paper).
Qualitative theory testing using case studies can go beyond merely confirming or falsifying a theory.
Rather, it can form the basis for theory expansion or refinement (Voss et al. 2002). Theory refinement is
concerned with the process of updating domain knowledge in light of new knowledge or cases (Buntine 1991).
During the course of theory testing, qualitative case study research offers a researcher the opportunity to
further investigate a phenomenon of interest in the event that the conditions specified in the existing
theory are not supported by empirical data. This process could result in the development of alternative
theories or explanations as to why the original theory was not supported. For example, consider the work
of Orlikowski (1992). Having started with an existing theory – which assumed that technology has a
deterministic impact on organisational properties (e.g. structure), Orlikowski (1992) went further to
develop three new theoretical models to examine the interaction between technology and organisation.
Qualitative theory testing using case studies can be conducted from both the positivist and post-positivist
perspective. Much theory testing within IS has been dominated by the positivist philosophical paradigm
(Paré 2001; Sarker and Lee 2002; Sarker and Lee 2003). A positivist approach to theory testing using case
study advocates: (a) formulating testable propositions, (b) selecting cases that align with the theoretical
propositions, (c) collecting data based on the selected case(s) and (d) comparing the observed pattern
with available data (Johnston et al. 1999; Sarker and Lee 2002). Thus, the positivist theory testing
approach aligns with the natural science approach where theoretical elements are considered to be
discrete or dichotomous (i.e. having two possible values such as ‘present’ or ‘absent’) (Dul and Hak 2007)
and reality is considered to be absolute.
A positivist view of theory testing using case studies aims only at verifying or falsifying a theory. As an
example, consider the work of Sarker and Lee (2002), who used a single, positivist case study
to test three competing theories of business process redesign (i.e. technocentric, sociocentric and
sociotechnical theories). The result of their research indicates that only one of the theories (the
sociotechnical theory of business process redesign) was supported by empirical data while the
technocentric and sociocentric theories were not supported. Thus, criticisms such as (a) selection bias and
(b) ambiguity of inferred hypothesis (Bitektine 2008, p. 161) are common in such studies. Selection bias
arises from selecting cases which the researcher feels are most likely to confirm or refute the existing
theory; while ambiguity in inferred hypothesis might arise due to the fact that positivism does not
consider contextual factors when validating the relationship between concepts in a theory. Thus, given
that IS research focuses on studying information technology (IT) within organisations (Benbasat et al.
1987), the supposed neutrality of IT is questionable: socio-technical systems can rarely be separated
from their social and technical contexts (Markus and Robey 1988; Orlikowski 1992).
The post-positivist approach to theory testing using case studies emanated as a reaction to the failures of
positivism. As mentioned, positivist researchers seem to conceive theory testing as involving ‘a specified
and near-conclusive procedure for falsification or verification’ (Løkke and Sørensen 2014, p. 68).
However, a post-positivist lens on theory testing using case studies conceives theory testing as a more
inclusive approach – which involves not only identifying causal relationships, but also identifying
circumstances where such relationships are active (i.e. the context of the relationships) (Smith 2010). By
adopting a post-positivist perspective, a researcher acknowledges the fallibility of human judgement and
recognises that theoretical propositions in the IS discipline cannot be separated from their context
(Maxwell 2004; Miller and Tsang 2011; Paré 2001).
Post-positivism associates a level of uncertainty to what can be known about a theory’s proposition
especially in the association between independent and dependent concepts (Guba and Lincoln 1994;
Shanks and Parr 2003). A post-positivist lens on theory testing emphasises that researchers clearly
identify and state their research perspectives and assumptions. It also requires that researchers
acknowledge, in their data analysis, the uncertainty associated with knowledge of the theoretical elements.
Another strength of the post-positivist approach is its consideration of human factors in theory testing
using case studies. To ascertain causal effects in any explanation of the validity (or lack thereof) of a
theory, human factors have to be accounted for, because IS processes and outcomes are likely to be
influenced by the interests or goals of the people who operate the systems.
The post-positivist approach to theory testing using case study can be used to study patterns of behaviours
of individuals at the highest level of social organisation. By studying behavioural patterns, researchers
can identify the intentions or unintended consequences of human actions that affect the outcome of a
given theoretical relationship, which can lead to the explanation of causality. As an example of the
interaction between causal relationships and explanations, consider the case of the people-determined
theory (Markus 1983). The people-determined theory predicted that replacing individuals who are
resistant to the financial information system (FIS) or co-opting them with other users who are less
resistant to the system, would reduce resistance to the FIS (Markus 1983). Markus reported that GTC (a
pseudonym) implemented a rotation policy (with the hope of reducing resistance to the FIS) in which
accountants were moved to and from resistant divisions. However, Markus further noted that an
accountant, who happened to be one of the system's designers and an advocate of the FIS, started
resisting the system after he was moved from the corporate accounting division to the divisional
controller's office. By resisting the system after being moved to another division, the accountant
demonstrated the impact of context and the influence of behavioural patterns on theoretical relationships.
technology), there are bound to be misconceptions of the dynamics of socio-technical relationships within
organizations.
The role of case study in extensive theory testing can be better understood by considering the theory
continuum. The theory continuum involves the different stages of the life cycle of a theory. While there
are continuing efforts to better define the theory continuum (Colquitt and Zapata-Phelan 2007), the
various stages include: theory building, theory development, theory testing and theory
extension/refinements (Ridder 2017; Voss et al. 2002). Within the theory continuum, theory can serve as
both a final product and as a continuum. A theory plays the role of a ‘final product’ when the aim of the
researcher is to identify and describe relationships between independent concepts and dependent
concepts (e.g. in theory building). This process is accomplished by examining empirical data within a
context. As a continuum, the aim of the researcher is not just to identify relationships between concepts,
but to identify failures in the relationships that cannot be explained by the existing theory. Thus, as a
continuum, theory serves as an input to the research process (e.g. in theory testing) – which could
subsequently lead to theory extension.
The implications of extensive theory testing using case study can be explained using the generativity of
digital technology artefacts within the digital innovation literature. Generativity refers to the overall
capacity of a system to produce unprompted changes driven by large, varied and uncoordinated audiences
(Zittrain 2006). By system we mean technical systems (a subset of the socio-technical system). Digital
innovation has become a household concept within information systems as physical artefacts are
augmented with or, in some cases, replaced by digital components (Lyytinen et al. 2016). For instance,
the creation by Nikon engineers of digital cameras integrated with Google Maps exemplifies the
generative properties of physical artefacts. Such cameras, in addition to their use for photographic
purposes, can also be used as navigation systems. While the digital innovation theory has linked
generativity with physical artefacts and with digital innovation, examining the digital properties of
artefacts does not paint a complete picture of the influence of generativity on digital innovation. A system
(e.g. a digital camera) consists of actors who contribute with their creativity and skills, and of suitable
artefacts that help those actors accomplish their goals. Thus, the generative properties that influence
digital innovation cannot only be associated with the physical artefacts, but also with the human actors
that interface with the physical artefacts to bring about their generativity.
scenario, the aim could be to test such theories to ascertain if the theory could be generalized beyond its
immediate boundary.
Testable propositions are statements that can be investigated empirically (Gregor 2006). According to
Goode and Hatt (1952), testable propositions are imaginative ideas, a statement of a solvable problem or
any thought that can be subjected to empirical tests to ascertain its validity. Propositions are defined at
the theory level (i.e. they operate at the realm of the theory) (Dul and Hak 2007). Khan (2011) identified
four characteristics that a testable proposition should have: (a) its concepts should be observable and
measurable; (b) it should provide a solution to the identified problem in the problem space and be
conceptually comprehensible; (c) it should relate to the existing body of knowledge; and (d) it should be
logically comprehensive. In using case studies for theory testing, the testable propositions should be
converted into relational statements of the form 'If not A, then not B', where A and B are the antecedents
and outcomes respectively. These relational statements, which define the 'necessary conditions' and the
expected outcomes, are known as hypotheses (Acton et al. 1991; Dul and Hak 2007). A hypothesis
should indicate the expected outcome for the chosen research problem (Rajasekar et al. 2006). The
hypothesis operates at the realm of the study, which takes into consideration the context and boundary
within which the theory operates.
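The logic of such a necessary-condition hypothesis can be sketched programmatically. The following is a minimal sketch only, with hypothetical case data: it checks a hypothesis of the form 'if not A, then not B', which is falsified by any case in which the outcome B is observed without the antecedent A.

```python
# Hedged sketch: checking a necessary-condition hypothesis ("A is necessary
# for B") against a set of cases. Case names and concepts are hypothetical.

def necessary_condition_holds(cases):
    """Return (holds, counterexamples): the hypothesis fails for any case
    where outcome B is present but antecedent A is absent."""
    counterexamples = [c for c in cases if c["B"] and not c["A"]]
    return (len(counterexamples) == 0, counterexamples)

cases = [
    {"case": "OrgAlpha", "A": True,  "B": True},   # A and B present: consistent
    {"case": "OrgBeta",  "A": False, "B": False},  # neither present: consistent
    {"case": "OrgGamma", "A": True,  "B": False},  # A without B: still consistent
]

holds, counterexamples = necessary_condition_holds(cases)
print(holds)  # True: no case exhibits B without A
```

A single counterexample (B observed without A) would falsify the hypothesized necessary condition, mirroring the falsification logic of case-based theory testing.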
This stage of the research process involves the identification of the causal mechanisms which might affect
the hypothesized relationship in the theory under study (Miller and Tsang 2011). Personal actions or
behavioral patterns to phenomena can be key indicators of causal mechanisms. Given the possibility of
having different causes for different hypothesized outcomes, care must be taken at this stage to select
mechanisms that are related to the context in which the hypothesis is considered.
One of the properties of qualitative data is that it cannot be used in correlational analysis. Thus, for
hypotheses to be tested using qualitative data, such hypotheses have to be stated so that they
propose the existence of a condition or an observable characteristic of the object of study (Bitektine
2008). Consider the technocentric theory of business process redesign (Sarker and Lee 2002). Sarker and
Lee proposed that: successful design (and installation) of enabling IT guarantees the effectiveness of
business process redesign (and the effectiveness of the implementation of redesigned business processes)
(Sarker and Lee 2002, p. 9). To operationalize the theoretical proposition, they translated abstract
concepts contained in the proposition (e.g. enabling IT) into more concrete and directly observable
concepts. For instance, 'enabling IT' was taken to mean the installation of computerized BPR tools for
process mapping such as 'Process Maker', 'Process Charter', 'COSMO' and the Pen Analysis Intelligent
Whiteboard (Sarker and Lee 2002, p. 12). Table 1 summarizes the guidelines for establishing the
theoretical base.
and attendance at events (Voss et al. 2002; Yin 2014). To prepare for data collection, a researcher needs
to first design a case study protocol. The case study protocol is a formal document capturing the entire set
of procedures involved in the data collection and analysis phase of the case study research project (Mills et
al. 2010). A well-designed case study protocol will enhance the validity and reliability of the study (Yin
2014). Similarly, the case study protocol could serve as criteria for evaluating the research process. During
theory testing using case studies, an explicit description of the protocol elements is important. This is to
ensure the rigour of the process, and to give the readers and reviewers of the research a better sense of the
logic that supports the data collection and analysis of the study (Ketokivi and Choi 2014; Mills et al.
2010).
The different elements that should be contained in the case study protocol include a set of questions that
are expected to be addressed with respect to the stated hypotheses (Johnston et al. 1999). Each question is
expected to inform the researcher on what form of data is needed to address the aims of the study
(Johnston et al. 1999; Voss et al. 2002). When developing the protocol, the researcher should also specify
how the data will be collected and what appropriate techniques are to be used to analyze the data (Yin
2014).
Theory-based or operational construct sampling: Involves the selection of data sources (e.g.
interviewees, types of document) to directly test a particular theory of interest (Patton 2002). In this
type of sampling, the researcher simultaneously collects, codes and analyses the data in order to decide
the next data to be collected. Although theory-based sampling is mostly used in theory building studies,
it is a good fit for extensive theory testing: it offers the researcher the opportunity to identify
anomalies in the existing theory and then decide what further data need to be collected to explain such
anomalies. This process leads to theory extension.

Chain sampling: Involves the selection of additional data sources identified by prior data sources. For
example, an interviewee could identify another employee as a useful source of information on a subject
area of interest (Patton 2002; Sarker and Lee 2002).

Opportunistic sampling: Involves the use of emergent opportunities to collect additional data, for
example casual discussion with employees during lunch breaks and other recreational periods (Sarker and
Lee 1998).
One of the qualitative analytical techniques for testing theories is pattern matching (Pereira et al. 2013;
Yin 2014). Pattern matching is a technique used to compare expected patterns from theory with observed
patterns from empirical data (Almutairi et al. 2014). The pattern matching technique uses the
confirmation/falsification logic (Almutairi et al. 2014; Johnston et al. 1999; Yin 2014). The
confirmation/falsification logic involves a researcher articulating the theories into testable hypotheses,
which are later compared against empirical data to confirm their fit (Barratt et al. 2011; Dul and Hak
2007).
When testing theories using qualitative data, an expected pattern is formulated prior to data collection
based on the hypothesis to be tested (Hak and Dul 2010). The formulated pattern could be in the form of
an expected phenomenon, an activity, or a set of activities that must be carried out for an outcome to
occur (Bitektine 2008). Likewise, in using the pattern matching technique, competing theories are put forward.
The competing theories are intended to predict pattern(s) that are contrary to the dominant theory of
interest (Hyde 2000; Yin 2014). Data is collected from the selected case(s) and compared to predictions
from the dominant theory as well as predictions from the competing theory (Hyde 2000; Yin 2014).
If the empirical data matches the predictions of the dominant theory more closely than those of the
competing theory, support is demonstrated for the dominant theory (Almutairi et al. 2014); otherwise, the
propositions of the dominant theory might be modified (Yin 2014). A theory cannot be completely
rejected or supported based on a single case. However, the confirmation of a theory in a single case
enhances confidence in the validity of the theory. Subsequent confirmations or disconfirmation of the
theoretical propositions could lead to generalizing the theory to a different context or rejecting the theory
based on empirical evidence (Almutairi et al. 2014; Hyde 2000). Table 5 outlines the procedures for
conducting pattern matching.
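As an illustration only (the pattern elements and observed data below are hypothetical, not drawn from any cited study), the comparison step of pattern matching can be sketched in Python: expected patterns from a dominant and a competing theory are each scored against the pattern observed in a case.

```python
# Hedged sketch of the pattern matching comparison: expected patterns
# (from a dominant and a competing theory) are scored against the pattern
# observed in the empirical data. All pattern elements are hypothetical.

def match_score(expected, observed):
    """Fraction of the expected pattern's elements found in the observed data."""
    return sum(1 for element in expected if element in observed) / len(expected)

dominant_pattern  = {"top_management_support", "user_participation", "process_change"}
competing_pattern = {"tool_installation", "process_change"}
observed_pattern  = {"top_management_support", "user_participation", "process_change"}

dominant_fit  = match_score(dominant_pattern, observed_pattern)   # 1.0
competing_fit = match_score(competing_pattern, observed_pattern)  # 0.5
print(dominant_fit > competing_fit)  # True: the data favour the dominant theory
```

In practice the comparison is interpretive rather than numeric, but the sketch captures the underlying confirmation/falsification logic of matching predicted against observed patterns.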
In theory testing using qualitative data, one of the best ways to learn about the data is to classify it
based on identifiable concepts or characteristics of the individual cases considered during data collection.
Originally developed for use in quantitative analysis, the classification and clustering analysis technique
involves identifying the observable concepts (specified in the testable proposition) and the mechanisms
that influence the relationship between the independent and dependent concepts in a theory. The identified
concepts are then matched with the individual cases examined in the theory testing study. Further, the
concepts are transformed into binary data (i.e. 0 or 1), where zero (0) and one (1) indicate the absence
and presence of a concept in a case respectively.
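To make the binary coding step concrete, the following minimal sketch (with hypothetical concepts and cases) codes concept presence/absence per case and compares cases by the number of concepts on which they differ; cases with identical profiles would fall into the same cluster.

```python
# Hedged sketch: coding concept presence (1) / absence (0) per case and
# comparing cases by Hamming distance. Concepts and cases are hypothetical.

concepts = ["enabling_IT", "management_support", "user_resistance"]

cases = {
    "CaseA": [1, 1, 0],  # enabling IT and management support observed
    "CaseB": [1, 1, 0],  # identical concept profile to CaseA
    "CaseC": [0, 0, 1],  # only user resistance observed
}

def hamming(u, v):
    """Number of concepts on which two cases differ."""
    return sum(a != b for a, b in zip(u, v))

print(hamming(cases["CaseA"], cases["CaseB"]))  # 0: same cluster
print(hamming(cases["CaseA"], cases["CaseC"]))  # 3: maximally dissimilar
```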
Process tracing has become one of the fundamental tools of qualitative data analysis, especially in
within-case analysis (Collier 2011). Process tracing is defined as the systematic examination of
evidence selected and analyzed in light of the research questions and hypotheses posed by the researcher
(Collier 2011, p. 823). The use of process tracing has been argued to contribute to causal
inference in qualitative theory testing studies through the discovery of causal process observations (CPO)
(George and Bennett 2005). CPOs are non-comparable observations related to the link between cause
and outcome. A CPO contributes an insight or piece of data that provides information about the context,
process or causes that affect theoretical relationships. Gerring (2007) noted that the aim of process
tracing is to understand the processes linking the different factors (independent concepts) to the expected
outcome (dependent concepts) (Bailey and Jackson 2003; Gerring 2007).
The first step in extending an existing theory whose hypothesized relationship cannot be explained by the
empirical data is to detect and document the anomaly. This involves identifying things of
interest that the existing theory failed to explain when compared with the empirical data (Burawoy 1998).
We recommend that this process should follow from the data analysis during theory testing. In detecting
the anomaly in the existing theory, the researcher should look at the existing theory through the lens of
new developments within the bounds of the theory. For instance, consider the case of modularity of
physical products. Researchers have argued that modularity influences product innovation (Baldwin and
Clark 2014; Ethiraj et al. 2008; Sanchez and Shibata 2018). However, the digitization of physical products
and the subsequent generativity of digital artefacts have added a new dimension to the theory of
modularity as it relates to product innovation. Innovation of physical products is not only influenced by
their modular architecture but also by the generative properties of the digital artefacts attached to the
physical products (Henfridsson and Bygstad 2013; Lyytinen et al. 2017; Yoo et al. 2012; Zittrain 2006).
Thus, anomalies within the modularity theory could be identified by considering new developments
within digital artefacts.
The researcher at this point should engage in further dialogue and consultation with stakeholders within
the empirical setting. By using qualitative sources of data collection – such as interviews, the researcher
aims at discovering further knowledge and relationships. Interviews are one of the recommended
approaches for establishing the relationship between the observed outcomes and the antecedents within
the empirical context. Grunig (2002) recommends starting off by asking interview participants high level
questions about their knowledge of the phenomena under investigation without mentioning specific
characteristics of the phenomena. This approach will help the researcher to assess why people assess a
given relationship the way they do (Grunig 2002). The interview data can also be complemented by
documents and observations. If the researcher’s access to new evidence is limited or terminated, the
researcher can draw from the existing knowledge/theory base. This could include revisiting the already
collected data for insights or patterns that might have been overlooked during the theory testing phase.
The additional data collected through interviews, participant observations and in some cases documents,
is then analyzed and triangulated to ensure the validity of the information. The pattern matching and
process tracing methods (discussed earlier) can also be employed to identify the relationship between
antecedents and outcomes. This gives the researcher the opportunity to provide a better understanding
and explanation of the relationship which could not be accounted for in the original theory.
Based on the outcome of the data analysis, the theory’s proposition can then be reformulated to reflect the
causes and relationships that better explain the anomalies not considered in the original theory (Burawoy
1998; Ridder 2017). At this stage, the researcher focuses on further understanding the phenomena of
interest within a context and on extending and codifying the theory in light of the patterns observed
in the new evidence.
Discussion
This discussion reflects on what we believe to be some of the rationale for testing theories with case
studies and the need for extensive theory testing in information systems.
One such rationale concerns the distinction between variance and process theories (Jones et al. 2011;
Shanks et al. 2012). While 'variance theories (which are mostly aligned with quantitative methods) focus
on the relationship between the values of attributes of constructs' by analyzing correlations between
those attribute values (Shanks et al. 2012, p. 2), process theories focus on event chains and on
understanding the processes that link the relevant factors to outcomes (Gerring 2007; Shanks et al.
2012; Ulriksen and Dadalauri 2016). The evaluation of process theories is therefore closely associated
with qualitative case studies (e.g. longitudinal case studies), as such theories are problematic to
evaluate using quantitative approaches such as surveys or experiments (Shanks et al. 2012; Ulriksen and
Dadalauri 2016).
Conclusion
Theory testing does not always result in a clear rejection or confirmation of a theory. A hypothesized
relationship or causal effect may not be borne out by the empirical data within a given context. At this
point, the case study approach offers the researcher an opportunity to investigate the phenomena further
by collecting context-specific information through interviews and other sources of data (observation,
documents). This process can then result in a better understanding of the phenomena under investigation.
Extensive theory testing using case studies has not been adequately discussed in the extant IS literature.
In this study, we explored the potential value of extensive theory testing using case studies in the IS
discipline. We argued that a post-positivist approach to extensive theory testing using case studies
offers a promising means of ascertaining causal relationships, and the context dependencies in the
relationship between the independent and dependent variables, in a theory. We also argued that case
studies can be used to test theories of complex socio-technical phenomena, especially when the phenomena
involved are context dependent and cannot be readily explained through quantitative relationships.
Further, we explored the procedures for conducting extensive theory testing using case studies in
information systems and provided guidelines on how to proceed. Finally, we clarified the rationale for
testing theories with case studies and presented actionable steps for implementing a qualitative case
study design for theory testing. This study will help researchers better understand and implement
extensive theory testing using case studies within the IS discipline.
References
Acton, G. J., Irvin, B. L., and Hopkins, B. A. 1991. "Theory-Testing Research: Building the Science," ANS.
Advances in Nursing Science (14:1), pp. 52-61.
Alexander, P. 1958. "Theory-Construction and Theory-Testing," The British Journal for the Philosophy of
Science (9:33), pp. 29-38.
Almutairi, A. F., Gardner, G. E., and McCarthy, A. 2014. "Practical Guidance for the Use of a Pattern‐
Matching Technique in Case‐Study Research: A Case Presentation," Nursing & Health Sciences
(16:2), pp. 239-244.
Argyris, C. 1979. "Using Qualitative Data to Test Theories," Administrative Science Quarterly (24:4), pp.
672-679.
Bacharach, S. B. 1989. "Organizational Theories: Some Criteria for Evaluation," The Academy of
Management Review (14:4), pp. 496-515.
Bailey, D. M., and Jackson, J. M. 2003. "Qualitative Data Analysis: Challenges and Dilemmas Related to
Theory and Method," American Journal of Occupational Therapy (57:1), pp. 57-65.
Baldwin, C. Y., and Clark, K. B. 2014. Design Rules the Power of Modularity. Cambridge: MIT Press.
Barratt, M., Choi, T. Y., and Li, M. 2011. "Qualitative Case Studies in Operations Management: Trends,
Research Outcomes, and Future Research Implications," Journal of Operations Management
(29:4), pp. 329-342.
Benbasat, I., Goldstein, D. K., and Mead, M. 1987. "The Case Research Strategy in Studies of Information
Systems," MIS Quarterly (11:3), pp. 369-386.
Bhattacherjee, A. 2012. "Social Science Research: Principles, Methods, and Practices," A. Bhattacherjee
(ed.). Minneapolis: Open Textbook Library.
Bitektine, A. 2008. "Prospective Case Study Design - Qualitative Method for Deductive Theory Testing,"
Organizational Research Methods (11:1), pp. 160-180.
Buntine, W. 1991. "Theory Refinement on Bayesian Networks," Proceedings of the Seventh conference on
Uncertainty in Artificial Intelligence: Morgan Kaufmann Publishers Inc., pp. 52-60.
Burawoy, M. 1998. "The Extended Case Method," Sociological Theory (16:1), pp. 4-33.
Burton-Jones, A., and Lee, A. S. 2017. "Thinking About Measures and Measurement in Positivist
Research: A Proposal for Refocusing on Fundamentals," Information Systems Research (28:3),
pp. 451-467.
Burton-Jones, A., McLean, E. R., and Monod, E. 2011. "On Approaches to Building Theories: Process,
Variance and Systems," Working Paper, Sauder School of Business, University of British Columbia,
Canada.
Cavaye, A. L. M. 1996. "Case Study Research: A Multi-Faceted Research Approach for IS," Information
Systems Journal (6:3), pp. 227-242.
Chinn, L. P. 1986. "What Does “Theory Testing Research” Mean?," Advances in Nursing Science (9:1), p.
viii.
Collier, D. 2011. "Understanding Process Tracing," PS: Political Science & Politics (44:4), pp. 823-830.
Colquitt, J. A., and Zapata-Phelan, C. P. 2007. "Trends in Theory Building and Theory Testing: A Five-
Decade Study of the Academy of Management Journal," Academy of Management Journal
(50:6), pp. 1281-1303.
Crowe, S., Cresswell, K., Robertson, A., Huby, G., Avery, A., and Sheikh, A. 2011. "The Case Study
Approach," BMC Medical Research Methodology (11:1), pp. 100-100.
Denzin, N. K., and Lincoln, Y. S. 2011. The Sage Handbook of Qualitative Research, (4th ed.). Thousand
Oaks: Sage.
Dubé, L., and Paré, G. 2003. "Rigor in Information Systems Positivist Case Research: Current Practices,
Trends, and Recommendations," MIS Quarterly (27:4), pp. 597-636.
Dubin, R. 1978. Theory Building, (Revised ed.). New York: Free Press.
Dul, J., and Hak, T. 2007. Case Study Methodology in Business Research. Routledge.
Ethiraj, S. K., Levinthal, D., and Roy, R. R. 2008. "The Dual Role of Modularity: Innovation and
Imitation," Management Science (54:5), pp. 939-955.
George, A. L., and Bennett, A. 2005. Case Studies and Theory Development in the Social Sciences. MIT
Press.
Gerring, J. 2007. Case Study Research: Principles and Practices. New York: Cambridge University Press.
Gerring, J. 2013. "The Case Study: What It Is and What It Does," in The Oxford Handbook of
Comparative Politics. Oxford University Press.
Gioia, D. A., and Pitre, E. 1990. "Multiparadigm Perspectives on Theory Building," Academy of
management review (15:4), pp. 584-602.
Goertz, G. 2012. "Case Studies, Causal Mechanisms, and Selecting Cases," Unpublished manuscript,
Version (5).
Goode, W., and Hatt, P. 1952. Methods in Social Science. New York: McGraw-Hill.
Gregor, S. 2006. "The Nature of Theory in Information Systems," MIS Quarterly (30:3), p. 611.
Grunig, J. E. 2002. Qualitative Methods for Assessing Relationships between Organizations and Publics.
Gainesville, FL: The Institute for Public Relations, Commission on Public Relations Measurement
and Evaluation.
Guba, E. G., and Lincoln, Y. S. 1994. "Competing Paradigms in Qualitative Research," in Handbook of
Qualitative Research. Thousand Oaks, CA, US: Sage Publications, Inc, pp. 105-117.
Hak, T., and Dul, J. 2010. "Theory-Testing with Cases," in Encyclopedia of Case Study Research, pp. 938-
943.
Henfridsson, O., and Bygstad, B. 2013. "The Generative Mechanisms of Digital Infrastructure Evolution,"
MIS Quarterly (37:3), p. 907.
Hirschman, E. C. 1986. "Humanistic Inquiry in Marketing Research: Philosophy, Method, and Criteria,"
Journal of Marketing Research (23:3), pp. 237-249.
Houghton, C., Murphy, K., Shaw, D., and Casey, D. 2015. "Qualitative Case Study Data Analysis: An
Example from Practice," Nurse Researcher (22:5), pp. 8-12.
Hyde, K. F. 2000. "Recognising Deductive Processes in Qualitative Research," Qualitative Market
Research: An International Journal (3:2), pp. 82-90.
Irwin, T. J. B. 2004. "Testing and Extending Theory in Strategic Information Systems Planning through
Literature Analysis," Information Resources Management Journal (IRMJ) (17:4), pp. 20-48.
Jenson, I., Leith, P., Doyle, R., West, J., and Miles, M. P. 2016. "Testing Innovation Systems Theory Using
Qualitative Comparative Analysis," Journal of Business Research (69:4), pp. 1283-1287.
Johnston, W. J., Leach, M. P., and Liu, A. H. 1999. "Theory Testing Using Case Studies in Business-to-
Business Research," Industrial Marketing Management (28:3), pp. 201-213.
Ketokivi, M., and Choi, T. 2014. "Renaissance of Case Research as a Scientific Method," Journal of
Operations Management (32:5), pp. 232-240.
Keutel, M., Michalik, B., and Richter, J. 2014. "Towards Mindful Case Study Research in IS: A Critical
Analysis of the Past Ten Years," European Journal of Information Systems (23:3), pp. 256-272.
Khan, J. A. 2011. Research Methodology. New Delhi: APH Publishing Corporation.
Kuhn, T. S. 2012. The Structure of Scientific Revolutions, (4th ed.). Chicago, IL: University of Chicago
Press.
Lee, A. S. 1989. "A Scientific Methodology for MIS Case Studies," MIS Quarterly (13:1), pp. 33-50.
Lee, A. S., and Baskerville, R. L. 2003. "Generalizing Generalizability in Information Systems Research,"
Information Systems Research (14:3), pp. 221-243.
Lewis-Beck, M. S., Bryman, A., and Liao, T. F. 2004. The Sage Encyclopedia of Social Science Research
Methods. Thousand Oaks, Calif: Sage.
Løkke, A.-K., and Sørensen, P. D. 2014. "Theory Testing Using Case Studies," Electronic Journal of
Business Research Methods (12:1), pp. 66-74.
Lyytinen, K., Sørensen, C., and Tilson, D. 2017. "Generativity in Digital Infrastructures: A Research Note,"
in The Routledge Companion to Management Information Systems. Taylor and Francis, pp. 253-
275.
Lyytinen, K., Yoo, Y., and Boland Jr, R. J. 2016. "Digital Product Innovation within Four Classes of
Innovation Networks," Information Systems Journal (26:1), pp. 47-75.
Markus, M. 1983. "Power, Politics, and MIS Implementation," Communications of the ACM (26:6), pp.
430-444.
Markus, M. L., and Robey, D. 1988. "Information Technology and Organizational Change: Causal
Structure in Theory and Research," Management Science (34:5), pp. 583-598.
Maxwell, J. A. 2004. "Using Qualitative Methods for Causal Explanation," Field Methods (16:3), pp. 243-
264.
Meyer, C. B. 2001. "A Case in Case Study Methodology," Field Methods (13:4), pp. 329-352.
Miller, K. D., and Tsang, E. W. K. 2011. "Testing Management Theories: Critical Realist Philosophy and
Research Methods," Strategic Management Journal (32:2), pp. 139-158.
Mills, A., Durepos, G., and Wiebe, E. 2010. Encyclopedia of Case Study Research. Thousand Oaks,
California: SAGE Publications.
Neuman, L. W. 2014. Social Research Methods: Qualitative and Quantitative Approaches, (7th ed.). New
Delhi: Pearson Education.
Offermann, P., Blom, S., Schönherr, M., and Bub, U. 2010. "Artifact Types in Information Systems Design
Science – a Literature Review," in Global Perspectives on Design Science Research: 5th
International Conference, Desrist 2010, St. Gallen, Switzerland, June 4-5, 2010. Proceedings., R.
Winter, J.L. Zhao and S. Aier (eds.). Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 77-92.
Orlikowski, W. J. 1992. "The Duality of Technology: Rethinking the Concept of Technology in
Organizations," Organization Science (3:3), pp. 398-427.
Pan, S. L., and Tan, B. 2011. "Demystifying Case Research: A Structured–Pragmatic–Situational (Sps)
Approach to Conducting Case Studies," Information and Organization (21:3), pp. 161-176.
Paré, G. 2001. "Using a Positivist Case Study Methodology to Build and Test Theories in Information
Systems: Illustrations from Four Exemplary Studies."
Paré, G., and Elam, J. J. 1997. "Using Case Study Research to Build Theories of IT Implementation," in
Information Systems and Qualitative Research. Springer, pp. 542-568.
Patton, M. Q. 2002. Qualitative Research and Evaluation Methods, (3rd ed.). Thousand Oaks, Calif:
Sage.
Pereira, R., Almeida, R., and da Silva, M. M. 2013. "How to Generalize an Information Technology Case
Study," Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 150-164.
Popper, K. R. 2002. The Logic of Scientific Discovery. London: Routledge Classics.
Rajasekar, S., Philominathan, P., and Chinnathambi, V. 2006. "Research Methodology," available at:
https://arxiv.org/abs/physics/0601009 (accessed 01 September 2017).
Ridder, H.-G. 2017. "The Theory Contribution of Case Study Research Designs," Business Research
(10:2), pp. 281-305.
Rushmer, R. K., Hunter, D. J., and Steven, A. 2014. "Using Interactive Workshops to Prompt Knowledge
Exchange: A Realist Evaluation of a Knowledge to Action Initiative," Public Health (128:6), pp.
552-560.
Ryan, G. S. 2016. "Enhancing Nursing Student Success: A Critical Realist Framework of Modifiable
Factors," Archives of Nursing Practice Care (2:1), pp. 57-70.
Ryan, G. S., and Rutty, J. 2018. "Post-Positivist, Critical Realism: Philosophy, Methodology and Method
for Nursing Research," Nurse Researcher (in press).
Sanchez, R., and Shibata, T. 2018. "Modularity Design Rules for Architecture Development: Theory,
Implementation, and Evidence from Development of the Renault-Nissan Alliance 'Common Module Family'
Architecture." St. Louis: Federal Reserve Bank of St Louis.
Sarker, S., and Lee, A. S. 1998. "Using a Positivist Case Research Methodology to Test a Theory About IT-
Enabled Business Process Redesign," Proceedings of the International Conference on
Information Systems, Helsinki, Finland: Association for Information Systems, pp. 237-252.
Sarker, S., and Lee, A. S. 2002. "Using a Positivist Case Research Methodology to Test Three Competing
Theories-in-Use of Business Process Redesign," Journal of the Association for Information
Systems (2:1), p. 7.
Sarker, S., and Lee, A. S. 2003. "Using a Case Study to Test the Role of Three Key Social Enablers in ERP
Implementation," Information & Management (40:8), pp. 813-829.
Seale, C. 1997. "Ensuring Rigour in Qualitative Research," European Journal of Public Health (7:4), pp.
379-384.
Selene Xia, B., and Gong, P. 2014. "Review of Business Intelligence through Data Analysis,"
Benchmarking: An International Journal (21:2), pp. 300-311.
Shanks, G., Bekmamedova, N., and Johnston, R. 2012. "Exploring Process Theory in Information Systems
Research," Proceedings of the Information Systems Foundations Workshop, ANU, Canberra.
Shanks, G. G., and Parr, A. N. 2003. "Positivist, Single Case Study Research in Information Systems: A
Critical Analysis," 11th European Conference on Information Systems (ECIS, 2003), Naples,
Italy, pp. 16-21.
Smith, M. 2010. "Testable Theory Development for Small-N Studies: Critical Realism and Middle-Range
Theory," International Journal of Information Technologies and Systems Approach (IJITSA)
(3:1), pp. 41-56.
Tsang, E. W. K. 2013. "Case Study Methodology: Causal Explanation, Contextualization, and Theorizing,"
Journal of International Management (19:2), pp. 195-202.
Ulriksen, M. S., and Dadalauri, N. 2016. "Single Case Studies and Theory-Testing: The Knots and Dots of
the Process-Tracing Method," International Journal of Social Research Methodology (19:2), pp.
223-239.
Voss, C., Tsikriktsis, N., and Frohlich, M. 2002. "Case Research in Operations Management,"
International Journal of Operations & Production Management (22:2), pp. 195-219.
Walsham, G. 2006. "Doing Interpretive Research," European journal of information systems (15:3), pp.
320-330.
Weber, R. 2003. "Theoretically Speaking," MIS Quarterly (27:3), pp. III-XII.
Weber, R. 2012. "Evaluating and Developing Theories in the Information Systems Discipline," Journal of
the Association for Information Systems (13:1), pp. 1-30.
Yin, R. K. 2011. "Case Study Research: Design and Methods," Modern Language Journal (95), p. 474.
Yin, R. K. 2014. Case Study Research: Design and Methods, (Fifth ed.). Los Angeles: Sage.
Yoo, Y., Boland, R., Lyytinen, K., and Majchrzak, A. 2012. "Organizing for Innovation in the Digitized
World," Organization Science (23:5), pp. 1398-1408.
Zhang, M., and Gable, G. G. 2017. "A Systematic Framework for Multilevel Theorizing in Information
Systems Research," Information Systems Research (28:2), pp. 203-224.
Zittrain, J. L. 2006. "The Generative Internet," Harvard Law Review (119:7), pp. 1974-2040.