Assignment of Mam 1
Coding
A code is simply a name or label that the researcher gives to a piece of text which contains an idea
or a piece of information (Miles and Huberman, 1994; Gläser and Laudel, 2013). Gibbs (2007, p. 38)
catches the nature of a code neatly when he writes that the same code is given to items of text that say
the same thing or are about the same thing.
Seidel and Kelle (1995) suggest that
codes can denote a text, passage or fact, and can be used to construct data networks. Coding non-textual
data sources (e.g. audio, visual images, videos, graphs, numerical files, charts and graphics) means that
textual material (e.g. annotations, commentaries, notes, memos) has been added to them, and the coding
works with and on that textual material.
Coding is the process of breaking down segments of text data into smaller units (based on whatever
criteria are relevant), and then examining, comparing, conceptualizing and categorizing the data.
The code name might be the researcher's own creation, or it may derive from words used in
the text or spoken by one of the participants in the transcribed data (e.g. if the participant remarks that
she is bored with the science lesson, the code may be ‘bored’: a short term that catches the essence
of the text in question).
According to Gibbs (2007, p. 41), by coding the data the researcher is able to detect frequencies (which codes occur most commonly) and
patterns (which codes occur together), and the researcher can retrieve all the data that have the same code,
both within and across files.
Codes can be at different levels of specificity and generality when defining content and concepts. Some
codes may subsume others, thereby creating a hierarchy of subordination and superordination, in effect
creating a tree diagram of codes (and software can present this in a graphic).
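To make these operations concrete, the following is a minimal Python sketch (not part of the sources cited above; the segments, code names and file names are invented purely for illustration) of how coded segments might be stored so that code frequencies, co-occurrence patterns, retrieval by code within and across files, and a simple hierarchy (tree) of codes can be computed, much as qualitative analysis software does.

```python
from collections import Counter
from itertools import combinations

# Each analysed segment carries the codes the researcher attached to it.
# Segment texts, codes and file names below are invented for illustration.
segments = [
    {"file": "interview_01.txt", "text": "I just switch off in science lessons.",
     "codes": ["bored", "science lesson"]},
    {"file": "interview_01.txt", "text": "We never get to do experiments ourselves.",
     "codes": ["passive learning", "science lesson"]},
    {"file": "interview_02.txt", "text": "Measuring the plants every day was fun.",
     "codes": ["measuring", "engagement"]},
]

# Frequencies: which codes occur most commonly.
frequencies = Counter(code for seg in segments for code in seg["codes"])

# Patterns: which codes occur together within the same segment.
co_occurrences = Counter(
    pair for seg in segments for pair in combinations(sorted(seg["codes"]), 2)
)

# Retrieval: pull out every segment (within and across files) carrying a given code.
def retrieve(code):
    return [seg for seg in segments if code in seg["codes"]]

# A simple hierarchy (tree) of codes: a broader code subsumes narrower ones.
code_tree = {"classroom experience": {"bored": {}, "engagement": {}, "passive learning": {}}}

print(frequencies.most_common())
print(co_occurrences.most_common())
print([seg["file"] for seg in retrieve("science lesson")])
```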
Miles and Huberman (1984, 1994) advise researchers to keep codes as discrete as possible and to start
coding earlier rather than later as late coding enfeebles the analysis, though there is a risk that early
coding might influence too strongly any later codes. It is possible, they suggest, for as many as ninety
codes to be held in the working memory whilst going through data, though clearly there is a back-and-
forth process whereby some codes that are used in the early stages of coding might be modified
subsequently and vice versa, requiring the researcher to go through a data set more than once to
ensure consistency, refinement, modification and exhaustiveness of coding (some codes might become
redundant whilst others might need to be broken down into finer codes).
There are different kinds of code: an open code, an analytic code, an axial code, a selective code and a
theoretical code; we discuss these below. Though there is a suggestion in what follows that there is a
sequence in coding, and indeed Strauss and Corbin (1990) suggest a sequence of three stages (open
coding to axial coding to selective coding), this need not be the case, as different codes operate at
different levels and are not necessarily driven by a pre-arranged sequence (Flick, 2009, p. 307).
Open coding
An open code is simply a new label that the researcher attaches to a piece of text to describe and
categorize that piece of text. Open coding generates categories and defines their properties (the
characteristics or attributes of a category or phenomenon) and dimensions (the location of a property
along a given continuum) (Strauss and Corbin, 1990).
The authors give an example of the category/code ‘colour’, which has properties of hue, shade and
intensity. These properties, in turn, have dimensions: hue can range from light to dark, shade from light
to dark, and intensity from high to low. Each category can have several properties, each of which has its own
dimensional continuum.
The authors give an example of properties and dimensions for the category/code/label 'watching', such
as the following (a simple data-structure sketch of this example appears after the list):
(a) property: 'frequency'; dimension: often to never;
(b) property: 'extent'; dimension: more to less;
(c) property: 'intensity'; dimension: high to low;
(d) property: 'duration'; dimension: long to short.
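As a simple illustration of the relationship between a category, its properties and their dimensional continua, the 'watching' example above can be pictured as a nested data structure. The Python sketch below merely restates that example in data form (the representation and the 'incident' values are ours, not the authors').

```python
# The category 'watching': each property is mapped to the two poles
# of its dimensional continuum (restating the example above).
watching = {
    "frequency": ("often", "never"),
    "extent":    ("more", "less"),
    "intensity": ("high", "low"),
    "duration":  ("long", "short"),
}

# A particular coded incident can then be located along each continuum,
# e.g. an invented segment describing prolonged but casual watching:
incident = {"frequency": "often", "intensity": "low", "duration": "long"}
```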
Open coding can be performed on a line-by-line, phrase-by-phrase, sentence-by-sentence,
paragraph-by-paragraph, unit-of-text-by-unit-of-text (e.g. section) or semantic-unit basis. The codes can
then be grouped into categories, giving the categories a title or name, based on criteria that the
researcher decides (e.g. concerning a specific theme, based on similar words, similar concepts, similar
meanings etc.).
Analytic coding
An analytic code is more than a descriptive code. It is more interpretive. For example, whereas
‘experimenting’, ‘controlling variables’, ‘testing’ and ‘measuring’ are descriptive codes (e.g. in
describing science activities), an analytic code here could be ‘working like a scientist’, ‘doing science’ or
‘active science’; it draws together and gives more explanatory and analytic meaning to a group of
descriptive codes. An analytic code might derive from the theme or topic of the literature or,
responsively, from the data themselves.
Axial coding
An axial code is a category label ascribed to a group of open codes whose referents (the phenomena
being described) are similar in meaning (e.g. concern the same concept). Axial coding is that set of
procedures which the researcher follows whereby the data that were originally segmented into small units
or fractions of a whole text through open coding are recombined in new ways (Strauss and Corbin, 1990).
An axial code refers to:
• causal conditions: events, activities, behaviors or incidents that lead to the occurrence of a phenomenon;
• a phenomenon: an event, idea, activity, action, behavior etc.;
• context: a specific set of properties or conditions that obtain in a phenomenon, action or interaction;
• intervening conditions: the broad, general conditions that have a bearing on the action or interaction in question;
• actions and interactions: purposeful, goal-oriented processes, strategies or behaviors obtaining in an action;
• consequences: outcomes for people, events, places etc., which may or may not have been predicted and
which, in turn, may become the causes or conditions of further actions and interactions.
Selective coding
Selective coding identifies the core categories of text data and integrates them to form a theory. It is the
process of identifying the core category in a text, i.e. that central category or phenomenon around which
all the other categories identified and created are integrated (Strauss and Corbin).
Theoretical coding
In theoretical coding, researchers see how codes and categories are integrated and fit together to create a
theory or hypothesis. Here theoretical codes are the 'underlying logics' (Thornberg, 2012a, p. 89) that
come from pre-existing or emergent theories, together with the core category: that which has the greatest
explanatory potential and to which the other categories and sub-categories relate most closely, repeatedly
and consistently.
Content analysis
Anderson and Arsenault suggest that
content analysis can describe the relative frequency and importance of certain topics as well as evaluate
bias, prejudice or propaganda in print materials.
Weber sees the purposes of content analysis as including:
(a) coding of open-ended questions in surveys;
(b) revelation of the focus of individual, group, institutional and societal matters;
(c) description of patterns and trends in communicative content.
Content analysis takes texts and analyses, reduces and interrogates them into summary form through the
use of both pre-existing categories and emergent themes in order to generate or test a theory. It uses
systematic, replicable, observable and rule-governed forms of analysis in a theory-dependent system for
the application of those categories.
Krippendorff says that
there are several features of texts that inform content analysis, including the fact that texts have no
objective reader-independent qualities; rather they have multiple meanings and can sustain multiple
readings and interpretations. There is no unitary meaning waiting to be discovered or described in them.
Indeed, the meanings in texts may be personal and are located in specific contexts, discourses and
purposes, and, hence, meanings have to be drawn in context.
How does content analysis work?
Ezzy suggests that
content analysis starts with a sample of texts, defines the units of analysis (e.g. words, sentences) and the
categories to be used for analysis, reviews the texts in order to code them and place them into categories,
and then counts and logs the occurrences of words, codes and categories. From here statistical analysis
and quantitative methods are applied, leading to an interpretation of the results. Put simply, content
analysis involves coding, categorizing (creating meaningful categories into which the units of analysis –
words, phrases, sentences etc. – can be placed), comparing (categories and making links between them)
and concluding – drawing theoretical conclusions from the text.
Anderson and Arsenault indicate the quantitative nature of content analysis when they state that 'at its simplest level, content analysis
involves counting concepts, words or occurrences in documents and reporting them in tabular form’.
This succinct statement catches the essential features of the process of content analysis:
• breaking down text into units of analysis;
• undertaking a statistical analysis of the units;
• presenting the analysis in as economical a form as possible.
Denscombe, echoing Anderson and Arsenault, sets out a six-stage process of content analysis:
1 Choosing an appropriate sample of data.
2 Breaking down the text into smaller component units of analysis.
3 Developing appropriate categories for analyzing the data.
4 Coding the units to fit the categories.
5 Conducting frequency counts of the occurrence of the units.
6 Analyzing the text on the basis of the unit frequencies and how they relate to other units in the text.
Software can easily provide word frequency counts, and this can be useful. For example, in analyzing
inaugural speeches of high-profile people, the frequency of the word 'I' in the 2,095-word inaugural
speech by former American president Obama was only two (0.1 per cent), whilst for the word 'we' it was
sixty-one (2.9 per cent).
Flick summarizes several stages of content analysis:
1 Defining the units of analysis.
2 Paraphrasing the relevant passages of text.
3 Defining the level of abstraction required of the paraphrasing.
4 Data reduction and deletion (e.g. removing paraphrases that duplicate meaning).
5 Data reduction by combining and integrating paraphrases at the level of abstraction required.
6 Putting together the new statements into a category system.
7 Reviewing the new category system against the original data.
Grounded theory
Strauss and Corbin (1994, p. 273) remark that
grounded theory is a methodology which seeks to develop theory which is rooted in – grounded in – data
which have been collected systematically and analysed systematically; it is an orderly, methodical and
partially controlled way of moving from data to theory, with 'clearly specified analytical procedures'.
Moghaddam avers that grounded theory is a set of relationships among data and categories that proposes
a plausible and reasonable explanation of the phenomenon under study, i.e. it explains by drawing on the
data generated. It is a method or set of procedures for the generation of theory or for the production of a
certain kind of knowledge.
Glaser, one of the key writers in grounded theory, focuses on the methods of grounded theory, and Birks
and Mills add to this its philosophical roots in pragmatism and symbolic interactionism. Grounded
theory uses systematized methods (discussed below) of theoretical sampling, coding and categorization,
constant comparison, memoing, the identification of a core variable, and saturation, all of which lead to
theory generation.
Grounded theory starts with data which are then analyzed and reviewed to enable the theory to be
generated from them; it is rooted in the data and little else. Theory derives from the data; it is grounded in
the data and emerges from them.
Versions of grounded theory
There are many versions of grounded theory (Hutchison et al., 2010), and indeed Greckhamer and
Koro-Ljungberg (2005) comment that what grounded theory actually is has become a contested issue
(p. 731). Three widely referenced versions are:
• the original, emergent model by Glaser and Strauss (1967) and Glaser (1978, 1992);
• the revised, systematic model by Strauss and Corbin (1990, 1998) and Corbin and Strauss (2008, 2015);
• the constructivist model by Charmaz (2006).
The original, emergent model
In the original, emergent model, the theory emerges from the data, using various tools to facilitate such
emergence and the 'discovery' of the theory that is embedded in the data. In this process there are two
main types of coding: substantive and theoretical; substantive coding (with open coding) precedes
theoretical coding (with selective coding and theoretical sampling).
The revised, systematic model
The revised grounded theory model from Strauss and Corbin (1990, 1998) is much more systematic
and prescriptive than the original model, so much so that there was a well-documented split between
Strauss and Glaser, with Glaser arguing that the revised model of Strauss and Corbin was formulaic
and too prescriptive, 'forcing' a theory onto data and forcing data into a theory, whereas the essence of
grounded theory was its aversion to forcing. The original model emphasized an inductive approach to data
analysis whereas the revised model by Strauss and Corbin included deductive approaches and theory
verification by the data. The Strauss and Corbin model differs from the original model as they argue
that:
(a) sampling proceeds on theoretical grounds (a point which the original model rejects as introducing
bias into the research, arguing that the research should commence with data alone);
(b) hypotheses can be developed and verified (whereas the original model abjures prior theory
generation, testing and verification);
(c) induction, deduction and abduction can be used, whereas the original model advocates induction
alone and rejects deduction; and
(d) attention must be given to broader structural contexts and influences (whereas the original model
argues that these, if they are present in the research, would be manifested in the data alone and,
otherwise, should not be considered).
The constructivist model
In the constructivist model from Charmaz (2006), subjective meanings are attributed to the
data by participants and researchers, and there might be multiple interpretations of what these
meanings are. This moves beyond 'facts' and descriptions of acts to interpretations and
perspectives. Charmaz holds that concepts are not so much revealed or 'discovered' (the title of
Glaser's and Strauss's original book (1967)) as 'constructed', for example through
interactions and involvements, both past and present, with people, ways of looking,
interpretations and meanings, leading to one or more 'constructions of reality'.
Charmaz (2002) sets out six analytical steps in her grounded theory:
(i) data collection and analysis, simultaneous and ongoing;
(ii) early data analysis to identify emergent themes;
(iii) identification of basic social processes from and within the data;
(iv) inductive construction and co-construction of abstract explanatory categories for those
processes;
(v) constant comparison to refine the categories; and
(vi) integrating the categories into a theoretical framework that identifies causes, consequences
and conditions.
The tools of grounded theory
There are several common tools that researchers use in grounded theory: theoretical sampling; coding;
memoing; constant comparison; identification of the core variable(s); ‘saturation’; and theoretical
sensitivity.
Theoretical sampling
Theoretical sampling, as Glaser and Strauss (1967) write, is a process for generating theory. In
this the researcher collects the data, processes them through coding and analyses the results, and this
analysis informs where to go next in collecting data in order to develop the emerging theory, i.e.
the emerging theory controls the process of data collection and provides the criterion – theoretical
relevance – for proceeding further with data collection, rather than, for example, conventional
sampling approaches. Data are collected which are useful to the generation of theory
(Creswell, 2012, p. 433), i.e. purposive sampling takes place.
Theoretical sampling is that kind of sampling which is based on the concepts which have shown
themselves to be theoretically relevant to the evolving or emerging theory (Strauss and Corbin,
1990, p. 176; Birks and Mills, 2015, pp. 68–71).
Coding
Coding is the process of disassembling/fracturing and then reassembling the data. Data are
disassembled when they are broken apart into lines, paragraphs or sections, subsequent to which
these fragments are rearranged, usually through coding, to produce an organized and structured
thematization and theory (cf. Ezzy, 2002, p. 94).
In grounded theory there are three main types of coding: open, axial and selective, the
intention of which is to disassemble the data into manageable chunks in order to facilitate an
understanding of the phenomenon in question.
Open coding involves exploring the data and identifying units of analysis to code for meanings,
feelings, actions, events and so on. The researcher codes the data, creating new codes and
categories and sub-categories where necessary, and integrating codes where relevant until the
coding is complete. Axial coding seeks to make links between categories and codes, ‘to integrate
codes around the axes of central categories’ (Ezzy, 2002, p. 91);
the essence of axial coding is the interconnectedness of categories (Creswell, 2012). The interrelationships
of the categories are examined, and codes and categories are compared with existing theory.
In selective coding a core code or category is identified, the relationship between that core
code/category and other codes/categories is made clear (Ezzy, 2002, p. 93),
and the coding scheme is compared with pre-existing theory. Creswell (1998, p. 57) writes that
here the researcher identifies a ‘story line’ and proceeds to construct the story that draws together
all the axial codes.
Memoing
Memos are simply notes written to oneself, logging ideas, abstract thoughts, insights,
observations, conjectures and possibilities etc. (cf. Waring, 2012, p. 302; Denscombe, 2014, p.
285; Birks and Mills, 2015).
They can contain notes on a wide field of matters; they can be short, long, detailed, general,
focused, wide-ranging etc. They can be, for example, conceptual, theoretical, operational,
reflexive and coding-related; they are what the researcher wants them to be.
In memoing, the researcher writes ideas, notes, comments, notes on surprising matters, themes or
metaphors, reminders, hunches, draft hypotheses, references to literature, diagrams, questions,
draft theories, methodological points, personal points, suggestions for further enquiry etc. that
occur to him/her during the process of constant comparison and data analysis (Lempert, 2007, p.
245; Flick, 2009, p. 434).
Waring (2012) suggests three main types of memo:
• code notes (e.g. containing the names of codes and how these were derived);
• theoretical notes (extensions of code notes, e.g. containing the products of inductive and
deductive thinking in relation to properties of, and relationships between, data, codes and
theorizing);
• operational notes (e.g. concerning the conduct of the data collection, research and data
analysis) (p. 302).
Memos cover many aspects of the research and data analysis. They can address many
matters, for example, those set out here in alphabetical order:
• analytical notes and ideas;
• codes, categories and the products of coding;
• comments on sampling;
• comments on saturation;
• concepts and key concepts;
• conditions and contingencies;
• conjectures and speculations;
• core category;
• cross-references and relationships;
• decisions taken;
• descriptive details;
• diagrams;
• directions and suggestions;
• emerging theory;
• explanatory ideas;
• feelings about the research;
• grounded theory;
• ideas;
• impressions;
• inductive and deductive material;
• issues and ideas arising in the research;
• models;
• observations;
• operational matters;
• philosophical matters;
• procedural matters;
• reflections on the research;
• relationships and comparisons;
• reminders;
• suggestions for further directions of investigation;
• summaries;
• themes;
• theoretical sampling;
• theoretical sensitivity;
• theoretical suggestions;
• theory.
Memos can be written at any stage of the data collection and analysis; they can vary in length
and format, from informal to more formal. They may contain verbatim quotations, notes,
jottings, key points underlined, diagrams (Strauss and Corbin, 1990, pp. 202–3); in short, nothing
is ruled out.
Constant comparison
Constant comparison is the process ‘by which the properties and categories across the data are
compared continuously until no more variation occurs’ (Glaser, 1996), i.e. saturation is reached.
In constant comparison, data are compared across a range of situations, times and groups of
people, and through a range of methods. The process resonates with the methodological notion of
triangulation.
Glaser and Strauss (1967, pp. 105–13)
suggest that the constant comparison method involves four stages:
(i) comparing incidents and data which are applicable to each category;
(ii) integrating these categories and their properties;
(iii) bounding the theory;
(iv) setting out the theory.
The first stage here involves coding of incidents and comparing them with previous incidents
in the same and different groups and with other data that are in the same category. For this to
happen they suggest unitizing – dividing the narrative into the smallest pieces of information or
text that are meaningful in themselves, for example phrases, words or paragraphs. It also involves
categorizing: bringing together those unitized texts which relate to each other and that can be put into the
same category, plus devising rules to describe the properties of these categories and checking that there is
internal consistency within the unitized text contained in those categories.
The second stage involves memoing and further coding. Here the method of constant
comparison involves moving beyond comparing one incident with another to comparison of one
incident with the properties of the category which emerged after comparing incident with
incident (Glaser and Strauss, 1967, p. 108).
The third stage – delimitation – occurs at the levels of the theory and the categories (p. 110), in
which the major modifications reduce as underlying uniformities and properties are discovered
and in which theoretical saturation takes place.
The final stage (writing theory) occurs when the researcher has gathered and generated coded
data, memos and a theory which is then written in full.
The core variable
Through the use of constant comparison, a core variable (or core category) is identified: that
variable or category which accounts for most of the data and to which as much as possible is
related; that variable or category around which most data are focused and to which they relate
(Strauss and Corbin, 1990, p. 116). As Flick et al. (2004, p. 19) suggest:
‘the successive integration of concepts leads to one or more key categories and thereby to the
core of the emerging theory’. The core variable is that variable that integrates the greatest
number of codes, categories and concepts, and to which most of them are related and with which
they are connected. It has the greatest explanatory power; as Glaser (1996) remarks:
a concept has to ‘earn its way into the theory by pulling its weight’ without forcing.
Saturation
Saturation is reached when no new insights, properties, dimensions, relationships, codes or
categories are produced even when new data are added, when all the data are accounted for in the
core categories and sub-categories and when the coding, categories and data support the
emerging theory (Glaser and Strauss, 1967, p. 61; Creswell, 2002, p. 450; Ezzy, 2002, p. 93),
and when the variable covers variations and processes (Moghaddam, 2006). Of course, one can
never know for certain that the categories are saturated, as fresh data may come along that refute
the existing theory. The partner of saturation is theoretical completeness, when the theory is
able to explain the data fully and satisfactorily.
Theoretical sensitivity
Researchers must possess theoretical sensitivity, i.e. the ability to perceive and notice the
important parts of data and to accord them meaning (Strauss and Corbin, 1990, p. 46).
It concerns personal qualities in the researcher (p. 41), sensitivity to the subtleties and
complexities of the data, and the ability to develop theoretical insights into the research. Birks and
Mills (2015) comment that it is the researcher’s ability to ‘recognize and extract from the
data’ (p. 58) those elements which have relevance and meaning for the emerging theory, without
‘forcing’ the data into a theory (p. 59). Such sensitivities can be developed from studying
relevant literature, professional and personal experience, the processes and procedures followed
in the data analysis, continually interacting with the data, reflexivity, standing back from the data
to review what is happening, and maintaining a critical, perhaps skeptical, attitude to possible
explanations, categories and hypotheses concerning the data, i.e. regarding them as provisional
only (Strauss and Corbin, 1990, pp. 42–5).
The strength of grounded theory
As a consequence of theoretical sampling, coding, constant comparison, the identification of the
core variable and the saturation of data, categories and codes, the grounded theory (of whatever
is being theorized) emerges from the data in an unforced manner, accounting for all of the data.
The adequacy of the derived theory can be evaluated against several criteria.
Glaser and Strauss (1967, p. 237) suggest four such main criteria:
1 the closeness of the fit between the theory and the data;
2 how readily understandable the theory is by laypersons working in the field, i.e. that it makes
sense to them;
3 the ability of the theory to apply to a wide range of everyday situations in the same field, i.e.
not simply to specific kinds of situation (p. 237);
4 the user of the theory must have sufficient control over their everyday lives so that applying
the theory is possible and worthwhile (p. 245).
Evaluating grounded theory
Strauss and Corbin (1990) indicate that the grounded theory generated should be judged
against several criteria (pp. 252–6):
• the reliability, validity and credibility of the data;
• the adequacy of the research process;
• the empirical grounding of the research findings;
• the sampling procedures;
• the major categories that emerged;
• the adequacy of the evidence base for the categories that emerged;
• the adequacy of the basis in the categories that led to the theoretical sampling;
• the formulation and testing of hypotheses and their relationship to the conceptual relations
among the categories.
Preparing to work in grounded theory
Glaser (1996) offers some useful practical and personal advice for researchers working with
grounded theory. He suggests that researchers must be able to:
(a) tolerate uncertainty (as there is no preconceived theory), confusion
(cf. Buckley and Waring, 2009, p. 330) and setbacks (e.g. when data disconfirm an
emergent theory);
(b) avoid premature formulation of the theory, but, by constant comparison, enable the final
theory to emerge. They need to be open to what is emerging and not to try to force data to fit a
theory but, rather, to ensure that data and theory fit together in an unstrained manner.
Some concerns about grounded theory
There are several concerns raised about grounded theory. Thomas and James (2006) mount a
withering critique of grounded theory, arguing that it ‘oversimplifies complex meanings and
inter-relationships in data’ (p. 768) by focusing on the ‘immediately apparent and observable’ (p.
769), that it ‘constrains analysis’ by putting the cart of procedures (theoretical sampling,
coding, categorizing, constant comparison, saturation, identification of the core category)
before the horse of interpretation, and that it unfairly privileges induction over explanation and
prediction (p. 768). They argue that, since grounded theory has many versions, its identity is
unclear.
The meaning and status of theory
Thomas and James (2006)
suggest that the term ‘theory’ is ill-defined and vague in grounded theory, and has many
meanings: ‘theory’ here is ‘merely a narrative’ rather than an explanation (p. 778), and
what grounded theory generates is not a ‘theory’ at all (p. 780) but simply ‘mental
constructions’, with little explanatory, empirical or predictive power (see also Silverman,
1993, p. 47)
The role of literature and prior disciplinary knowledge
Glaser and Strauss (1967) suggested that the researcher should not conduct a literature review in
advance of the data analysis, or bring advanced disciplinary knowledge to bear on the analysis,
so that the data can speak for themselves, unaffected or contaminated by prior researcher
knowledge or preconceptions which might stifle the process of theory generation, and to avoid
being ‘ “awed out” by the work of others’ (Dunne, 2011, p. 115). Indeed Glaser (1998) argues
that, since the researcher does not know in advance what literature will be relevant in the data,
conducting a literature review may be timewasting and inefficient, as it may engage irrelevant
material.
Silverman (1993, p. 47) suggests that eschewing early literature reviews and disciplinary
knowledge fails to acknowledge the implicit theories which guide research from its earliest
stages (i.e. data are not theory neutral but theory saturated), and this should feed into the
process of reflexivity in qualitative research.
The question of the ‘ground’ in ‘grounded theory’
Charmaz (2002), in her constructivist version of grounded theory, suggests that the ‘ground’ is
people, i.e. that the theory is grounded in the meanings and meaning-making that people give to,
or construct from, the data, i.e. theory has no objective existence that is waiting to be discovered
but is defined by people. Theory is constructed and co-constructed, not discovered, by
researchers and participants who bring their own biographies, experiences, contexts and
backgrounds to bear on their theories (Charmaz, 2002, 2006; Thornberg, 2012a).
Generalizability
A concern of grounded theory is how generalizable the emergent theory is. Is it restricted, for
example, to being an explanation of the phenomenon in question, or does it have wider
application, being a more abstract and law-like generalization (with the rider that there may not
be laws in social science, in contrast to laws in the natural sciences)? Does grounded theory aspire
to being a ‘grand theory’, a ‘middle-range’ theory or an ‘empirical’ theory, or is it for the reader to
decide whether the theory can apply to a new situation (e.g. Glaser, 1998)? Glaser (1998)
argues that a grounded theory must have ‘transferability’, i.e. must not be bound by the
specificities of the particular study in question and must be able to apply to other situations
(e.g. through conceptual similarities).
The dependence on coding
Finally, some versions of grounded theory are heavily dependent on coding, and we refer
readers also to the critiques of coding set out. Grounded theory, then, though widely used, is
not without its challenges (cf. Birks and Mills, 2015). Researchers must decide on its fitness for
purpose and, if working with it, must be mindful of its challenges and criticisms.