Flexible Coding of In-depth Interviews: A Twenty-first-century Approach
Abstract
Qualitative coding procedures emanating from grounded theory were lim-
ited by technologies of the 1960s: colored pens, scissors, and index cards.
Today, electronic documents can be flexibly stored, retrieved, and cross-
referenced using qualitative data analysis (QDA) software. We argue the oft-
cited grounded theory framework poorly fits many features of contemporary
sociological interview studies, including large samples, coding by teams, and
mixed-method analysis. The grounded theory approach also hampers
transparency and does not facilitate reanalysis or secondary analysis of
interview data. We begin by summarizing grounded theory’s assumptions
about coding and analysis. We then analyze published articles from American
Sociological Association flagship journals, demonstrating that current con-
ventions for semistructured interview studies depart from the grounded
theory framework. Based on experience analyzing interview data, we suggest
steps in data organization and analysis to better utilize QDA technology. Our
goal is to support rigorous, transparent, and flexible analysis of in-depth
interview data. We end by discussing strengths and limitations of our
twenty-first-century approach.
1 Institute for Research on Poverty, University of Wisconsin–Madison, Madison, WI, USA
2 Harvard University, Cambridge, MA, USA

Corresponding Author:
Nicole M. Deterding, Institute for Research on Poverty, University of Wisconsin–Madison, 1180 Observatory Drive, Madison, WI 53706, USA.
Email: [email protected]
Deterding and Waters 709
Keywords
in-depth interviews, qualitative coding, qualitative data analysis software,
grounded theory, sociological research methods
Well into the twenty-first century, most qualitative research training is still
either consciously or unconsciously based on grounded theory—a model
designed in the 1960s. Many books or articles based on in-depth qualitative
interviews begin the methodological section with a citation to this classic
approach. Grounded theory, developed by Glaser and Strauss (1967) and
elaborated by Glaser (1992), Strauss (1987), Strauss and Corbin (1990), and
Charmaz (2000), provides a set of steps for conducting and analyzing
qualitative research. Although often cited, a true grounded theory approach
is less common among sociologists doing interview studies today. Its cen-
tral prescriptions—theoretical sampling toward saturation, strongly induc-
tive analysis, and full immersion in the research field—bear little
resemblance to the actual methods used by many large-scale interview
researchers. And yet, grounded theory continues to have an enormous
influence on how qualitative research design and interview coding are
taught in graduate programs. How should twenty-first-century interview
researchers proceed?
In this article, we argue that the coding procedures emanating from
grounded theory were limited by the technology available to researchers at
the time: paper, scissors, index cards, and colored pens. In a path-dependent
process, open descriptive coding aggregating to conceptual abstraction
shaped the features of computer assisted qualitative data analysis (QDA)
software. Although electronic documents can be flexibly stored and snippets
of text easily cross-referenced and retrieved, the procedures outlined by
Glaser and Strauss were simply turned into algorithms for how to do
qualitative research. Indeed, most sociologists reporting the use of
QDA software in their published work appear to do little more than apply
virtual sticky notes and sort piles of electronic note cards (White, Judd, and
Poliandri 2012).
Reliance on the twentieth-century approach poses a number of problems
for today’s researchers. First, most researchers have many theoretical ideas
and concepts they will apply to a single set of data. Codes that work well for
one chapter of a book or dissertation are not necessarily the ones a scholar
wants for a different substantive chapter or article, and grounded theory
710 Sociological Methods & Research 50(2)
coding does not easily allow for flexible reanalysis. Second, coding line by
line takes a great deal of time and effort before one establishes a set of codes
that can be applied to the entire set of data; establishing the reliability and
validity of coding is a different challenge when using software over hard-
copy coding. Third, the grounded theory approach was developed based on
projects with a relatively small number of interviews, generally conducted
and analyzed by the researcher himself or herself. Increasingly, qualitative
studies involve large numbers of interviews, often numbering near 100.
While reliably and validly applying thematic codes across 80–100 interviews
is a logistical problem for even a single researcher, today’s large-N studies
frequently involve teams of people who interview, code, and write
separate papers based on a shared pool of material.
Finally, structuring data using many small codes does not easily facilitate
analytic transparency or secondary analysis by researchers not involved in
the initial data collection. More and more, qualitative researchers are
exhorted to clearly communicate the logical steps in their data analysis, but
conventions for doing so have yet to appear. Relatedly, norms of open sci-
ence—such as data archiving and secondary analysis—that are taking hold in
quantitative social science have largely yet to make it to qualitative research.
While secondary analysis of interview data is an opportunity that is rarely
taken advantage of today, we believe it should be encouraged. Other
advances in technology—such as secure servers—enable researchers with
institutional review board (IRB) permission to analyze existing qualitative
data, even if the researcher may not be at the same institution as the data.
However, to date, there is little methodological guidance on how to plan for
or undertake secondary analysis.
In this article, we briefly describe the assumptions involved in the
grounded theory approach to coding and analysis. Next, we analyze the
methodology sections of in-depth interview studies in American Sociolo-
gical Association (ASA) journals to highlight their mismatch with current
conventions for semistructured interview research in sociology. We then
propose a set of procedures—which we call flexible coding—that flips the
script to take advantage of modern QDA technology. Rather than limit
ourselves to how most qualitative interviewers are taught to code or to how
beginning qualitative researchers think they ought to approach the craft, our
goal is to better reflect how one can practically go about analyzing large-
scale interview data. We end by weighing the strengths and limitations of a
twenty-first-century approach to qualitative analysis compared to the pre-
vious approaches.
Background
Grounded Theory: Origins and Approach
In 1967, two sociologists, Barney G. Glaser and Anselm L. Strauss, pub-
lished The Discovery of Grounded Theory: Strategies for Qualitative
Research. Building on Znaniecki’s (1934) “analytic induction,” their book
promoted a systematic, inductive approach to qualitative research, suggest-
ing qualitative researchers should abstract conceptual relationships from data
rather than deduce testable hypotheses from existing theory. After publica-
tion, Glaser and Strauss came to disagree about central features of the method
and parted ways. They each have been active since (Strauss until his death in
1996), publishing competing guides to grounded theory (Glaser 1998, 2005;
Strauss 1987; Strauss and Corbin 1990).
Much ink has been spilled explicating their variations on a theme, but the
primary divergence is that Glaser stresses induction as the core of the
approach, ultimately going so far as to advocate that researchers avoid any
literature relevant to the subject being studied until after the interviews are
complete and initial coding of the data is done (Glaser 1998). While main-
taining a focus on induction, Strauss became more concerned with validation
and systematic procedure. Most modern variants of grounded theory favor
Strauss over Glaser in their attention to the particular words of study respon-
dents. Charmaz (2006), Bryant (2002), and Bryant and Charmaz (2007) built
on Strauss, developing what they label “constructivist grounded theory.”
This approach stresses the coproduction and construction of concepts and
interpretations by researchers and participants, recognizing their different
positions, roles, backgrounds, and values.
There are most likely several reasons for the outsized influence of
“grounded theory” in qualitative interview training. Glaser and Strauss
offered an early attempt to systematically describe how qualitative research
ought to be conducted. Their grounded theory offered a “scientific” prescrip-
tion for what midcentury positivists had diagnosed as the biased, impressio-
nistic, and anecdotal condition of qualitative research. The inductive
approach begins with the data itself and exhorts the researcher to produce
concepts and theory from it, promising theory generation rather than mere
theory testing. Later, as broad questions of epistemological authority roiled
the social sciences, the method was easily adapted to emphasize the cate-
gories of meaning offered by respondents themselves (Charmaz 2006). This
combined practical and epistemological appeal remains even today. By pro-
viding step-by-step directions, the grounded theory approach can be taught to
students, including applied researchers, regardless of their disciplinary
background. The approach also likely remains cited because most qualitative
research has at least an element of induction. However, many scholars have
speculated that researchers cite Glaser and Strauss merely to imply that they
took an inductive approach.
Based on our informal discussions with contemporary researchers using
large-scale interview data, it appears that few actually implement the unfold-
ing data collection suggested by a grounded theory approach: theoretical
sampling toward conceptual saturation. Practical demands of modern aca-
demic life, including adhering to the requirements of IRB, grant writing to
secure research funding, balancing competing professional demands and
multiple projects, working with research teams, and coordinating transcrip-
tion of large samples all run contrary to the image of a researcher wholly
immersed and spontaneously pursuing their interview project. This is one
way that what interview researchers actually do departs from the methodo-
logical guidance.
This approach to coding makes some sense when we imagine that the data
were printed in hard copy; the codes that were applied were written in the
margins of the transcripts; and the resulting categories were cut up into bits of
paper, taped onto index cards, and then sorted into thematic piles. However,
the widespread development of word processing in the early 1980s and the
development of qualitative data analysis software soon after have given
qualitative researchers new tools to do their research. It is unfortunate, how-
ever, that these tools have not led to a fundamental rethinking of how coding
is actually performed. This is what we propose.
Data reduction is a fundamental part of qualitative analysis (Miles and
Huberman 1994). When grounded theory was developed, chopping up the
data into small pieces was a very time-consuming, physical process. After
that came the stage of tossing aside material that was no longer needed and
then combining the small pieces into larger codes. An example would be
taking the pile of pieces of text labeled “attitudes about congressmen” and
putting it together with a pile of text labeled “attitudes about mayors” and
then labeling this new pile “attitudes about elected officials.” The initial level
of abstraction mattered deeply: If the researcher started by cutting up the
transcripts into piles with broader categories—attitudes toward elected offi-
cials, for instance—when she decided to analyze mayors differently than
congressmen, transcripts would need to be cut up again. It is easier when
dealing with paper to sort codes and chunks of text in different ways to see
how they come together, rather than to look at large chunks of text more
broadly to determine how the data should be split. In short, the idea of
starting with tiny bits of text and many codes and aggregating up made a lot
more sense than starting with big chunks of text when researchers were doing
it all by hand, with little piles of paper on their dining room tables. This is no
longer required.
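The contrast can be sketched in a few lines of code, using entirely hypothetical data and labels: because each electronic excerpt keeps its original fine-grained code, regrouping it under a broader label, or splitting a pile back apart, is a lookup rather than another round of scissors.

```python
from collections import defaultdict

# Hypothetical coded excerpts; in QDA software each retains its fine-grained label.
excerpts = [
    {"respondent": "R01", "code": "attitudes about congressmen", "text": "..."},
    {"respondent": "R02", "code": "attitudes about mayors", "text": "..."},
    {"respondent": "R03", "code": "attitudes about mayors", "text": "..."},
]

# Aggregating up: map fine codes to a broader pile without touching the text.
broader = {
    "attitudes about congressmen": "attitudes about elected officials",
    "attitudes about mayors": "attitudes about elected officials",
}
piles = defaultdict(list)
for e in excerpts:
    piles[broader[e["code"]]].append(e)

# Splitting back down is just as cheap: the original fine code is still attached,
# so no transcript ever needs to be "cut up again."
mayors_only = [e for e in excerpts if e["code"] == "attitudes about mayors"]
```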
Many researchers find visual display of the elements of their story a valuable
means of achieving both local integration and inclusive integration. Miles and
Huberman recommend diagrams on paper, with lines linking related issues, to
display graphically the conceptual framework of the final report. Becker sug-
gests putting data and memos about the data on file cards that can then be
spread on a large flat surface and arranged and rearranged until they achieve a
logical sequence. Agar suggests finding an empty classroom full of black-
boards on which can be drawn maps of concepts and their interrelations.
(p. 162).
opportunities for team work, complex projects, secondary analysis, and repli-
cation studies.
Large Ns
The main consensus among interview methodologists regarding how many
interviews a researcher should plan is “it depends” (Baker and Edwards
2012). Yet we hypothesized that the typical number of interviews in socio-
logical studies has expanded far beyond what would be required for a
grounded theory threshold of conceptual “saturation” (Glaser and Strauss
1967; for an empirical test suggesting the number may be as low as 12, see
Guest, Bunce, and Johnson 2006). Exactly how much does practice diverge
from what is recommended by this benchmark of grounded theory?
Table 1 reports information on the number of studies including semistruc-
tured interviews by journal. The minimum, median, and maximum number
of interviews is reported. Overall, only 19 articles (19.4 percent) were based
solely on interview data. Consistent with the idea that a single data
collection often yields multiple analyses, several studies appeared in more
than one publication. One study was represented three times (Calarco 2011,
2014a, 2014b), and five were represented twice (Ispa-Landa, Rivera, Teeger,
Turco, and Wilkins). As far as could be ascertained from the methodology
[Table 1. Studies including semistructured interviews, by journal: number of studies (all and interview-only) and minimum, median, and maximum number of interviews.]
discussions in the articles, only three of the studies (Brown 2013; Collett,
Vercel, and Boykin 2015; Deterding 2015a) included secondary analysis of
semistructured interview data.
While qualitative methodologists will rarely state a hard cutoff for the
number of required interviews, these data demonstrate that large studies end
up published in the core disciplinary journals. The number of interviews
ranged from 12 to 208, with only 19 studies containing fewer than 30 inter-
views. The median number for the whole set is 55, and interview-only studies
generally had larger samples than studies that combined data sources. One in
four articles (24 total) reported on 100 or more interviews. Clearly, a sub-
stantial portion of contemporary qualitative researchers face a large amount
of interview data to handle during analysis.
Opaque Coding
Table 2 reports information coded from authors’ descriptions of their coding
process. By and large, the type of information included in descriptions of the
coding process is not standardized, either within or across journals. The most
commonly cited methodologists were the grounded theorists and their suc-
cessors: Charmaz, Glaser and Strauss, and Miles and Huberman. More than
half of the articles (55 percent) did not cite a specific methodological text,
instead using terms that are drawn from these authors: “inductive,”
Table 2. Descriptions of the coding process, by journal.

                                          All      No Methodologist   Describes Coding   Mentions
Journal                                   Studies  Cited              Procedure          QDAS
                                          N        N     Percent      N     Percent      N     Percent
American Journal of Sociology (a)         29       21    72.4         20    68.9         9     42.8
American Sociological Review              28       16    57.1         23    82.1         11    39.2
Journal of Health and Social Behavior      7        4    57.1          7    100.0        4     57.1
Social Psychology Quarterly               15        7    46.7         13    86.7         4     26.7
Sociology of Education                    19        7    36.8         18    94.7         11    57.9
Total                                     98       55    55.1         81    82.6         39    39.7

(a) In the American Journal of Sociology, several studies used in-depth interviews as part of a historical case method approach. None of these mentioned coding, which reflects a different method of analysis.
Why So Large?
We would like to speculate briefly on why sociological interview projects are
typically so large. Given that contemporary sociology privileges describing
mechanisms, many sociologists today design their work to highlight con-
trasts between groups of respondents or research sites. By designing a study
across hypothesized salient differences, scholars are able to construct argu-
ments that illuminate the relationship between context and concept. This
approach requires enough respondents in each “cell” to establish patterns
of difference by group or site. Studies designed this way produce large sets of
textual data, presenting logistical challenges for the researcher during
the analysis phase. We believe studies examining the contours of group
difference are particularly well suited to the coding procedure we outline
below.
(Weitzman and Miles 1995). As of this writing, the site lists 10. Common
tools across the set of programs include the ability to write and track memos;
index or code data with thematic or conceptual labels; add demographic or
other categorical information to compare subgroups; run searches or queries;
develop visual models or charts; and generate reports or output from the data
(Lewins and Silver 2007). That major software options share these features is
evidence of the fact that, over time, QDA software has “simultaneously
become more comprehensive, more applicable to a diverse range of meth-
odologies, and more homogenous” (Bazeley and Jackson 2013:6). We ela-
borate our approach using NVivo Version 10, though we have substantial
experience with several iterations of ATLAS.ti and some experience with the
first version of Dedoose. We are confident that a similar approach can be
used with these programs. Any software package with the major capabilities
above should be able to facilitate our approach to organizing and analyzing
interview data. Based on our analysis of published in-depth interview studies
appearing in ASA journals above, ATLAS.ti and NVivo appear to be the
most commonly used among sociologists.
disciplines writing about many separate topics. The RISK qualitative data
have been used to examine physical and mental health outcomes of Katrina
(Bosick 2015; Lowe, Rhodes, and Waters 2015; Morris and Deterding 2016);
experiences of racism during the hurricane and its aftermath (Lowe, Lustig,
and Marrow 2011); changes in marital and partner relationships (Lowe,
Rhodes, and Scoglio 2012); residential choices and social mobility (Asad
2015); posthurricane (im)migration and race relations (Tollette 2013); and
educational planning and return (Deterding 2015a). Other methodological
treatments of team-based coding focus on the intercoder reliability of analy-
tic codes in a single study, which is noted as a very time-intensive iterative
process (Campbell et al. 2013; Price and Smith 2017). We present an alter-
native model for teamwork in qualitative analysis. Because our initial
“indexing” step is broad rather than fine-grained, the task can be more easily
distributed among members of a research team than can line-by-line abstrac-
tion. Once the index is established, either an individual analyst or a research
team can proceed to the analytic stage, using the software’s capacity for data
reduction to enhance the reliability and validity of analytic codes. These
steps are further discussed below.
to each individual) and attitudes toward success (an analytic code applied to
the text later) was the dependent variable (Kasinitz et al. 2008).
In mixed-methods data, attributes can include person-level data from
respondent screening sheets, demographic information sheets, or surveys.
For instance, using the RISK data, we connected longitudinal survey
responses to transcripts using NVivo Version 10 “classification sheets,”
allowing us to describe qualitative mechanisms for mental health differences
among survey respondents with proximal and distant social networks (Morris
and Deterding 2016). Attributes may not be limited to person-level charac-
teristics; other units of analysis are possible. For instance, if the project has
multiple research sites, “site” can be designated as an attribute, allowing the
analyst to query the transcripts and memos to examine thematic differences
between sites. If this information is not available in a demographic informa-
tion form, the researcher will need to record respondent attributes when
reading the interview text and applying index codes (discussed below).
Why is it important to connect transcripts with attributes? QDA’s query-
ing capabilities rely on the intersection of codes, and attributes are codes that
are applied to the entire transcript. Attributes are applied by linking docu-
ments to a classification sheet (NVivo version 10), descriptors (Dedoose), or
a primary document family (ATLAS.ti). In each of these software options,
documents can be assigned attributes manually from within the software.
However, with more than a handful of respondents, attributes are more
quickly and reliably imported via a spreadsheet that includes the respondent
ID. Because of this, we recommend recording attributes for all of the respon-
dents in a spreadsheet and then importing this to the database. The work of
identifying attributes can be done while sampling and interviewing is
ongoing or at any point afterward. Although it is likely that other important
conceptual categories warranting inclusion as an attribute will arise during
analysis, identifying and coding initial respondent attributes is an important
part of early data preparation. Querying the intersection of attributes and
analytic codes will reappear in the third stage of the data analysis, when
testing the robustness of the theoretical claims.
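The logic of the spreadsheet import and the attribute-by-code query can be made concrete with a minimal Python sketch. The file contents, field names, and codes below are invented for illustration; QDA packages perform this join internally through their own import tools rather than through user code.

```python
import csv
import io
from collections import Counter

# Hypothetical attribute spreadsheet, keyed by respondent ID.
attributes_csv = """respondent_id,site,proximal_network
R01,New Orleans,yes
R02,Baton Rouge,no
R03,New Orleans,yes
"""
attrs = {row["respondent_id"]: row
         for row in csv.DictReader(io.StringIO(attributes_csv))}

# Coded transcript segments as they might be exported from the software (invented).
segments = [
    {"respondent_id": "R01", "code": "mental health"},
    {"respondent_id": "R02", "code": "mental health"},
    {"respondent_id": "R03", "code": "housing"},
]

# Query the intersection of an attribute (site) and the analytic codes:
# how often does each code appear at each site?
by_site = Counter(
    (attrs[s["respondent_id"]]["site"], s["code"]) for s in segments
)
```

Importing attributes from a single ID-keyed spreadsheet, rather than clicking them in one document at a time, is what makes this join reliable for large samples.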
Great quote and “aha”. Throughout all stages of the coding process, research-
ers should take care to note chunks of text where respondents are particularly
concise, articulate, or poignant. Include or make a separate code for snippets
of text that trigger “aha” moments in understanding the data so that these are
easily retrieved later. When writing, queries of the overlap of “great quote”
or “aha” with analytic codes will identify quotes to include in the paper.
several ideas that could be pursued in many papers or book chapters. The list
will include, but not be limited to, the original research questions that
informed the study; some themes will be truly “emergent.” The work done
during respondent and cross-case memoing will offer a variety of possible
directions for the first paper from this project. It is very common for large
qualitative projects to unfold over time and result in a range of products,
beginning with journal articles and culminating in a book. Being strategic
about the analytic process can help researchers meet very real publication
pressures. Also, to avoid being overwhelmed by possibilities, we suggest
approaching the application of analytic codes one research question or
paper at a time. It is not necessary to write all of the papers at once! The
familiarity with the transcripts built during the first reading means research-
ers will have a good idea where to find the relevant chunks of the transcript
for a single research question.
A major problem of analyzing a large interview data set is applying codes
in a reliable manner (Campbell et al. 2013). While sometimes this issue is
solved by coding in teams, it is possible to use the software to aid the process.
On the second reading, we suggest limiting reading to the relevant text only,
considerably reducing the task of applying analytic codes. We also recom-
mend ignoring respondent attributes while applying the analytic codes. Only
after coding thematically across all transcripts should the researcher examine
whether there are patterns of qualitative difference by attributes. This tech-
nique allows the analyst to avoid confirmation bias, keeping the final argu-
ment as close to the text as possible.
Use the index code to display the relevant sections of the transcripts and
apply only one or two analytic codes at a time to this text. By focusing the
analytic process in this way, it is possible to increase the reliability and
validity of coding. For example, Deterding (2015a) built concepts of
“instrumental education” and “expressive education” in conceptual memos
during her initial read of respondents’ discussion of their college plans. Using
the index, she was then able to apply instrumental/expressive codes to text in
127 transcripts in about 10 hours. By limiting herself to the text indexed at
“education history” and “successful adulthood,” her second read covered
approximately 20 percent of the full transcripts, a piece of information she
reported in her methodology section. The indexing reading of the 120 tran-
scripts took a group of graduate student coders about 250 hours plus time for
cleanup and matrix checking, and it would be nearly impossible to reliably
apply well-defined analytic codes over such a long period of time.
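The data reduction at work in this second reading can be illustrated with invented numbers: restrict attention to the segments indexed under the topics relevant to one paper, and the share of text the analyst must re-read shrinks accordingly.

```python
# Invented index data: each segment records its broad index code and its length.
indexed = [
    {"respondent": "R01", "index": "education history", "chars": 1800},
    {"respondent": "R01", "index": "housing", "chars": 2400},
    {"respondent": "R02", "index": "education history", "chars": 1500},
    {"respondent": "R02", "index": "successful adulthood", "chars": 900},
]
transcript_chars = {"R01": 12000, "R02": 10000}

# Second reading: only text indexed under the topics relevant to this paper.
relevant_topics = {"education history", "successful adulthood"}
relevant = [s for s in indexed if s["index"] in relevant_topics]

# Fraction of the full transcripts the analyst must re-read
# when applying analytic codes.
share = sum(s["chars"] for s in relevant) / sum(transcript_chars.values())
```

With these made-up figures the second pass covers about 19 percent of the text, the kind of coverage statistic a researcher can report in a methodology section.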
In addition to the a priori attributes discussed above, some analytic codes
should be applied to the entire interview, becoming attributes. This is
the body of data. Now it is time to explore how deeply the story is grounded in
the entire body of text. Software can aid this process, by helping researchers
identify trends across cases, investigate alternative explanations, and quickly
locate negative cases that help refine or limit the theoretical explanation.
QDA software also makes it relatively simple to examine the cross-case
reliability of the thematic coding. While other authors have suggested
options for enhancing reliability that involve multiple coders (see Campbell
et al. 2013; Price and Smith 2017), an alternate option is to query the inter-
section of the typology (stored as an attribute) and analytic codes. To con-
tinue the example above, for each person-level categorization (instrumental/
expressive/mixed), Deterding output the analytic codes for instrumental and
expressive logic in order to make sure that the textual evidence identified
groups that were truly distinct. The process of reducing data down from full
transcripts, to indexed extracts, and finally to grouped analytic codes allowed
her to judge whether she had applied uniform qualitative criteria across the
sample, increasing reliability or construct validity. When looking at the data
in this organized, reduced form, some respondents seemed misclassified. She
then revised their classification, assuring the construct validity of her
typology.
Other features of the software can help researchers test and refine the
theoretical explanations they have developed. There is considerable debate
over methods for determining what counts as a robust finding in qualitative
research. We do not believe it is necessary for a phenomenon to apply to the
entire sample to be analytically important. However, a systematic treatment
of alternative explanations and negative cases is an important part of con-
textualizing findings and creating a convincing theory. The intuition is that it
is possible to learn about the scope of the theory and refine an understanding
of important relationships by examining (and interpreting) where it does not
appear to apply. Blee (2009:148) sets out the following criteria for a quali-
tative analytical plan, arguing that it should take into account “how will data
be assessed to ensure that (1) all data are considered (2) spectacular/extraor-
dinary events are not overly stressed (3) data that diverge from the pattern are
not discounted without a clear rationale to do so.” Querying the data with
QDA software can aid in this process.
While the easy production of frequency tables is a useful feature of QDA
software, taking advantage of software does not require a frequentist per-
spective. If one approaches analysis from a case-based perspective, a single
disconfirming case or cluster of exceptions may crystallize the conceptual
limitations of the trend or help refine the working theory to account for the
exceptions. From this perspective, it is not the number of exceptions to the
theme that is analytically important but how the exceptions help to refine the
theory. The data querying capacity of QDA software allows one to easily
identify cases that are exceptions to trends and require further examination,
meeting a typical analytical requirement in the methods literature that negative
cases should be explicitly treated in the analysis (Katz 1982; Luker 2008).
If the index and analytic codes are applied reliably and analytic attribute-
level categorization is consistent, it is possible to run text queries to docu-
ment the robustness and the limitations of the findings (Maxwell 1992:48).
On the frequentist end of the spectrum, which the software easily facilitates,
the analyst may want to make statements such as “N respondents demon-
strated this logic.” As cautioned by scholars such as Small (2011), however,
it is important to make sure that such statements are appropriate for the form
of the data. For instance, if the interview protocol evolved over the course of
the study, and the same questions were not asked of everyone, it may not be
appropriate to report coding counts. It may also be the case that one wants to
write about a topic that only applies to half of the interviewees. By querying
codes, it is possible to identify in how many transcripts the topic appeared,
which might be considered a more accurate number of interviews to report
in the Methods section of the paper than the full interview sample.
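Both kinds of query described here, counting transcripts rather than coded segments and surfacing candidate negative cases, reduce to simple set operations. A sketch with invented respondents and codes:

```python
# Invented export: multiple coded segments can come from the same transcript.
segments = [
    {"respondent": "R01", "code": "instrumental logic"},
    {"respondent": "R01", "code": "instrumental logic"},
    {"respondent": "R02", "code": "instrumental logic"},
]
# Attribute-level typology applied to whole interviews.
typology = {"R01": "instrumental", "R02": "instrumental", "R03": "instrumental"}

# Report the number of transcripts in which the topic appears,
# not the raw count of coded segments.
coded = {s["respondent"] for s in segments if s["code"] == "instrumental logic"}
n_transcripts = len(coded)

# Candidate negative cases: classified as instrumental at the attribute level,
# but no supporting text was coded in their transcript.
negative_cases = sorted(r for r, t in typology.items()
                        if t == "instrumental" and r not in coded)
```

Here the topic appears in two transcripts, not three segments, and R03 surfaces as an exception deserving a closer read.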
Limitations
While we use QDA software to flip the script on grounded theory, flexible
coding may not be appropriate for every interview project. It might not be
worth fully indexing transcripts for projects with a small number—fewer
than 30—of interviews. If the research question is tightly circumscribed and
the data are intended for a single article, it may not be important for the
researcher (or others) to revisit data in the future. Finally, many of
the published papers we examined included in-depth interviews as one of
the multiple data sources. Some of these pieces were historical case studies,
others were ethnographies, and some articles drew on a handful of interviews
to add color to primarily quantitative analyses. In these circumstances, less
intensive forms of interview data preparation and analysis may well be
appropriate. And, of course, if the researcher truly performed a grounded
theory study, with an unfolding data collection, where transcripts were ana-
lyzed as they were completed and the protocol modified toward a final
emergent concept, then our method is not for them.
Finally, we do not want to overemphasize the importance of easy quanti-
fication and tabulation of data, pressing too far beyond our primary concern
about the internal validity of qualitative explanations. Our goal has been to
suggest ways that qualitative researchers analyzing interview data can iden-
tify findings that are firmly grounded in the data they have collected, prop-
erly contextualize findings in the set of data as a whole, and easily identify
negative cases for the refinement of theory. In the end, we believe that the job
of generalizing theories generated by qualitative data falls to future research
using more representative methodologies.
Conclusion
If the methodology sections of recent sociology journal articles are to be
believed, qualitative data analysis appears stuck in the twentieth century.
Grounded theory certainly deserves credit for the role it played in
Funding
The author(s) disclosed receipt of the following financial support for the research, author-
ship, and/or publication of this article: This work was supported by National Institutes of
Health Grants R01HD046162, R01HD057599, and P01HD082032. Waters was
supported by a Robert Wood Johnson Investigator Award in Health Care Policy.
ORCID iD
Nicole M. Deterding https://orcid.org/0000-0001-5819-8935
Supplemental Material
Supplemental material for this article is available online.
References
Asad, Asad L. 2015. “Contexts of Reception, Post-disaster Migration, and Socio-
economic Mobility.” Population and Environment 36(3):279-310.
736 Sociological Methods & Research 50(2)
Lowe, Sarah R., Jean E. Rhodes, and Mary C. Waters. 2015. “Understanding Resi-
lience and Other Trajectories of Psychological Distress: A Mixed-methods Study
of Low-income Mothers Who Survived Hurricane Katrina.” Current Psychology
34(3):537-50.
Luker, Kristin. 2008. Salsa Dancing into the Social Sciences: Research in an Age of
Info-glut. Cambridge, MA: Harvard University Press.
Maxwell, Joseph. 1992. “Understanding Validity in Qualitative Research.” Harvard
Educational Review 62(3):279-301.
Miles, Matthew B. and Michael Huberman. 1994. Qualitative Data Analysis.
Thousand Oaks, CA: Sage.
Morris, Katherine Ann and Nicole M. Deterding. 2016. “The Emotional Cost of
Distance: Geographic Social Network Dispersion and Post-traumatic Stress in
Survivors of Hurricane Katrina.” Social Science and Medicine 165(1):56-65.
Price, Heather and Christian Smith. 2017, August. “Process and Reliability for Cultural
Model Analysis Using Semi-structured Interviews.” Paper presented at the Annual
Meeting of the American Sociological Association, Montreal, Québec, Canada.
Ragin, Charles, Joane Nagel, and Patricia White. 2004. “Workshop on Scientific
Foundations of Qualitative Research.” NSF Report 04-219, National Science
Foundation, Arlington, VA.
Silbey, Susan. 2009. “In Search of Social Science.” Pp. 76-82 in Workshop on
Interdisciplinary Standards for Systematic Qualitative Research, edited by M.
Lamont and P. White. Arlington, VA: National Science Foundation.
Small, Mario L. 2011. “How to Conduct a Mixed-methods Study: Recent Trends in a
Rapidly Growing Literature.” Annual Review of Sociology 37(1):57-86.
Strauss, Anselm L. 1987. Qualitative Analysis for Social Scientists. New York:
Cambridge University Press.
Strauss, Anselm L. and Juliet M. Corbin. 1990. Basics of Qualitative Research:
Grounded Theory Procedures and Techniques. London, England: Sage.
Timmermans, Stefan and Iddo Tavory. 2012. “Theory Construction in Qualitative
Research: From Grounded Theory to Abductive Analysis.” Sociological Theory
30(3):167-86.
Tollette, Jessica. 2013. “Nueva New Orleans: Race Relations and (Im)migration in
the Post-Katrina South.” Poster presented at the Annual Meeting of the Population
Association of America, New Orleans, LA.
Wacquant, Loïc. 2002. “Scrutinizing the Street: Poverty, Morality, and the Pitfalls of
Urban Ethnography.” American Journal of Sociology 107(6):1468-532.
Waters, Mary C. 1990. Ethnic Options: Choosing Identities in America. Berkeley:
University of California Press.
Waters, Mary C. 1999. Black Identities: West Indian Immigrant Dreams and
American Realities. Cambridge, MA: Harvard University Press.
Waters, Mary C. 2016. “Life after Hurricane Katrina: The Resilience in Survivors of
Katrina (RISK) Project.” Sociological Forum 31:750-69.
Waters, Mary C., Patrick J. Carr, Maria J. Kefalas, and Jennifer Ann Holdaway. 2011.
Coming of Age in America: The Transition to Adulthood in the Twenty-first
Century. Berkeley: University of California Press.
Weiss, Robert S. 1994. Learning from Strangers: The Art and Method of Qualitative
Interview Studies. New York: Free Press.
Weitzman, Eben A. and Matthew B. Miles. 1995. Computer Programs for Qualitative
Data Analysis: A Software Sourcebook. Thousand Oaks, CA: Sage.
White, Michael J., Maya D. Judd, and Simone Poliandri. 2012. “Illumination with a
Dim Bulb? What Do Social Scientists Learn by Employing Qualitative Data
Analysis Software (QDAS) in the Service of Multi-method Designs?” Sociologi-
cal Methodology 42(1):43-76.
Znaniecki, Florian. 1934. The Method of Sociology. New York: Farrar and Rinehart.
Author Biographies
Nicole M. Deterding produced this work while a National Poverty Center Postdoc-
toral Fellow with the Institute for Research on Poverty, University of Wisconsin–
Madison. Her research uses mixed methods to examine the role of postsecondary
education and training in the lives of economically vulnerable adults. Empirical
articles on which this article is based have recently appeared in Sociology of Educa-
tion and Social Science and Medicine.