Content Analysis: Method, Applications, and Issues: Health Care For Women International
To cite this article: Barbara Downe‐Wamboldt RN, PhD (1992) Content analysis: Method,
applications, and issues, Health Care for Women International, 13:3, 313-321, DOI:
10.1080/07399339209516006
CONTENT ANALYSIS: METHOD, APPLICATIONS,
AND ISSUES
In recent years there has been a growing recognition that both qualita-
tive and quantitative approaches are needed for advancing nursing sci-
ence. Morse (1991) has suggested, "researchers who purport to sub-
scribe to the philosophical underpinnings of only one research approach
have lost sight of the fact that research methodologies are merely tools,
instruments to be used to facilitate understanding" (p. 122). The most
recent controversial issue is not whether one method is intrinsically bet-
ter than another, but which combination of methods is best to meet the
aims of a particular study. An integral part of contemporary nursing
research, content analysis methodology offers the opportunity to com-
bine what are often thought to be antagonistic approaches to data analy-
sis.
The intellectual basis of content analysis can be traced to the beginning of conscious use of symbols and language. Before World War II, the method was primarily restricted to critique of journalistic endeavors, to document their religious, scientific, or literary content (Krippendorff, 1980; Speed, 1893). The simplistic reliance on counting words or phrases that was characteristic of the method in its early development is attributed to these journalistic roots. According to Weber (1985), the first large-scale application of the method was during World War II by the U.S. Office of Special Services to Nazi war propaganda. Although
Assessing Reliability
When is a study science and when is it one person's philosophy?
Controversy exists over whether reliability should be sacrificed for
meaning or vice versa. Researchers frequently face a difficult choice
between depth or level of understanding and repeatability or reliability.
The researcher wants the most empirically meaningful information without too much loss of reliability. Often compromises are essential. Whenever possible, the best solution to the dilemma between level of understanding and specificity is to use both latent and manifest content analysis
approaches to data analysis. In latent content analysis, the researcher is
concerned with the underlying meaning in each passage of the text.
Coding the underlying meaning or latent content focuses on the tone or
implied feeling, whereas coding the manifest content describes only the
visible, surface, or obvious components of communication. Field and
Morse (1985) described latent content analysis as reviewing data within
the context of the entire data set for each participant and manifest con-
tent as a check for specific instances of the categories. For example,
when the researcher's goal is to detail the psychological status of termi-
nally ill persons, using both latent and manifest content analysis to de-
scribe the tone, meaning, and number of references to dying will pro-
vide more insightful and meaningful results than would using either
approach alone.
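The manifest side of this combined approach, counting visible references to a category, can be sketched in a few lines of Python. The passages and search terms below are hypothetical illustrations, not data from the article:

```python
import re

# Hypothetical interview excerpts (illustrative only).
passages = [
    "I think about dying every night, but talking helps.",
    "The nurses make the pain bearable.",
    "Death does not frighten me the way it used to.",
]

# Manifest coding: count visible, surface references to one category.
dying_terms = re.compile(r"\b(dying|death|die|dies|died)\b", re.IGNORECASE)

counts = [len(dying_terms.findall(p)) for p in passages]
print(counts)        # references to dying in each passage
print(sum(counts))   # total manifest references
```

A latent analysis would then return to each flagged passage and code its tone and implied meaning in context, which no frequency count can supply on its own.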
Stability and agreement reliability are the most pertinent types of
reliability for narrative data. The extent to which the results of content
classification are consistent over time when the data are coded more
than once by the same coder (intrarater reliability) can be ascertained by
using Cohen's kappa for nominal data (Weber, 1985). Interrater reliabil-
ity, or agreement between two different raters coding the same data, can
be assessed using Cohen's kappa or percentage agreement (Polit &
Hungler, 1991). Detailed descriptions of methods to assess interrater
reliability have been provided by Krippendorff (1980) and Berelson
(1952). Test-retest reliability may be used to determine if the narrative
data are consistent from one time to another. Test-retest reliability may
be appropriate in coding interview data when consistency is expected;
conversely, it is inappropriate when change in the data is anticipated.
The larger the unit of analysis, the more difficult it is to achieve satisfac-
tory levels of reliability in coding data.
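To make the intrarater and interrater computations concrete, the following is a minimal Python sketch of Cohen's kappa and percentage agreement for nominal codes; the two raters' code lists are invented for illustration:

```python
from collections import Counter

def percentage_agreement(codes_a, codes_b):
    """Proportion of units on which the two coders assigned the same code."""
    return sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(codes_a)
    observed = percentage_agreement(codes_a, codes_b)
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # Chance agreement expected from each coder's marginal code frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two raters to ten text units.
rater1 = ["fear", "hope", "fear", "hope", "fear",
          "fear", "hope", "fear", "hope", "fear"]
rater2 = ["fear", "hope", "fear", "fear", "fear",
          "fear", "hope", "fear", "hope", "hope"]

print(percentage_agreement(rater1, rater2))   # 0.8
print(round(cohens_kappa(rater1, rater2), 2)) # 0.58
```

Note that the same function assesses intrarater reliability when the two lists are one coder's codes from two passes over the same data.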
If the researcher plans to use more than one coder for the study, the
reliability of the coding process must be assessed before all of the data
are analyzed and disputes among coders must be resolved (Weber,
1985). The coding rules will need to be revised if the reliability is low (a
kappa of 0.8 to 0.9 is the desired level, with 0.7 being the minimum
standard). After the revisions, more pretests of the coding scheme need
to be completed until an acceptable level of reliability is achieved and
before all of the data are analyzed. Human errors are always possible in
coding data and may be related to fatigue, personal bias, and perception.
Consequently, systematic checks of accuracy of coding are necessary
throughout the process. Interpretations of definitions that seemed clear
at the beginning of the project may change in subtle ways as familiarity
with the data increases. One of the advantages of computer programs for
coding data is the opportunity to achieve perfect reliability. Once the
content-coding schemes have been defined, an endeavor that requires
imagination, logic, creativity, and theoretical work, a computer will
consistently apply the rules to text.
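The consistency of computer-applied coding described above can be illustrated with a toy dictionary-based scheme; the categories and keywords here are hypothetical. Once the rules are fixed, the program applies them identically to every unit of text:

```python
# Hypothetical coding scheme mapping categories to keyword sets.
CODING_RULES = {
    "dying":  {"dying", "death", "die"},
    "coping": {"manage", "cope", "adjust"},
}

def code_unit(text):
    """Return every category whose keywords appear in the text unit."""
    words = set(text.lower().split())
    return sorted(cat for cat, terms in CODING_RULES.items() if words & terms)

print(code_unit("I manage the fear of death one day at a time"))
# -> ['coping', 'dying']
```

The creative, theoretical work remains in defining CODING_RULES; the machine contributes only perfectly reliable application of those definitions.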
Assessing Validity
Whose reality is the most accurate relative to the research question?
Few people share the same social, cultural, political, or historical per-
spective. Thus, it is absurd to think that one person can understand the
subjective experience of another person exactly as he or she has lived it.
What the researcher is able to know about the lives and thoughts of other
people is directly influenced by his or her personal history, areas of
interest, and focus. Consider the example of an elderly woman with
arthritis describing for her rheumatologist how she manages her activi-
ties of daily living under the adversity of functional losses associated
with arthritis. In this interaction the elderly woman relates her prob-
lems, as she experienced them, to a medical expert. Their perspectives
are assumed to be different. The rheumatologist may be interested in
detailing the disease process; the elderly woman, in describing her
unique difficulties. Thus meanings are always relative to the perspec-
tives of the sender and the receiver of the message.
The strategy of taking the results to the participants to have them
validate one's interpretations can be a useful approach to authenticate
the experiences of the participants. Achieving intersubjective agreement
would simplify the analysis; however, it is not a presupposition for con-
tent analysis (Krippendorff, 1980). The researcher, who has a broader
understanding of the historical context of social structures that have
influenced the actions of the players, may develop a broader understand-
ing of what is going on, in addition to the understanding that he or she
may share with the participants. Multiple meanings are always present
in data—there is no right meaning, only the most accurate meaning from
a particular perspective.
Validity has to do with what is being measured and how well. Validity
is confirmed or denied by returning to the original text to find examples
CONCLUSION
Content analysis provides a mechanism to yield interesting and theo-
retically useful generalizations with minimal loss of information from
the original data. "It may be applied to virtually any form of linguistic
communication to answer the classic questions of who says what to
whom, why, how, and with what effect" (Babbie, 1986, p. 268). Finan-
cial costs are usually minimal because content analysis essentially is a
coding operation involving logical and conceptualizing work that can be
completed effectively by only one researcher. Disadvantages associated
with this method include its being limited to recorded communications
(verbal, visual, or written data), the amount of time required to code
data, and the type of statistical procedures that can be applied to data.
By paying careful attention to the assessment of reliability and validity,
the researcher can minimize problems associated with these issues. Be-
cause of its focus on human communication, content analysis is particu-
larly well suited to research involving the practice and education of
nurses and other helping professionals.
REFERENCES
Babbie, E. (1986). The practice of social research (4th ed.). Belmont, CA: Wadsworth.
Beaton, J. I. (1990). Dimensions of nurse and patient roles in labor. Health Care for
Women International, 11, 393-408.
Berelson, B. (1952). Content analysis in communication research. Glencoe, IL: Free
Press.
Brink, P. J. (1991). Issues in reliability and validity. In J. M. Morse (Ed.), Qualitative
nursing research: A contemporary dialogue (pp. 151-168). Newbury Park, CA:
Sage.
Downe-Wamboldt, B. L., & Ellerton, M. L. (1986). A study of the role of hospice
volunteers. Hospice Journal, 1, 17-31.
Field, P. A., & Morse, J. M. (1985). Nursing research: The application of qualitative
approaches. Rockville, MD: Aspen.
Flaskerud, J. H., & Rush, C. E. (1990). AIDS and traditional health beliefs and prac-
tices of Black women. Nursing Research, 38, 210-215.
Fox, D. J. (1982). Fundamentals of research in nursing. Norwalk, CT: Appleton-
Century-Crofts.
Krippendorff, K. (1980). Content analysis: An introduction to its methodology (4th ed.).
Newbury Park, CA: Sage.
McLaughlin, F. E., & Marascuilo, L. A. (1990). Advanced nursing and health care
quantification approaches. Philadelphia: W. B. Saunders.
Morse, J. M. (1991). Approaches to qualitative-quantitative methodological triangula-
tion. Nursing Research, 40, 120-123.
Polit, D., & Hungler, B. (1991). Nursing research: Principles and methods (4th ed.).
Philadelphia: J. B. Lippincott.
Speed, G. J. (1893). Do newspapers now give the news? Forum, 15, 705-711.
Stern, P. N. (1991). Are counting and coding a capella appropriate in qualitative re-
search? In J. M. Morse (Ed.), Qualitative nursing research: A contemporary dia-
logue (pp. 135-147). Newbury Park, CA: Sage.
Stiles, W. B. (1978). Manual for a taxonomy of verbal response modes. Chapel Hill,
NC: Institute for Research in Social Science.
Weber, R. P. (1985). Basic content analysis. Newbury Park, CA: Sage.
Wilson, H. S. (1989). Research in nursing (2nd ed.). Redwood City, CA: Addison-
Wesley.