The Steps of Qualitative Data Analysis
Alvin Toffler (in Coveney and Highfield, 1991) said that we are so good at dissecting data that we often forget how to put the pieces back together again. This problem will not arise if description and classification are not ends in themselves but serve an overriding purpose: producing an account of the analysis. For that purpose we need to make connections among the conceptual building blocks of our analysis. Here the author offers graphic representation as a useful tool for analyzing concepts and their connections.
The author uses this chapter to introduce the computer as a main tool for qualitative data analysis. Here we find several ways in which a computer can help us analyze our data; detailed applications of these are covered separately in the following chapters. Fortunately, the explanations the author gives in this chapter are not tied to a particular research method: they can be applied to any research method that produces qualitative data.
To find a focus, we can use questions such as: What kind of data do we want to analyze? How can we characterize these data? What are our analytic objectives? Why have we selected these data? How are the data representative or exceptional? Who wants to know, and what do they want to know? These questions are not a sequence that must be followed in logical order, so researchers are free to use them according to their own priorities. Besides that, we can also draw on resources such as personal experience, general culture, and the academic literature to help find a focus.
How well we read our data may determine how well we analyze them. Accordingly, Dey gives reading its own place in qualitative data analysis. The aim of reading is to prepare the ground for analysis. Reading itself is not passive but interactive. How do we read data in an interactive way? The author mentions several techniques: (1) the interrogative quintet, asking Who? What? When? Where? Why? (these questions can lead in all sorts of directions, opening up interesting avenues to explore in the data); (2) the substantive checklist; (3) transposing data; (4) making comparisons; and so on.
In qualitative research much of our data may take the form of notes. Annotating data involves making notes about the notes (p. 88). To distinguish the two, Dey calls the notes about notes memos. In principle, we should record data as soon as possible.
In order to analyze our data, we must be able to identify bits of data. One way to do that is by grouping the data (the author also calls this creating categories). Here we put all the bits of data that seem similar or related into separate piles, and then compare the bits within each pile. We can also divide the items in a pile into separate sub-piles if the data merit further differentiation.
In practice we need not separate this activity from the previous one, but for clarity the author treats them as distinct activities. At a practical level, this activity involves the transfer of bits of data from one context (the original data) to another (the data assigned to the category). The bits of data are not actually transferred: they are copied, and the copy is filed under the appropriate category (p. 113). So the process is quite simple: copying and filing. Computer software has been designed to facilitate this task.
There are some general and specific decisions involved in assigning categories (we can find them on p. 125). After this, we are challenged to make further decisions, such as: should we assign other categories? Should we create a new category?
First of all, it is better to see the difference between a link and a connection, as shown in Dey's scheme contrasting the two. We use a link to establish a substantive connection between two bits of data, whereas in making a connection we connect two categories based on our observation and experience of links and how they operate. So links are the empirical basis for connecting categories.
There are two ways of making connections: connecting through association and connecting with linked data. In the first we identify correlations between categories, while in the second we identify the nature of the link between data bits.
The author starts this chapter with an interesting statement: "what you cannot explain to others, you don't understand yourself" (p. 237). It means that producing an account is not just something we do for the audience, but also for ourselves. Through the challenge of explaining ourselves to others, we can clarify and integrate the concepts and relationships we have identified in the analysis.
The techniques for producing an account are drawing diagrams, constructing tables, and writing text. To produce an account, we have to incorporate these disparate elements into a coherent whole. As the ultimate product of the analytic process, the account provides the overall framework for our analysis.
In the last part of this chapter we find the issue of generalization. There are two aspects of generalization: inference and application. Here the author notes that qualitative analysis often provides a better basis for inferring generalizations than for applying them.
The author also reminds us that computer software only provides a set of procedures that can replace or facilitate the mechanical tasks involved in analyzing data, not the creative and conceptual tasks the analysis requires. So the important and crucial tasks are still left to the analyst.
The procedures for qualitative data analysis that Dey proposes in this book are reasonably complete, because he guides us through all the steps we should take in qualitative data analysis, from finding a focus and managing data through to producing an account. By following these procedures researchers can complete their task of analyzing qualitative data. If we look at the book by Miles & Huberman (Qualitative Data Analysis, 1994), we find similar procedures, as shown in the following table.

Procedures of Qualitative Data Analysis in Miles & Huberman | Procedures of Qualitative Data Analysis in Dey
The way they present the procedures is also the same, namely as a logical sequence of steps, as described in the introductory chapter. But in practice both authors prefer an iterative model, because qualitative data analysis tends to be an iterative process (Miles & Huberman call it the Interactive Model (p. 12), while Dey uses the term Iterative Spiral (p. 53)).
By comparison, the book by Miles & Huberman presents a very clear structure. Every method or procedure is presented in the same pattern, beginning with the name of the method or procedure, the analysis problem, a brief description, and an illustration, through to the time required, so we can follow and understand each procedure easily. Each method of data display and analysis is described and illustrated in detail, with practical suggestions for the user on adaptation and use.
The language of Dey's book is rather difficult, because (1) the author tends to present every concept in long narration, and (2) most of the examples he uses are integrated into long passages of text and were unfamiliar to me (the author rarely uses examples from an educational setting). So I can say that the way he presents the content sometimes made the book rather difficult for me to understand.
Perhaps the author understands that long texts can make the reader bored and lose interest in the book. To counter this, he tries to present a large number of examples. Sometimes they are presented in the form of text, and at other times in the form of pictures, schemas, or graphics. But again, the examples presented as text are a little difficult to find and understand because they are frequently embedded in other text.
The quality of the examples is good in general, because most of them are taken from humor (created by Victoria Wood and the comedian Woody Allen) and from everyday life. They can help the reader become more interested and understand the book more easily. The pictures, schemas, and graphics in particular are clear and well presented. I think the author's experience as a researcher (having worked with a variety of qualitative methods), as a lecturer (in research methodology), and as a software developer (Hypersoft) helped him create such varied examples. But for me in particular (coming from an Eastern country), it is not always easy to understand Western humor. Besides that, for a reference (or tool) for researchers, it would be better to present more examples taken from other research (such as educational settings), because these would be more applicable for the reader (or user).
In writing this book the author uses many references. A few date from the 1960s and 1970s, and the rest are from the 1980s and early 1990s. But among these references, only three books discuss using computers for qualitative data analysis (Pfaffenberger (1988), Fielding (1991), and Tesch (1990)). So most of the book's content was created by the author himself, based on his experience as a researcher, lecturer, and software developer. This is understandable, because at the time such references were still rare.
Although there are some weaknesses, as mentioned above, this book is still a good reference (or tool) for qualitative data analysis. It does not address only a particular research method (or research question), such as a case study or survey; it can be used to analyze qualitative data collected through any research method. Moreover, analyzing qualitative data with the computer, as offered in this book, could be something very promising.
Finally, anyone interested in this book can find it in the Toegepaste Onderwijskunde (TO) Library, University of Twente, the Netherlands (register number: TO 3:167d39), the British Library (England), or the Library of Congress (USA), or can buy it over the internet (http://www.amazon.com) or from the publisher.
Chapter 15
Qualitative Data Analysis

The purposes of this chapter are to help you to grasp the language and terminology of qualitative data analysis and to explain the process of qualitative data analysis.
Data analysis tends to be an ongoing and iterative (nonlinear) process in qualitative research.
- The term we use to describe this process is interim analysis (i.e., the cyclical process of collecting and analyzing data during a single research study). Interim analysis continues until the process or topic the researcher is interested in is understood (or until you run out of resources!).
- Throughout the entire process of qualitative data analysis it is a good idea to engage in memoing (i.e., recording reflective notes about what you are learning from the data).
- The idea is to write memos to yourself when you have ideas and insights and to include those memos as additional data to be analyzed.
Data Entry and Storage

Qualitative researchers usually transcribe their data; that is, they type the text (from interviews, observational notes, memos, etc.) into word processing documents.
- It is these transcriptions that are later analyzed, typically using one of the qualitative data analysis computer programs.
Type of Analysis (columns) by Type of Data (rows):

Type of Data      | Qualitative analysis               | In-Between                    | Quantitative analysis
------------------|------------------------------------|-------------------------------|------------------------------------
Qualitative data  | Interpretation, thematic coding,   | Boolean algebra, qualitative  | Statistical analysis of text
                  | literary criticism                 | Boolean algebra?              | frequencies or code co-occurrence
Quantitative data | Interpretation of statistical      | Graphical displays of data,   | Standard statistics (e.g.,
                  | results; naming factors/clusters   | MDS                           | regression); multivariate methods
                  | in factor & cluster analysis       |                               |

http://www.analytictech.com/geneva97/whatis.htm
In a recent book Michael Patton writes, "As a good hammer is essential to fine
carpentry, a good tape recorder is indispensable to fine fieldwork" (Patton
2002: 380). He goes on to cite an example of transcribers at one university
who estimated that 20 per cent of the tapes given to them "were so badly
recorded as to be impossible to transcribe accurately-- or at all." Surprisingly
there is remarkably little discussion of tools and techniques for recording
interviews in the qualitative research literature (but see, for example, Modaff
and Modaff 2000).
This overview discusses the potential advantages of digital recording and provides some
technical background and a checklist of features to consider when buying a digital
recorder. It concludes with brief comments on the different types of recorder currently
available and the names of some of the leading manufacturers. As the technology is
changing rapidly and new recorders are appearing constantly there is little point in
recommending particular models as the recommendations would rapidly become
obsolete.
Why digital?
Audio Quality
The recording process used to make analogue recordings using cassette tape introduces
noise, particularly tape hiss. Noise can drown out softly spoken words and makes
transcription of normal speech difficult and tiring. Digital recorders generally have a
much higher signal to noise ratio. Less noise reduces the risk of lost data and results in
faster, less expensive and more accurate transcription.
Note that audio quality also depends on using a suitable external microphone or
microphones properly positioned near speakers in an environment with low levels of
ambient noise.
Digital Editing
There are cheap, sophisticated audio editing programs (e.g. Syntrillium's CoolEdit 2000)
that can be helpful if they are used with care. These programs can be used to adjust the
recording level, fix recordings in which one speaker sounds louder than another, reduce
unwanted background noise, filter unnecessary frequencies, silence personal or
identifying information to protect anonymity, and cut extraneous sections from the
beginning or end of audio files.
Archiving
It is easy and inexpensive to backup and archive digital audio recordings. When using a
compressed digital format such as MP3 it is possible to store an entire research project on
one or two CD-ROMs. However, because digital audio is readily copied and transmitted,
additional steps may need to be taken to ensure that original recordings are kept secure
and research participants' confidentiality is adequately protected.
Computer-based Transcription
Transcription software can greatly improve the usability and usefulness of transcriptions
(Muhr 2000). This is primarily accomplished through the automatic insertion of tags that
encode additional data during transcription. See for example DGA's Transcriber software,
which uses eXtended Markup Language (XML) tags. The most obvious use of tags is
synchronization of transcript files to their corresponding audio files. This facilitates
checking, correcting, or later referral as the researcher can go to any point in the
transcript and immediately play the corresponding segment of audio. Hopefully, these types of features will eventually be integrated into analysis software.
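To make the idea of synchronization tags concrete, the sketch below builds a tiny time-aligned transcript in Python. The element and attribute names are invented for illustration and are not the actual XML schema used by Transcriber; the point is only that each speaker turn carries timestamps that a tool can use to seek the corresponding audio.

    # Illustrative sketch of a time-aligned transcript. The tag and
    # attribute names here are invented; they are NOT Transcriber's schema.
    import xml.etree.ElementTree as ET

    transcript = ET.Element("transcript", audio="interview01.wav")
    turns = [
        (0.0, 4.2, "interviewer", "Could you describe a typical day?"),
        (4.2, 11.8, "respondent", "Well, I usually start at about seven..."),
    ]
    for start, end, speaker, text in turns:
        turn = ET.SubElement(transcript, "turn", start=str(start),
                             end=str(end), speaker=speaker)
        turn.text = text

    # A transcription tool can read the 'start' attribute of any turn and
    # seek the audio file to that moment for checking or correction.
    print(ET.tostring(transcript, encoding="unicode"))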
Technical background
Frequency Response
The audible range of the human ear is approximately 20 Hz to 20 kHz. The most
important frequencies for speech occur in the "mid range" between 250 Hz and 8 kHz.
Channels
Single channel or mono recording often works fine for interviews. Mono recording also
doubles the available record time when using a digital recorder. However, stereo
recording may be an advantage in some situations where the speakers are separated from
each other or where there are several speakers. To take advantage of stereo recording a
microphone setup that allows each microphone element to be positioned next to a
different speaker or set of speakers will be necessary. This will aid transcription by
making it easier to ensure that a good audio recording level is obtained for all speakers
and making it possible for the person doing the transcription to use the stereo separation
to help identify speakers and transcribe overlapping speech.
Recording Level
The level of the audio signal-- how much the microphone signal is amplified-- needs to
be set properly to make a good recording. If the signal is too strong it will be distorted;
too weak and the speech one wishes to record may be swamped by noise and difficult to
hear. The majority of cheap recording devices do not provide any visual display of the
level and set the recording level automatically. This makes recording easy but automatic
level control (ALC) can be problematic (Modaff and Modaff 2000). ALC constantly
adjusts the level to any audio input, even background noise during pauses in speech. This
may result in the level being frequently, although briefly, poorly adjusted to the speech
being recorded. ALC also changes the overall dynamics so that the difference between
loud and quiet speech is compressed.
Digital Audio
Digital audio is recorded by sampling a sound wave and assigning each sample a value.
The quality of the audio depends on the sampling frequency and the resolution, that is,
the range of values that can be assigned to each sample. The sampling frequency is
significant to the extent it needs to be at least double the highest frequency one wishes to
record. Music CDs use a sample rate of 44.1 kHz -- a rate more than adequate for
encoding frequencies up to 20 kHz. For recording speech, a sample rate of at least 16 kHz
will ensure good quality. Audio is normally encoded in 8 bits, 16 bits, or in some cases
higher resolutions. The higher the bit depth, the greater the number of amplitude values
that can be represented for each sample. An 8 bit resolution may be adequate for
recording speech for some purposes, but 16 bits is better.
CD quality digital audio corresponds to a sample rate of 44.1 kHz, encoded at 16 bits, on two channels. This works out to 44,100 samples/s × 2 bytes × 2 channels = 176,400 bytes per second, or roughly 635 MB per hour.
To record at this rate consumes a considerable amount of storage space. The same is true
of other forms of Pulse Code Modulation (PCM) audio, which is the usual format for
Windows WAV files and Macintosh AIFF files. Even if the encoding rate is reduced by
using a 16 kHz sample rate at eight bit resolution with one channel (which for many
purposes might be satisfactory for recording speech) the recording will still consume 57.6
MB/hr.
The solution to the space problem is compression schemes or codecs that use
psychoacoustic principles and other audio features to reduce the bit rate in ways that limit
the perceived quality loss of the audio stream. Common compression schemes include:
Fraunhofer MPEG 1 Layer 3 (MP3), Advanced Audio Coding (AAC), Adaptive
Transform Acoustic Coding (ATRAC), and Windows Media Audio (WMA). MiniDisc
uses ATRAC, which in standard mode, like CD audio, samples at 44.1 kHz, in stereo, and
encodes in 16 bits, but saves the audio in 1/5 the space without perceptible loss of quality.
Fraunhofer MP3 saves audio in 1/11 the space. A Fraunhofer MP3 audio file encoded at
32 kbps (22.05 kHz sample rate, with 16 bit encoding, mono) will provide good voice
recording for many purposes and only takes up 14.4 MB/hr. Newer codecs such as WMA
and AAC maintain perceived audio quality at even greater compression ratios.
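The storage figures quoted above follow from simple arithmetic: uncompressed PCM consumes sample rate × bytes per sample × channels, while compressed audio consumes just its encoded bit rate. A minimal Python sketch reproducing the numbers in this section (decimal megabytes, 1 MB = 1,000,000 bytes):

    def pcm_mb_per_hour(sample_rate_hz, bits, channels):
        # Uncompressed PCM: samples/s * bytes per sample * channels.
        return sample_rate_hz * (bits / 8) * channels * 3600 / 1e6

    def compressed_mb_per_hour(bitrate_kbps):
        # Compressed audio depends only on the encoded bit rate.
        return bitrate_kbps * 1000 / 8 * 3600 / 1e6

    print(pcm_mb_per_hour(44100, 16, 2))   # CD quality: 635.04 MB/hr
    print(pcm_mb_per_hour(16000, 8, 1))    # speech-grade PCM: 57.6 MB/hr
    print(compressed_mb_per_hour(32))      # 32 kbps MP3: 14.4 MB/hr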
Factors to consider
Many of the digital recorders designed specifically for recording interviews or meetings
are expensive, complicated, and geared to the needs of broadcast journalists. Other types
of digital recorder that are simpler and cheaper are often designed primarily as portable
music players or for simple dictation and may have some significant limitations when
used to record interviews and meetings. At the moment, there are few devices that fall in
the middle ground, but new ones are constantly appearing.
Cassette Recorders
While not digital, a cassette recorder can still be used to create digital audio files by re-recording cassette tapes to a computer equipped with a soundcard. Disadvantages are a low signal-to-noise ratio, limited recording time, and the need for analogue to digital conversion.
Handheld Devices
PocketPC and Palm devices can be used to record audio, but very few of these devices support the use of external microphones. Handheld computer devices may eventually appear with input jacks or add-ons that allow external microphones to be used.
Desktop and Portable Computers
Direct to computer recording may be the best and cheapest way to make digital
recordings of interviews done by phone when equipped with a good telephone coupler,
soundcard or USB audio input device (e.g. Griffin Technology's iMic), and recording
software. A computer may be cumbersome for field recordings. The latest ultra
subnotebooks (Sony, Toshiba, Fujitsu) are quite small and light, but availability is often
limited outside Japan. This is an expensive option but most people either own a computer
already or need one for other tasks.
Consumer Player/Recorders
Some portable consumer devices that are primarily designed for listening to music can be
used to record speech. At the moment, these devices are designed around either small
hard drives (Creative Labs, Archos) or solid-state memory storage (Pogo Products).
Reliability may be an issue with some of these devices. They nearly always lack a
microphone input jack as well as other features that would make them good field
recorders. That said, some of these devices have great potential and future developments
are worth watching.
MiniDisc Recorders
MiniDisc provides 'near CD' quality audio recording, is very portable, has long record times, and is relatively cheap (although the cheapest recorders should be avoided if they lack a microphone jack). MiniDiscs are often used by broadcast journalists and others as a cheap alternative to more expensive field recorders. Disadvantages include a poor computer interface: upload of audio files is only possible by real-time re-recording. MiniDisc also needs to be used carefully to ensure directory information is saved, or recordings will be lost.
Dictation Recorders
These small solid-state devices are designed to record memos, dictated letters, and the like. Some of the more expensive ones have microphone jacks and interface well with computers through a USB connection or removable flash memory cards. Most of these devices save audio in highly compressed formats, with low sampling frequencies and limited frequency sensitivity. These factors will limit audio quality. Future models are likely to support higher quality audio.
Professional Field Recorders
These recorders are designed for field recording of interviews by broadcast journalists. They are usually rugged and reliable, have sophisticated recording features, are generally larger than other portable recorders, interface well with computers, and are usually very expensive.
CD-R/RW Recorders
Marantz has recently started to sell a professional portable CD-R/RW recorder (CDR300)
designed for recording meetings and interviews. It is expensive but audio quality should
be excellent, blank discs are cheap, and audio is easily transferred to computer.
Personal Note
My own practical experience is presently limited to the use of cassette tape, MiniDisc,
and computer recording. I currently make recordings using either a MiniDisc recorder or
a computer. In the future, I plan to trade in my MiniDisc for a suitable solid-state
recorder. To learn about my approach to using digital audio in my own qualitative
research see: http://www.edc.org/CAEPP/resources/audio.asp.
Choosing a CAQDAS Package
A working paper by Ann Lewins & Christina Silver
CAQDAS - Computer Assisted Qualitative Data Analysis
Qualitative data analysis - see below
Content analysis - quantitative analysis of the occurrence of words and language
Interactive - good, instant hyperlinking (usually one click or double click) between an object, e.g. a code in one pane, and, e.g., its highlighted source context in another (KWIC is an aspect of interactivity)
In vivo coding - in the context of software, a short-cut tool for using a word or short phrase from the data as a code label
Autocoding or text search - a search for words or phrases in the data files
Hits - the initial finds in the data which result from the above
KWIC - retrieval of key words in context
It is not always easy to visualise exactly what a CAQDAS package offers when exploring it for the first time yourself. Equally, when asking someone else for their opinion, it is not always easy to know which questions you should be asking. Most of the software packages we are aware of and discuss regularly are excellent products in one way or several! Sometimes you choose the package that is already in situ and make good use of it, but if you have a choice about which software to purchase for your research project, you may be in some uncertainty about how to proceed. You may have a basic understanding of what a CAQDAS package will do, but the tools offered in each package differ subtly yet significantly.
What does this paper do?

This paper offers a summary of types of software for managing textual or qualitative data, as a starting point for thinking about which one may be most suited to the type of project and data you are working with and the way you like to work.

It provides more detailed information focused mainly on those traditionally categorised as Code-based Theory Building software packages (see below). We cannot provide an exhaustive description or comparison of all available CAQDAS software here, but we aim to highlight some of their key distinguishing elements in order to provide you with an early impression of each package.
o Firstly we provide a description of the tools that these software packages have in common.
o Secondly we provide information on some of the distinctive features of as many packages as we can. (This will be added to as we complete more software reviews and hear more opinions.)

It aims to address the most frequently asked questions that we at the CAQDAS Networking Project receive.

It aims to assist you in your search for more information by strongly recommending that you also visit the software developer websites as part of your decision-making process, where you can access and download demonstration versions.

It aims to review both commercially available software and open access or free software.
In order to help you make this decision the CAQDAS Networking Project provides a series of (FREE) Software Planning Seminars [http://caqdas.soc.surrey.ac.uk/softwareplanning.htm] where we raise some of the issues you may need to consider, and discuss how some of the software programs differ in the way they handle such aspects. We also provide an email and telephone helpline [[email protected], +44 (0)1483 689455] if you require more specific advice in choosing and using CAQDAS software.
What types of software do we categorise as CAQDAS?

Software which falls under the CAQDAS umbrella includes a wide range of packages, but its general principles are concerned with taking a qualitative approach to qualitative data. A qualitative approach is one where there is a need to interpret data through the identification and possibly coding of themes, concepts, processes, contexts, etc., in order to build explanations or theories or to test or enlarge on a theory. Many approaches (including, for example, participant observation, action research, grounded theory, conversation analysis etc.) broadly make use of qualitative perspectives, though each may emphasise particular processes or differ in their sequence. Qualitative researchers may also use quantitative data, for instance relating to the descriptive features of research participants (sex, age and so on), to help in the organisation of qualitative data. This assists in cross-referencing and comparing qualitative ideas across groups and subsets. This approach remains distinct from content analysis methodology, in which it is the statistics of word or phrase frequencies and their occurrence relative to other words or phrases across a textual dataset that are the basis of the analytic work. Although you will see later in this page that we refer to packages that deal with the content analysis approach in varying degrees, we only describe such packages in detail where they also incorporate a strong qualitative dimension.
What ranges and types of software support work with qualitative data?

We refer in part to a typology suggested by Miles and Weitzman (1995)1 concerning the handling and management of qualitative (and at that time, mostly textual) data. We make some reference to this typology because it is still current in some respects, but increasingly, with the advances of individual software described in this paper, the distinctions, for instance between software formerly labelled Code and Retrieve and the more extensive functionality in Code-based Theory Builders, have become blurred. Now very few software programs remain in the Code and Retrieve category. Similarly, some of the Code-based Theory Building software packages have taken on features more traditionally featured in Text Retrievers or Textbase Managers, e.g. content analysis tools, word frequencies, word indexing with Key Word in Context retrieval (KWIC) and complex text based searching tools. Conversely, by expanding to provide different add-on software to enable thematic coding, Text Retrievers and Textbase Managers are beginning to enable tasks originally only possible in Code-based Theory Building software.
Code-based Theory Building software

Both these and the earlier Code and Retrieve packages assist the researcher in managing the analysis of qualitative data, applying thematic coding to chunks of data and thereby enabling the reduction of data along thematic lines (retrieval), with limited searching tools and probably good memoing facilities. Code-based Theory Building software packages build on those tools and extend the collection of search tools, allowing the researcher to test relationships between issues, concepts and themes, to e.g. develop broader or higher order categories, or at the other extreme, to develop more detailed specific codes where certain conditions combine in the data. Some of the programs enable the graphic visualisation of connections and processes using mapping tools.
Text Retrievers, Textbase Managers

Tools provided in both these categories often provide more quantitative and actual content based analysis of textual data. There are complex and sophisticated ways to search for text and language, including the use of thesaurus tools to find words with similar meaning, to index all words contained in the text, to provide word frequency tables, to create active word lists, and to provide easy key word/phrase in context retrieval (KWIC). However, broadly summarising them in this fashion masks the huge variety of differing ways in which they each handle the quantitative analysis of content, and the importance given to particular aspects in each software. Generally speaking, some automatic content analysis often happens just as part of the process of importing data.

Some Textbase Managers have very sophisticated content analysis functions: creation of keyword co-occurrence matrices across cases, creation of proximity plots for identification of related keywords, charting and graph building facilities etc. Textbase Managers tend to offer more functionality than Text Retrievers and more possibilities to manage huge datasets in varied ways, but it must be said again that all these categories are becoming blurred, and we link to developer sites and software within these two categories without making any distinction between them.
1 Weitzman, E. & Miles, M. (1995) A Software Source Book: Computer Programs for Qualitative Data Analysis. Thousand Oaks: Sage Publications.
The theory building category itself is blurred by the increasing addition of some quantitative, language based content analysis features (MAXqda, with add-on software MAXdictio with good KWIC, and to a lesser extent ATLAS.ti). One or two of the Textbase Manager packages now incorporate thematic qualitative coding functions which can be integrated with the range of content analysis tools, e.g. the language based quantitative functions, so they offer a comprehensive range of both qualitative and quantitative approaches to data within one software (e.g. QDA Miner with the add-on Wordstat module, and CISAID). Where this is the case, we will try to provide information on the particular attributes of such software programs' tools (from a qualitative standpoint) and will continue to add more comment and comparative information to this online resource as we undertake a review of each software.
Which is the best CAQDAS Package?

This is perhaps the most frequently asked question we receive; however, it is impossible to answer! All the packages we work with and teach have their own range of tools to support you in the various stages of the analytic process. As such, they each have their own advantages and disadvantages.
Whilst we would argue that some software packages may be more suited to certain types of approach, their purpose is not to provide you with a methodological or analytic framework. The tools available may support certain tasks differently, and there is some debate as to whether a particular package may steer the way you do the analysis. However, as the researcher you should remain in control of the interpretive process and decide which of the available tools within a software can facilitate your approach to analysis most effectively. Whichever package you choose, you will usually be able to utilise a selection of tools which will help significantly with data management and analytic tasks.
Thinking about and using CAQDAS software should not necessarily be any different from other types of software package: just because a tool or function is available to you does not mean you will need or have to use it. You therefore need to think clearly about what it is you are looking to the software to help you with, and we would caution against choosing a package simply because it seems the most sophisticated. Conversely, and this may have something to do with your longer term plans, you may feel a more ambitious selection of tools will serve you better over time.
The Basic Functionality of CAQDAS Software: Key Similarities between Packages

Whilst there are many differences between CAQDAS packages, the key principles behind each of them in terms of facilitating the qualitative research process are similar in many ways (the ways in which they differ are described in the software specific sections later on).
Structure of work

The project that the user creates in the software acts as a container or a connector to all the different data files within the current working project. Internal databases contain the individual data files, i.e. when you import data it is a copy process. External databases connect to the data files, i.e. you assign data to the project, but the data remains in its original location. In either case, the opening of one project file enables immediate access to all components of the dataset. There are varying benefits and disadvantages to either system. Internal databases are easier to move around from computer to computer; external databases may cope better with very large datasets and are more likely to directly handle a range of multimedia files.
Closeness to data and interactivity

At the most basic level, the packages discussed here provide almost instantaneous access to all source data files once introduced into the project. Whatever tools you subsequently use, live contact with source data is always easy, which increases the researcher's closeness to data.
Explore the data

Text search tools offer ways to search for one word or phrase, or even a collection of words around the same topic area. Such searches usually provide good access to those parts of the documents where those keywords appear, allowing a fairly instant retrieval of topic related material, sometimes abbreviated as KWIC (Key Words in Context).
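To illustrate what a KWIC retrieval does under the hood, here is a minimal Python sketch over a plain string. The function is invented for illustration; real CAQDAS packages wrap this kind of retrieval in an interactive, clickable view.

    def kwic(text, keyword, window=30):
        # Return each hit with `window` characters of context on each side.
        hits = []
        lowered = text.lower()
        pos = lowered.find(keyword.lower())
        while pos != -1:
            left = max(0, pos - window)
            right = min(len(text), pos + len(keyword) + window)
            hits.append(text[left:right])
            pos = lowered.find(keyword.lower(), pos + 1)
        return hits

    interview = "We talked about coding. Coding the data took several weeks."
    for hit in kwic(interview, "coding"):
        print(hit)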
Code and Retrieve Functionality

They all offer code and retrieve functionality. User-defined key-words and/or conceptual categories (codes) can be applied to selections of (overlapping, embedded) text, and as many codes as required can be applied to the same text selection. Usually, the user has complete freedom concerning the structure of the coding schema and the coding strategies employed, e.g. whether inductive, deductive or a combination of both.

In general terms code generation is both easy and flexible, and the researcher is free to use a combination of strategies if desired and to refine coding if necessary. In all but one software reviewed here, the responsibility for thinking about each coding action rests entirely on the user. In Qualrus, however, the software (using Artificial Intelligence) learns from previous actions and makes suggestions.
In all packages, coded data can be retrieved, re-coded and output with ease. Software differs in the way coded information is provided and made visible in the data file itself, for instance in the margin area. While this can be produced by most of the software reviewed here, it may be the ease with which such a view can be generated, the flexibility of what can be shown and the interactivity of the margin area which present key differences.
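The code-and-retrieve idea itself is compact enough to sketch in Python. The structure below is a conceptual illustration only, not any package's storage format: codings are kept as character spans per document, so overlapping and embedded selections, and multiple codes on one selection, all fall out naturally.

    from collections import defaultdict

    documents = {"interview01": "I felt anxious before the move but settled quickly."}
    codings = defaultdict(list)          # code label -> list of coded spans

    def code(doc_id, start, end, *labels):
        # Apply one or more codes to the same selection of text.
        for label in labels:
            codings[label].append((doc_id, start, end))

    def retrieve(label):
        # Pull out every segment coded with `label`, with its source.
        return [(doc_id, documents[doc_id][start:end])
                for doc_id, start, end in codings[label]]

    code("interview01", 0, 21, "emotion", "anxiety")   # overlapping codes are fine
    code("interview01", 22, 30, "relocation")
    print(retrieve("emotion"))   # [('interview01', 'I felt anxious before')]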
Project Management and Data Organisation

All these packages also offer powerful means by which to manage the research project as a whole and to organise data according to known facts, descriptive features and data types. Any files which can be converted into the format(s) supported by the given software constitute data as far as the software is concerned.

The project management elements mean that these packages are not simply tools to facilitate the analytic stage of the research process. For example, much work can be done before data is introduced to the software, and as such, theory-building CAQDAS packages both reflect and significantly facilitate the cyclical nature which is characteristic of many qualitative research and analysis processes. Data organisation enables the researcher to focus on (combinations of) sub-sets of data, thereby facilitating comparison. Even when used in the most basic way, therefore, CAQDAS software packages all significantly increase the researcher's access to the different elements of their project and sub-sets of data.
Searching and interrogating the database

At any stage of the process all the packages offer means by which to interrogate the dataset. This includes searching the content of data based on the language used, as well as searching for relationships between codes as they have been applied to data (for example, co-occurrence, proximity etc.). Search tools also allow you to combine the coding (interpretive or conceptual) and organisational (descriptive) dimensions of your work.
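Continuing the span-based sketch above, one common interrogation, code co-occurrence, reduces to checking whether segments carrying two different labels overlap in the same document. Again this illustrates the idea only, not any package's implementation.

    def overlaps(a, b):
        # True when two (doc, start, end) spans intersect in one document.
        return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

    def co_occurrences(codings, label_a, label_b):
        # All pairs of coded segments where the two labels overlap.
        return [(a, b)
                for a in codings[label_a]
                for b in codings[label_b]
                if overlaps(a, b)]

    print(co_occurrences(codings, "emotion", "anxiety"))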
Writing tools

The process of qualitative data analysis is rarely linear, and the various writing tools (for example memoing, commenting, annotating etc.) offered by CAQDAS packages provide ways to increase the researcher's contact with his/her thoughts and processes, provided, of course, they are used in a systematic way.
Output

All reviewed software packages have a fairly standard selection of reports (output) which allow the user to view material in hard copy or integrate it into other applications, e.g. Word, Excel, SPSS. Those standard reports will usually include coded segments, either from one code or a selection of codes. The latter option is not always as straightforward as it seems, but when it happens it is accompanied by varying amounts of identifying information (the source document identifier and the coded segments themselves are usually the minimum; sometimes, but not always, the relevant code label is included above the segment).
Tabular output: usually simple tabular output is available, providing a breakdown of code frequencies which can be exported to Word, Excel or SPSS. Programs vary in the types of tables which can be generated. Output also varies in terms of the level of its interactivity with live source data. Results of searches can often be viewed both in output format and inside the software, integrated into the working project.
When the software supports the use of mapping or other graphic representations of coding schema etc., these can usually be exported and pasted or inserted into Word files.

NOTE: The fine distinctions in the ways that output can be varied by the choices the user makes in each software are not reviewed comprehensively, though we point out particularly distinctive or interactive forms of output.
Summary

The combination of these key aspects of CAQDAS packages means that the researcher is able to keep in reliable and continuous contact with the different elements of the project and the analytic processes. When used systematically and within the individual researcher's comfort zone, CAQDAS packages can aid continuity, and increase transparency and methodological rigour.

Deciding on which is the best CAQDAS software package is necessarily a subjective judgement and will probably be based on reaching a balance between a number of factors.
Some general questions to ask when choosing a CAQDAS package

What kind(s) and amount of data do you have, and how do you want to handle it?
What is your preferred style of working?
What is your theoretical approach to analysis, and how well developed is it at the outset?
Do you have a well defined methodology?
Do you want a simple-to-use software which will mainly help you manage your thinking and thematic coding?
Are you more concerned with the language, the terminology used in the data, and the comparison and occurrence of words and phrases across cases or between different variables?
Do you wish to consider tools which offer suggestions for coding, using Artificial Intelligence devices?
Do you want both thematic and quantitative content information from the data?
Do you want a multiplicity of tools (not quite so simple) enabling many ways of handling and interrogating data?
How much time do you have to learn the software?
How much analysis time has been built into the project?
Are you working individually on the project or as part of a team?
Is this just one phase of a larger project? Do you already have quantitative data?
Is there a package and peer support already available at your institution or place of work?
Coming soon: summaries of selected CAQDAS packages (see http://caqdas.soc.surrey.ac.uk/).
Coming first: ATLAS.ti, HyperRESEARCH, MAXqda (& MAXdictio), N6, NVivo, Qualrus, QDA Miner.
Also look out for summaries of freely available software (AnSWR, InfoRapid, Transana) and non code-and-retrieve based software (Storyspace).
Summaries of some Theory-Building CAQDAS Software Packages
(in alphabetical order)
ATLAS.ti V5, HyperRESEARCH V2.06, MAXqda, N6, NVivo, Qualrus

This section summarises some CAQDAS packages, but its only focus is the distinctive tools provided by each software program in the above or additional categories. Note that it does not seek to summarise the entire functionality of each software. We remind you that basic functionality available in all quoted software is listed in the above sections. For each software we have divided our work into main headings for ease of comparison:
Minimum Specifications
Structure of work - how your work and data is managed by the software
Data types and format
Closeness to data and interactivity (e.g. hyper-connectivity between coding & source text)
Coding schema - coding structures and organisation
Coding Processes
Basic Retrieval of coded data
Searching and Interrogating the database - distinctive tools
Teamwork
Going beyond code and retrieve - various functions enabling other ways to work
We strongly recommend that you visit individual software sites and experiment with and explore demonstration versions of the software before making a final decision.
ATLAS.ti (Version 5): distinguishing features and functions

ATLAS.ti was initially developed at the Free University, Berlin as a collaborative project between the Psychology department and Thomas Muhr (a computer scientist), as an exercise to support Grounded Theory. Subsequently, Thomas Muhr's independent company, Scientific Software Development, has continued to develop and support the software. http://www.atlasti.de
Minimum System Specifications (recommended by developer)
MS Windows 98 or later (W2000 or XP recommended)
RAM 64Mb (minimum), 256Mb (recommended)
25 Mb free disk space (minimum), 45Mb (recommended)
Structure of work in ATLAS.ti V5

ATLAS.ti functions using an external database structure, which means data files are not copied into the project but are assigned, and then referred to by the software when the project is open. The two main elements of the researcher's work in ATLAS.ti are:
1. the Hermeneutic Unit (HU), which contains only the records of assigned documents, quotation/segment positions, codes and memos and, because of the external database, is held separately from
2. the data files, e.g. transcripts, multimedia data etc.

The main functions of the software operate from main menus (always open in a typical view), a large selection of icons, and the main PD (Primary Document) pane, in which the currently selected qualitative data file is on display. In addition there are four main manager panes which allow Documents, Quotations, Codes and Memos to be created, assigned and managed.

[Screenshot: the Codes Manager window, with its comment bar; a separate Manager window can be opened for Documents, Quotations, Codes and Memos.]
Data types and format in ATLAS.ti V5

Textual data: Version 5 of ATLAS.ti handles either rich text format (.rtf) data files (which can be marked, edited and annotated at any stage, and into which other objects such as graphics, tables etc. can be embedded), or Word files (which cannot be edited after assignment to the HU). Coding can be assigned to any chunk of text, however small.

Multimedia data: digital video, sound and graphic files can be directly assigned to the project and coded using similar processes as are used to code text. Some of the supported multimedia formats include .jpg, .jpeg, .gif, .bmp, .wav, AVI video, .mpg, .mpeg and .mp3. See developer information for the complete list.
Closeness to data and interactivity in ATLAS.ti V5

The central functionality of ATLAS.ti is connected to the independence of quotations (or interesting segments of text), which can simply be selected and listed in the quotations list. They do not have to be coded. You can navigate around the list of significant quotations ('free quotations') that you make, and the quotations are then highlighted within their source context, or, in the case of multimedia, clips of the sound or video file are played back. In their listing, individual quotations can be renamed to be more descriptive, e.g. of each video clip. Similarly, the researcher is kept close to data throughout the analytic process; for example, when navigating around coded quotations in the data (whether across the whole database or within sub-sets of it) the original source context is always immediately visible around the coded segment. The margin view allows the researcher to gain an overview of how a data file has been coded, but it is also fully interactive; for example, highlighting a code in the margin will display the exact segment of text which has been coded, and codes and memos can be edited from the margin. Similar interactivity exists within the network (map) views.
Coding schema in ATLAS.ti V5

In the first instance, the coding schema is, in its main listing and structure, non-hierarchical; this is intended to free the user who wants instead to be able to create semantic or meaningful links between codes, going beyond any structural hierarchy. Codes can be initially created in network (map) view, e.g. to display a theoretical model. Hierarchical or semantic links can be created to express relationships between concepts, themes etc. These links can be seen in Network views or the Object Explorer, but not in the main codes list/manager windows.
Families of codes: any code can be put in any number of families/collections, allowing the user to filter to view, or get output on, just a family, or to create Superfamilies from combinations of families. Families of codes can represent small or large collections of codes representing theory or process or similar types of theme (Attitudes, Outcomes, etc.). A code can belong in any number of families.
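Conceptually, a family behaves like a set of code labels, and a superfamily like a set expression over families. A minimal Python sketch with invented code names:

    # Families as plain sets of code labels (names invented for illustration).
    attitudes = {"optimism", "scepticism", "resignation"}
    outcomes  = {"promotion", "resignation", "relocation"}

    # A code may belong to any number of families:
    assert "resignation" in attitudes and "resignation" in outcomes

    superfamily_any  = attitudes | outcomes   # codes in either family
    superfamily_both = attitudes & outcomes   # codes in both families
    print(superfamily_both)                   # {'resignation'}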
Margin display: fully interactive, with edit functions. The user can decide which objects to have on display in the margin, i.e. Codes or Memos or Hyperlinks, or all three.

Colour of codes in margin display: the user has no control over the colours used in the margin; these will vary according to the size of a segment or its relationship to larger/smaller segments at that point.
Coding Processes in ATLAS.ti V5

Codes can remain abstract/free of text, or can be dragged and dropped onto highlighted text or marked clips/quotations in the multimedia file, either from the Codes Manager window or from the codes area in the Object Explorer. Several iconised shortcuts can be utilised: coding to multiple codes at a time, in vivo coding etc. Codes and memos can be edited from the margin, i.e. the user can change the boundaries of the quotation (selection of text) coded, rename a code, merge codes, or unlink codes from quotations.
Basic Retrieval of coded data in ATLAS.ti V5

In ATLAS.ti there are several ways to retrieve data. Selecting a code and either using the navigation keys or double clicking allows the user to see in turn each coded segment highlighted (or played back) within the whole source context, whether the segment is textual or multimedia. Alternatively, coded data can be lifted out of context by asking for output, e.g. on a selected code, a family or collection of codes, or all codes. Segments are clearly labelled with source file name, paragraph number and code label.
Organisation of data in ATLAS.ti V5

Documents can be organised into subsets by putting them into Document Families. Tabular data in the correct format can be imported to quickly organise whole files into Families, e.g. based on socio-demographic characteristics, or the user can do so in a step-by-step way in one user-friendly window.
Writing tools in ATLAS.ti V5

ATLAS.ti allows the user to write in many different places.

Memos: memos are a main object/tool in ATLAS.ti and form the central place to write, with memo topics listed. Memos can be free or linked to points in the text. They can be categorised or put in collections, and filtered or exported. Memos can be linked to each other in the network/mapping tools, and they can be linked to codes.

Comments: in addition, each object manager in the software (Documents, Quotations, Codes, Networks, Families) has a comment bar in which the user can write ad hoc notes to describe an item.
Searching and Interrogating the database in ATLAS.ti V5

The Query tool: the Query tool allows the searching of coded data based on a number of standard Boolean, semantic and proximity based operators.

The Supercode tool: this is particularly useful, allowing the researcher to save the way a search/query has been built in order to re-run it later in the process. Having been created, the supercode in ATLAS.ti is listed as a special code in the code list, enabling up-to-date navigation around the coded results of complex searches in the same way as happens for normal codes, thus enabling an early hypothesis to be tested again later in the analytic process. Supercodes can also be combined with other codes in the formulation of an even more complex query.
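The supercode idea can be sketched in Python as saving the query itself rather than its results, so that re-running it later reflects any coding done in the meantime. This reuses the invented co-occurrence helper from the earlier sketch and is not ATLAS.ti's actual mechanism.

    def make_supercode(codings, label_a, label_b):
        # A saved query: segments where label_a and label_b co-occur.
        def run():
            return co_occurrences(codings, label_a, label_b)
        return run

    anxious_moves = make_supercode(codings, "anxiety", "relocation")
    early = anxious_moves()
    # ...more coding happens...
    later = anxious_moves()   # up to date without rebuilding the query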
Auto-coding - text searching: the usual tools are available, but autocoding of finds or 'hits' can be controlled as the search is done; the user can enlarge each find to include sentences or more text as the software finds and displays each hit in its full context (KWIC). Or the autocoding search can be done routinely without confirmation, enlarging each coded passage to the sentence, the lower level paragraph (separated by one hard return), collections of these paragraphs (ended by two or more hard returns), or the whole file.
Co-occurring codes in network view: this is, so far, a tool exclusive to ATLAS.ti 5. It allows the user to select a code in a network and see co-occurring codes: it will bring into the network any other code which overlaps or coincides in the text with the originally selected code.
Output in ATLAS.ti V5

Normal coded output is distinguished by the labelling of each segment with code, source document and paragraph numbers. Output can be varied to produce summaries: lists of codes and how they are connected to other codes. Tables which display the frequency of codes by documents can be filtered to show comparisons between document subsets and coding subsets (families). Both varieties can be exported, e.g. into Word or Excel, or printed. Multiple codes and their quotations/segments can be exported into one file in a clearly labelled format. Networks (maps) can be exported to Word, either via the clipboard or via generation into graphic file formats. The whole HU can be exported into an SPSS file. The export of the whole HU to an HTML file allows navigation around the project by the non-expert user of ATLAS.ti.

Word frequency tables can be exported to Excel or run in the software (not interactively connected to source context).

XML export: a Hermeneutic Unit exported to XML representation can be converted into a wide variety of other representations by using stylesheets. Individual reports, conversions into other programs' input formats, or the creation of clickable viewer versions of your HUs are among the options available. This relates to teamwork in that the XML export allows team members or interested non-users of ATLAS.ti to navigate around XML generated reports from a web page.
Team-working in ATLAS.ti V5

Teams working separately can now use shared data files from one shared drive, although they cannot yet work on the same HU at the same time. If the HUs/projects themselves need to be merged, a flexible Merge tool allows different models of merging HUs. The Merge tool allows the user to select from several options to create a tailor-made merge, e.g. to merge same codes or different codes, same data files or different data files.
Ability to go beyond Code and Retrieve in ATLAS.ti V5

Quotations
You do not need to code when looking at interesting data in ATLAS.ti, because a segment of text or quotation can be selected and viewed in the list of quotations just because it is significant. You do not have to decide to code the data in order to get back to or retrieve the significant quotations. Quotations are a central and independent feature of the architecture of ATLAS.ti software - a distinctive feature compared with other code-based software.
Hyperlinks between quotes - an alternative to coding
The centrality of the quotation structure enables effective hyperlinking between quotations/passages of text, rather than being dependent on coding as the mechanism for following a theme. This allows you to track (and subsequently jump between) places in the data in the sequence you want, in order to tell or explain, e.g., a story or a process more effectively. You can additionally express relationships between quotes (software defined relationships, e.g. explains, justifies, contradicts, or user defined connections, e.g. leads to, denies, reacts).
Mapping using the Networking tool
The Networking tool is flexible, allowing graphic, meaningful links to be created between quotations, codes, documents and memos. Hyperlinks between quotes can be created in Networks (as above). Codes can be linked together to express connections in a visual way, whilst certain relational links created between codes function as semantic search operators in the Query tool, retrieving e.g. everything connected to a code via a series of uni-directional transitive links. Any number of networks can be created, but links between any codes are meaningful, in that wherever two or more of the same linked codes appear in any network view they will be linked in the same way. Links can be reversed, broken, and changed in appearance.
[Screenshot: a network containing codes, quotations and memos]
Content analysis - word frequency
This tool counts the occurrence of words across the whole dataset or a single file, and the results can be saved into an exported spreadsheet file or into a memo. The list of words is not interactive, i.e. no KWIC (but see the Searching tools section for KWIC).
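A minimal Python sketch of such a word-frequency count, written out to a spreadsheet-readable CSV file (the file names are hypothetical; this illustrates the general technique, not the tool's internals):

import csv
import re
from collections import Counter

def word_frequencies(paths):
    counts = Counter()
    for path in paths:
        with open(path, encoding="utf-8") as f:
            counts.update(re.findall(r"[a-z']+", f.read().lower()))
    return counts

freqs = word_frequencies(["interview1.txt", "interview2.txt"])
with open("frequencies.csv", "w", newline="") as out:
    csv.writer(out).writerows(freqs.most_common())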
Object crawler
The object crawler allows the user to search for the use of strings (keywords, phrases etc.) in the entire HU/project: in data, codes, memos, networks and comments.
CAQDAS Networking Project Comment on ATLAS.ti:
ATLAS.ti V5 offers great flexibility and provides several different ways of working to suit different purposes. The software goes far beyond code and retrieve as a method of analysis, yet if that is all the user wants it is very easy to get to that point. The flexibility provided by the quotation structure and the ability to hyperlink between places in the data is useful, but if used extensively the user has to come up with an effective way of managing these linkages, and of finding the start points of such trails in the data.
The external database makes the process of saving and moving the HU and data slightly more difficult for users to manage. The single most asked-about issue concerns users who have moved their HU file without also moving their data. Version 5 and its altered Copy Bundle function in the Tools menu make this process easier.
Related to this issue, users who edit data files within the HU (now possible in ATLAS.ti 5) will need to take care in the management and diligent saving of changes to the individual data files. This issue has improved, but difficulties, if encountered, may relate to the synchronisation of changes between data files and the HU (the project).
Though it is possible to create code connections and collections of a hierarchical nature, the main working code list (Codes manager) does not have a functioning hierarchical structure to choose from. To some users this is a main attraction; to others it lacks choice, since hierarchical code structures often provide an easy way to systematically tidy up a codes list. Of course, once the user is much more familiar with the software, making hierarchies or collections of codes or linked codes is possible, using various tools, e.g. Families, Networks and Supercodes.
The Supercodes function in the Query tool could be an excellent way to place marker questions, or hypotheses, in order to test ideas and connections between themes, giving the easiest way possible to re-run simple to complex searches. The presence of all supercodes in the codes list is a constant reminder of previously interesting ideas, with the ability to double-click on them in the codes list re-running the original query on updated data.
The Query tool itself is easy to use for simple searches but can be daunting
because codes have to be chosen
before the search operator; some search operators have very precise
parameters and the user must be aware
of these to read results and their implications reliably.
The Co-occurring codes in the network function (see above) is unique to ATLAS.ti and one of its best improvements in version 5. No other software provides a simple tool which allows the user to choose any code and find out all other codes which overlap or co-occur with it in any way (in the data). This tool is particularly useful for identifying other more complex questions or trends in the data.
The Network tool is flexible, but if the user makes connections between codes too early in the analytic process it may be difficult to stay aware of links that have been made which are becoming less generalisable as more data becomes coded (unless, for instance, the links have been made for an abstract a priori theoretical model). Links between quotations, on the other hand, are made in situ for a specific situation or interaction.
The Object crawler provides an overview of the project as a whole and would help, for instance, in the recovery of notes that the user knows he has made but cannot remember where (within an HU).
Export of the HU (the project) to HTML is the simplest way to allow the non-user in a team to navigate and click around a project and see its workings. This facility is extremely useful in team situations and the presentation is user-friendly.
HyperRESEARCH: distinguishing features and functions
HyperRESEARCH is a software program developed by ResearchWare Inc.
Contact address for information is [email protected]. Web site: http://www.researchware.com
In its latest version 2.6 it is available for both Mac and Windows. It supports a full range of coding, annotation and searching tools for textual and multimedia data. It has a fairly unusual structure compared to other software, since its main focus and default unit of analysis is the Case.
Minimum System Specifications (recommended by developer)
MS Windows 95 or later, Mac OS8 or later
RAM 64Mb
5 Mb disk space
Structure of work in HyperRESEARCH
The project as a whole is called a Study. The database is external, so the source files are accessed by the Study, not contained within it. The Study is comprised of Cases, and Cases can have references to one source data file or many. The coded references are listed in the Case card for each case, and the coded references are hyperlinked to the source file. Case cards are interactively linked to their relevant references in the source files. See below.
[Screenshot callouts: 1. The user moves between Cases, not files. 2. Coding is listed in case cards. 4. Retrieval and navigation is by Cases. 5. A case and its case card can refer to one file or parts of many files. A coded reference is interactively connected to the coded part of a jpeg image file.]
Data types and formats in HyperRESEARCH
Both textual and many forms of multimedia data can be directly coded in HyperRESEARCH.
Textual data can be transcribed in any word processing application, but needs to be saved as Text only before it is opened as a Source file in HyperRESEARCH. Coding can be assigned to any chunk of text, however small. In the text source file window the user can customise the font settings (typeface and size).
Multimedia files must be in the formats used to save or compress them under Windows / Mac, e.g. .jpg/.jpeg (JPEG), .gif (GIF), .bmp (BMP), .wav (WAVE), .avi, .mov (MooV), .mpg/.mpeg (MPEG), .mp3. See developer information for the complete list.
Closeness to data - interactivity in HyperRESEARCH
There is no contact with source files from within the software until the user starts to code data and therefore
coded
references to data files start appearing on the Case cards. After that there is good contact with the whole
context of the
individual data files because annotations, case card coded references, and coded segments in reports are
all hyper-linked
(one click) to respective source file position, whether the file is textual or multimedia (e.g. the coded video
clip will
replay as the reference is selected). The coded margin display is interactive, but since there are no brackets
showing
the approximate size of coded segments, the user selects the relevant code in the margin to see where
exactly it has
been applied. Reports containing references to coded passages are also hyperlinked back to source file
positions.
Coding Schema in HyperRESEARCH
The coding schema is not hierarchical. The codes list is alphabetically organised. There are no easy ways to create hierarchical collections of codes except in the code map (one code map is enabled; see Searching and Interrogating for more information on how the code map works).
Coding processes in HyperRESEARCH
Whilst the correct case card plus source file is selected (on view), codes are assigned to selected text (any number to the same or overlapping chunks) by menu selection or by double-clicking on the code from the codes list. The
code
appears in the margin of the file and is interactively (one click) connected to the passage at that point. One
click on the
code will highlight coded text. Codes can be applied to areas/parts within a graphic file, or clips of a
sound/video file.
The coding reference also appears in the Case Card of the case that is selected. See screen shot
above.
Basic Retrieval of coded data in HyperRESEARCH
Since the case structure of HyperRESEARCH is so dominant, you might choose to first navigate around cases based on the presence or absence of codes, so that the cases on view are determined by the selection of codes.
Codes in context: navigate between case cards based on:
Selection of codes: by name, or by criteria, e.g. a code's presence or absence in a case, or its inclusion with other codes in a case. Hyperlink (one click) from case cards to coded references in textual or multimedia files.
Selection of Cases on the basis of various simple to complex criteria (on the basis of name or inclusion, exclusion, overlapping codes etc.).
Main output of coded retrieval using various criteria happens in reports (above).
[Screenshot: the report-building dialogue box allows the user to select various aspects for inclusion; hyperlinks to source text are included in Reports at each listed segment (one click).]
Organisation of data in HyperRESEARCH
Basic organisation can happen at the level of the case, but it is also possible to categorise whole or parts of
files by
coding.
Writing tools in HyperRESEARCH
Annotations: you can write comments attached to coded references on each case card; the annotations are flagged in a column next to the coded reference and are hyperlinked (one click) to both the text of the annotation and the source file position. These can be included in reports on coded data etc.
Searching /Interrogating the dataset in HyperRESEARCH
Selection of cases or codes: searching happens in the software simply by a process of filtering or selecting codes or cases. Every time the user selects codes, by name, by criteria (inclusion, overlaps etc.) or by code map, the case cards reflect the selection, since only those references asked for will appear. Every time the user selects cases, on the basis of name, code, or criteria, the cases which do not match the criteria are not present as the user browses the cases or makes a report. Searching is very useful and flexible when combined with reporting the results. See also the Reports section, under Coding and Retrieval above.
Hypothesis Tool: Works similarly to the selection of codes or cases, but allows the user to formalise the
searches.
You can test the existence or co-existence of codes in cases to do one of two things:
1. To add higher concept Themes to whole cases
2. To build rules in order to progressively test elements of a whole hypothesis. Hypotheses can be saved
and re-run
on newly coded data or used as the basis for new, edited versions of the test or for completely different
studies.
Themes applied to cases do not act like new codes in terms of hyperlinkages to source text, but they act, for example, as filters to allow navigation through Cases in which they DO appear or DO NOT appear, and can act as criteria for building retrieval, reports, or hypotheses.
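The logic of the hypothesis tool can be sketched as simple rules over the set of codes present in each case; when a rule is satisfied, a higher-level Theme is attached to the case. All names below are invented for illustration:

cases = {
    "case01": {"anxiety", "job loss"},
    "case02": {"optimism"},
}

rules = [
    # (codes that must all be present in a case, theme then added)
    ({"anxiety", "job loss"}, "economic stress"),
]

themes = {case: {theme for required, theme in rules if required <= codes}
          for case, codes in cases.items()}
print(themes)   # {'case01': {'economic stress'}, 'case02': set()}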
Searching by applying the code map: the code map view allows the user to create connections between codes; you can then select one code and instruct the software to select all codes within 1 or 2 etc. levels of connectivity (i.e. 1 = immediately connected to the selected code, 2 = connected at one remove). This then shows the user which cases, if any, have any of the codes defined by the Apply operation.
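In graph terms, the Apply operation is a breadth-first walk out to a fixed distance from the selected code. A hedged Python sketch, with an invented code map:

from collections import deque

code_map = {                        # undirected links between codes
    "service": {"local offices", "costs"},
    "local offices": {"service", "uniformity"},
    "costs": {"service"},
    "uniformity": {"local offices"},
}

def within(code, distance, graph):
    seen, queue = {code}, deque([(code, 0)])
    while queue:
        node, d = queue.popleft()
        if d < distance:
            for nxt in graph.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
    return seen - {code}

print(within("service", 1, code_map))  # {'local offices', 'costs'} (set order may vary)
print(within("service", 2, code_map))  # adds 'uniformity'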
Autocoding: the searching tool allows the user to search for words or phrases and code the results, with added context coded if wished, to include a required number of words or characters before and after hits. The user must proactively tell the software which files, within which cases, to search; settings can be saved and reloaded (and amended) for future autocode searches.
Output in HyperRESEARCH
See reports above (in the Coded retrieval section).
Report parameters can be saved and loaded again. The information saved includes the original code and case selections, as well as the information you wished included in the report.
Code Matrix: code frequency across Cases can be exported to e.g. Excel.
Code Map can be exported into Word.
Teamworking in HyperRESEARCH
There is no straightforward way to merge different projects, though the developers can help with this if it is
required.
Ability to go beyond code and retrieve in HyperRESEARCH
Code-mapping tool
Use the Code Map Window to graphically represent relationships between your codes. Any operation in the map has to be preceded by selection of the appropriate function button (see also the Searching and Interrogating data section).
CAQDAS Networking Project Comment on HyperRESEARCH:
The software is cross-platform, for both Mac and Microsoft Windows/XP users, and therefore belongs to a very small body of software for qualitative data analysis specifically written and kept up to date for the Mac user. The software is simple to use.
The unit of analysis is the case, not the file; this may appeal to some users. Any number of files (or just one) can be referenced to a case (or to any number of cases). This means files are not hard-wired to cases, so when in coding mode the user must always be aware which case is open, so that the coding for a file goes to the correct case card.
The hypothesis tester, and the Themes which can then be assigned to cases as a result of testing, provide an understandable and clearly visible way of categorising cases by higher concepts. The fact that further hypothesis tests can include these themes as criteria for selection underlines the importance of understanding how, at this stage, the case increasingly becomes the unit of analysis (though of course you may only have one file in each case).
Interactivity and hyperlinks between case cards, reports and annotations are very good. One click, or at most a double click, provides access to the whole file with the relevant data highlighted.
The report builder and the subsequent reports are unusual and very useful. The user has complete control over how much information is included in the report. In the report itself, the coded segments are clearly labelled and hyperlinked to source text (one click), whether the actual text of the segment is included in the report or whether it is just a listed reference to the segment.
The code map is rather cumbersome to use, but serves a different purpose to the other mapping tools mentioned. It mainly acts as a filtering/searching device. Its utility for graphically describing a multiplicity of relationships is limited, since the user must make changes to the one code map permitted in a study.
The software has no backdrop/background, so the interface does not obscure other non-minimised applications. This can be a nuisance.
There is no simple way to pack up all the elements of the Study (i.e. the Study file and also all the source files) if, for instance, you wish to move the files to another computer.
MAXqda: distinguishing features and functions
and MAXdictio (add-on module)
MAX, then WinMAX Pro, and now MAXqda www.maxqda.de are part of a software stream originally
authored by Udo
Kuckartz in order to cope with the management and analysis of political discourse data on the environment.
Its
application has widened to multiple academic disciplines and applied research areas. Optional add-on
software
MAXdictio (extra cost) expands its functionality to provide basic content analysis style indexing and counting
of all
words in the dataset. Contact for information: [email protected]
Minimum System Specifications (recommended by developer)
MS Windows 98 or higher
RAM: 64MB (minimum)
The Structure of work in MAXqda
MAXqda's internal database means that data files (documents), once imported, are contained within the Project, and are moved or saved as part of the project.
Data types and format in MAXqda
Textual data: files have to be saved in Rich Text Format; no other data format is allowed.
[Screenshot callouts: user-defined Text groups act as organising containers for data, into which individual data files can be imported or in which blank files can be created; retrieved segments, based on activated codes and activated text files, each interactively linked to the source file (top right); hierarchical coding schema; interactive margin (colour-codable codes).]
Rich Text Format allows full editing rights on the data once in MAXqda. Objects, text boxes and graphics
should not be
contained in the Rich Text file. Any amount of text can be selected for coding purposes.
Closeness to data - Interactivity in MAXqda
Interactivity from coded segments in the Retrieved segments pane back to the source data file, and between codes in the margin and source data, means that the user is very close to the whole context of the source files. Interactive frequency tables (varied by activation of different files) and results of text searches are all interactively connected to source data (one click). The compact user interface with 4 main windows enhances the user's interactive contact with the whole dataset.
Coding schema in MAXqda
The coding schema can be as hierarchical or as un-hierarchical as required. Drag and drop allows
easy re-organisation
of coding schema and hierarchical structure.
Colour of codes in margin display: User can assign codes a specific colour (from a limited selection
of colours)
to emphasise particular aspects of coding better in the margin display
Activated coded segments can be printed or saved as a file for export into e.g. Word.
Coding Processes in MAXqda
Any selection of text can be dragged onto a code (to assign the code to the text), or the coding bar above the text window can be used to assign recently used codes.
Undo recent coding actions at the coding bar above the text window.
Coding margin display: codes appear in the margin of the Text browser window, allowing the user to click interactively on a code in the margin to highlight the relevant text. The Text browser window with margin can also be printed. Colour attributes assigned to certain codes appear in the margin display.
Weight: coded segments (for certain types of codes) can be assigned a weighting within a range of 1 to 100. A weight of 100 is assumed unless otherwise assigned.
Code frequency table: interactively shows the frequency of codes across the whole dataset, and also lists frequencies for just the activated files, to enable quick comparison across different subsets of data as they are activated.
Basic Retrieval of coded data in MAXqda
Activation: this is the central retrieval principle of MAXqda, allowing the user to control the way retrieval of selected codes for selected files appears. Put simply, the user activates the code required, then activates the files on which retrieval is required. Segments can be activated by Text groups or Sets of files (see below). This principle of activation, and its effect when combined with the 4 main functioning panes, is what makes MAXqda easy to grasp and manipulate.
Activate by weight: activation (retrieval) of, e.g., a code based on a particular weight range which has been assigned to segments.
Retrieved coded segments appear with interactive source data file information, allowing (one click) hyperlink connection from each segment to its in-context position in the data file. See the lower right pane in the screen shot above.
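The activation principle reduces to a simple filter: a segment is retrieved only if both its code and its source file are activated, optionally subject to a weight threshold. A Python sketch with invented segments:

segments = [
    {"file": "int1.rtf", "code": "trust", "weight": 100, "text": "..."},
    {"file": "int2.rtf", "code": "trust", "weight": 40,  "text": "..."},
    {"file": "int2.rtf", "code": "risk",  "weight": 100, "text": "..."},
]

def retrieve(active_codes, active_files, min_weight=1):
    return [s for s in segments
            if s["code"] in active_codes
            and s["file"] in active_files
            and s["weight"] >= min_weight]

hits = retrieve({"trust"}, {"int1.rtf", "int2.rtf"}, min_weight=50)
print(len(hits))   # 1 - the low-weight segment is filtered out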
Organisation of data in MAXqda
Variables: the assignment of e.g. socio-demographic variables and values to data files, or parts of data files, to allow the interrogation of subsets of the data, is possible step by step or by importing tabular information.
Sets of documents: dragging data files into new Sets (which creates shortcuts to the original files) means that the user can activate or switch on files on the basis of sets - useful for e.g. code retrieval, for frequency tables, or for retrieval of all memos written about any of the Set's documents.
Writing Tools in MAXqda
Attach memos to areas in text; these are then flagged up and opened in the margin display.
Link memos to topics (codes) to enable overview and easy collection, listing and export of all notes written anywhere in the project about that topic. Print or export any memo and its content, or a collection of memos.
Retrieval of memos into collections: e.g. all memos linked to a particular code, all memos within one document, all memos for a Set of documents, or all memos for a text group can all be printed or saved as a file.
Searching and interrogating the dataset in MAXqda
Interrogation of the coding schema happens through simple to complex states of activation, and there are several more complex search operators (activation using logic), e.g. followed by, near.
Using the weight filter adds an extra dimension to any search performed.
Autocoding /Text search: a range of words or expressions can be searched for. The list that is
produced is
interactively connected to source data. The minimum unit of text saved (if the finds are coded) is always the
whole
paragraph around each hit. The search expression can be saved and reloaded.
Output in MAXqda
Any of the 4 main panes of the user interface (and their contents) can be printed or exported into file format (Text manager pane, Text Browser pane (with codes in margin), Codes list or Retrieved segments pane).
Memos can be output in one file to be opened in WordPad or Word (either all memos, or all memos linked to one data file or a set of files, or all memos linked to one code).
Tabular data: any interactive table produced inside the software can also be exported into Excel or .dbf format.
Ability to go beyond Code and Retrieve
MAXdictio (add-on module to MAXqda; cost is extra)
Functions which are added:
Word frequency across all files
Produces an interactive frequency index of all words. The user can choose any word from this list to produce a further index of occurrences of the word, each hyperlinked to its position in the source data (one click) - see screen shot below.
Word frequency across activated (or selected) files
Create stop list: allows the user to build a list to exclude certain words from the count, e.g. and, but, is, etc. (or this can be done by double-clicking on any word in the Frequency list)
Create dictionary: allows the user to build a list of active words s/he is interested in
- a further button allows use of this dictionary to govern the functioning of other buttons
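Both list types amount to filters on a word count: a stop list excludes words, while a dictionary restricts the count to the researcher's active words. A Python sketch with invented word lists:

import re
from collections import Counter

text = "the service improved and the costs rose and rose"
words = re.findall(r"[a-z']+", text.lower())

stop_list = {"the", "and", "is", "but"}
counts = Counter(w for w in words if w not in stop_list)

dictionary = {"service", "costs"}        # researcher-defined active words
active = Counter(w for w in words if w in dictionary)

print(counts.most_common(1))   # [('rose', 2)]
print(active)                  # Counter({'service': 1, 'costs': 1})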
Teamworking in MAXqda
MAXqda comes with various tools allowing various types of merging. Individual documents can be imported together with their coding from other team members' projects, or a whole project's database can be imported into another.
CAQDAS Networking Project Comment on MAXqda:
The MAXqda user interface is very appealing and tidy, with auto-arranging but resizable windows allowing easy customisation and focusing on particular element(s) of work. The compact user interface with 4 main windows repeatedly enhances the user's interactive contact with the whole context of source data.
Having said that, the user interface may feel rather cramped with larger datasets, especially on low-resolution screens (e.g. 800x600 pixels would be too low).
It is intuitive and simple software, easy to get to grips with - perhaps particularly good for teaching students.
Very good memoing and writing tools, with easy systematic retrieval options which may be particularly useful in team situations.
In the team situation, users can be selective about which items they merge
together into one larger project to
be able to compare or continue working cumulatively.
Interactive and colour margin display of coding, which prints out very easily
and satisfactorily.
The simple activation of single or multiple codes (OR) or intersecting (AND) multiple codes is easy, but the more complex proximity search tools are more difficult to make use of.
The autocoding tools are perhaps less flexible than in other software, because the only units of context to which you can autocode are the search string itself or the paragraph.
With the addition of MAXdictio the software has a small but useful range of content analysis and word frequency tools not currently available in other code-based packages, because they provide KWIC retrieval.
QSR N6: some distinguishing features and functions
N6 is the latest product from Qualitative Solutions and Research (QSR) in the NUD*IST range of software (Non-numerical Unstructured Data: Indexing, Searching and Theorizing). It was originally developed at La Trobe University, Melbourne, Australia by Tom Richards to support social research by Lyn Richards. Subsequently QSR International was formed, which now develops and supports all QSR software: N6, NVivo, Merge for NVivo and XSight.
http://www.qsrinternational.com/
Minimum System Specifications (recommended by developer)
MS Windows Me, 2000, XP
RAM : for Me 64Mb, for 2000 and XP 128Mb
15 Mb disk space required (plus space for project data files)
Structure of Work in N6
N6 functions with an internal database, which means that all individual data files are imported into the Project, and so wherever the Project is placed on your computer the data files will be inside it. There are two main elements to the database: the Document Explorer (where all the data files (called documents) are held) and the Node Explorer (where the coding schema is housed). The Project Pad provides access to the main functions and has 3 tabs: Documents, Nodes and Commands.
[Screenshot callouts: hierarchical or un-hierarchical coding schema, with code definitions in the description area; In Vivo coding and Coding Bar.]
Data types and format in N6
N6 directly handles plain text (.txt) data files only, although internal references to other external data formats are possible. The researcher has a choice of the minimum unit of text to which to apply codes: line, sentence or paragraph. This then becomes the text unit. Editing of text is possible, but only on one text unit at a time.
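The text unit choice can be pictured as the rule used to split a file when it is prepared, as in this simplified Python sketch (the splitting rules are illustrative, not N6's exact behaviour):

import re

def text_units(raw, unit="paragraph"):
    if unit == "line":
        parts = raw.splitlines()
    elif unit == "sentence":
        parts = re.split(r"(?<=[.!?])\s+", raw)
    else:                                  # paragraphs: blank-line separated
        parts = re.split(r"\n\s*\n", raw)
    return [p.strip() for p in parts if p.strip()]

doc = "First answer.\n\nSecond answer. It has two sentences."
print(text_units(doc))                     # 2 paragraphs
print(text_units(doc, unit="sentence"))    # 3 sentences

Whatever unit is chosen becomes the smallest chunk a code can attach to, which is why the decision has to be made before coding starts.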
Closeness to data - interactivity in N6
Documents can be viewed and annotated before any coding takes place. Viewing coded data lifts coded segments out of their source position, though the user can return to the relevant position in the source file (2 clicks away). There is no margin view showing how a whole document has been coded within the software (although an output report can be generated - see below).
Coding schema in N6
The N6 coding schema has a hierarchical and non-hierarchical area allowing the researcher to work in a
combination of
ways. However, although codes can be moved around the coding schema by cutting and attaching,
there is no drag
and drop option for the re-organisation of the coding schema.
Coding is facilitated by a coding bar at the base of the screen, allowing the very quick creation and application of new codes to selected text units. In addition, an In Vivo coding tool provides easy generation of new codes based on words or phrases in the data, although it is up to the user to note which document the code originated from.
Quick coder tool: supports the researcher who prefers to code on paper and subsequently input codes within N6 without having to go through the source data again.
Basic Retrieval of coded data in N6
Making a code stripes report: allows the user to select up to 26 codes to view in a document's margin (or a code/node report).
Retrieval of coded data: make a report or browse all data coded at a topic by lifting it out of the original source data files, spread the context / jump to the source data (2 clicks), or continue to code (code-on / re-code) and create new categories.
Export coding schema to mapping package Inspiration or Decision Explorer to manipulate
connections between
themes and issues.
Organisation of data in N6
The organisation of data happens at the coding level within N6 and can be achieved semi-automatically by importing tabular information. This can be useful where large amounts of data are being analysed and/or when quantitative (descriptive) information concerning interview respondents, for example, is already held in spreadsheet format.
Writing Tools in N6
One memo can be created for each document, and for each node/code. Annotations, which can be coded, can also be inserted in the text, occupying a new text unit, but this alters text unit numbering. All annotations can then be browsed, or combined in searches with thematic codes. Text units can also be edited (one at a time), so this can also be a method of embedding notes in the data.
Searching and Interrogating
N6 includes a range of sophisticated search tools; graphic descriptions of search operators, included in the
user
interface, are useful when learning the software. The results of any type of search are automatically saved
as a new
code thereby easily facilitating the process of asking further questions based on the results of previous
findings.
The way a search is performed can also be saved as a command file for re-running on different or
accumulated data.
Qualitative cross-tabulations - interactive tables: unique to QSR software at present is the ability to carry out qualitative cross-tabulations (matrix searches), which may be useful resources at the far end of the research process. Interactive tables, which summarise the results with a variable count, give access to the relevant qualitative data from each cell of the table.
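The shape of such a matrix search can be sketched as counting, per cell, the segments coded at both a thematic code (row) and an attribute code (column), while keeping the segments themselves for drill-down. All data below are invented:

from collections import defaultdict

segments = [
    {"codes": {"trust", "female"}, "text": "seg A"},
    {"codes": {"trust", "male"},   "text": "seg B"},
    {"codes": {"risk",  "female"}, "text": "seg C"},
]

rows, cols = ["trust", "risk"], ["female", "male"]
matrix = defaultdict(list)
for seg in segments:
    for r in rows:
        for c in cols:
            if {r, c} <= seg["codes"]:
                matrix[(r, c)].append(seg["text"])

for r in rows:
    print(r, [len(matrix[(r, c)]) for c in cols])
# trust [1, 1] / risk [1, 0]; each cell still holds its segments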
Text search / autocoding tools can be flexibly applied, e.g. to all data files, to data coded in a particular way, or to data NOT coded in that way. These can be performed individually, or multiple searches can be combined in command tools. The hits from these searches can be saved with surrounding context: either the text unit (maybe a line/sentence/paragraph), a paragraph (whatever the text unit), or the section.
The ability to go beyond Code and retrieve in N6
Automation and Command files
A number of data processing tasks can be automated using command files, and an easy-to-use, intuitive Command Assistant facilitates the process of writing and building the command files. Command files can automate a range of (repetitive) clerical tasks, and scripts written in the command file language can be saved in order, for example, to be re-run on new waves of data.
Teamworking in N6
N6 software is supplied with its own Merge software, allowing different team members to work independently on their own parts of a project, with the aim of merging projects into one project at a later point or at successive points. The merge must be planned for, and coding schemas and datasets can be merged using various models described by the Merge help notes/manual. Experimental merges should be performed to see the effects of such operations, and to see whether further preparation of individual projects is necessary before the merged information is used seriously.
CAQDAS Networking Project Comment on N6:
The decision you have to make about text unit type (or the minimum
codeable chunk) means that you need a
good awareness at an early stage of how you will use certain tools at a later
stage. To start with you may not
be thoroughly familiar yet with all the tools or what they will do for you, or
how the ways that data are prepared
can affect the efficiency of the software later; for instance text search tools,
and some command files can
make use of structures and syntax within the data that need to be in place
before, or shortly after, the data is
imported. This can make the initial familiarisation and/or teaching of the
software a little problematic.
The text unit structure also means the coding process may not be as flexible as in other packages. Conversely, however, the text unit structure may be particularly useful for large or longitudinal projects as it allows coding (of large amounts of data) to be achieved quickly. Additionally, the text-only format for data files means that searching tasks and browsing codes may happen more quickly than in other software which uses Rich Text Format.
The automation (e.g. with command files) which it is possible to build in to a
large N6 project exceeds what is
possible in other packages. Requiring such automation would therefore be a
good reason to choose N6.
Excellent range of search tools in N6, with fairly user-friendly dialogue boxes.
Although a report can be generated to show coding in the margin, it is not instant and it is very restrictive and limited compared to other margin displays.
Data can be edited but only by opening up each text unit individually and
changing and saving that text unit.
Not to be recommended on a wide scale.
The Quick coder tool, new to N6, is really useful, allowing those who prefer to code in the more traditional way on hard copy to do so and then transfer their coding to N6 quickly and easily.
NVivo (Version 2): some distinguishing features and functions
NVivo is the sister package to N6 and is also developed by QSR. It is therefore similar to N6 in a number of ways, for example in its database structure, coding schema and some of its functionality. However, both packages have distinctive and separate strengths (see N6 above), and although NVivo offers a range of additional tools and flexible ways of working it should not simply be seen as the latest version in the NUD*IST range of software packages. Rather, it is a package in its own right and enables inquiry to go beyond coding and retrieval.
http://www.qsrinternational.com/
Minimum System Specifications (recommended by developer)
MS Windows Me, 2000, XP
RAM : for Me 64Mb, for 2000 128Mb, for XP 256Mb
40-125 Mb disk space required (plus space for project data files)
Structure of work in NVivo 2
The user creates a Project and into that the individual data files are imported. The internal database means that when the user moves the project the data files are moved within it. The project itself becomes a folder on your computer, within which various other folders sit.
The main Project pad provides initial access to the main functions and has two foci, Documents and Nodes. Selecting either of these changes the buttons on the pad, affording more or less symmetrical functions for handling the associated tools for documents (individual data files) or nodes (codes, or coding-related tasks).
Data types and format in NVivo 2
Files which can be imported for direct handling in NVivo must be in Text only or Rich Text Format (.rtf).
Full editing rights on data files: they can be marked, edited or annotated at any stage. This also means the researcher's own writing, in the form of a memo or annotation, can be easily coded.
Blank files can be created within the software to contain analytic notes about documents or codes, or perhaps to transcribe summaries of multimedia files. Proxy files can represent other files outside the NVivo database.
Links can be inserted in NVivo textual documents to external files, e.g. any multimedia file or other file which is accessible on your computer. Multimedia files cannot be coded, although abstracts or proxy documents (which can have links to the multimedia file as a whole) can be coded normally.
[Screenshot callout: internal annotations (databites) can be anchored to points in the text (see green underlining) and re-opened when required. Or a report of this document will list and show the text of annotations (and other links) as endnotes.]
Source: http://caqdas.soc.surrey.ac.uk/Choosing%20a%20CAQDAS%20package%20-%20Lewins&Silver.pdf
Cognitive mapping:
Getting Started with Cognitive Mapping
by Fran Ackermann, Colin Eden and Steve Cropper, Management Science, University of
Strathclyde. Copyright 1993-1996. All rights reserved.
Abstract
Cognitive Mapping is a technique which has been developed over a period of time and through its application has demonstrated its use for Operational Researchers working on a variety of different tasks. These tasks include: providing help with structuring messy or complex data for problem solving, assisting the interview process by increasing understanding and generating agendas, and managing large amounts of qualitative data from documents. Whilst Cognitive Mapping is often carried out with individuals on a one-to-one basis, it can be used with groups to support them in problem solving.
This tutorial aims to:
Summary
This web page was converted from a help file, but was originally a conference paper,
produced to help attendees with Cognitive Mapping. We have produced this version of it
to provide further information to people interested in using the Decision Explorer
package. The help file is supplied as part of the Decision Explorer package, from
Banxia Software. It is not to be distributed or used for any other purpose.
Introduction
Cognitive Mapping may be used for a variety of purposes although a "problem" of some
sort usually forms the focus of the work. It is a technique used to structure, analyse and
make sense of accounts of problems. These accounts can be verbal - for example,
presented at an interview, or documentary. Cognitive mapping can be used as a note-
taking method during an interview with the problem owner and provides a useful
interviewing device if used in this way. Alternatively, it can be used to record transcripts
of interviews or other documentary data in a way that promotes analysis, questioning and
understanding of the data.
The technique is founded on George Kelly's theory of personal constructs (Kelly 1955)
and the principles of cognitive mapping indicated below reflect our reading of that theory.
The theory suggests that we make sense of the world in order to predict how, all things
being equal, the world will be in the future, and to decide how we might act or intervene
in order to achieve what we prefer within that world - a predict and control view of
problem solving.
Regardless of the Operational Research technique being applied, being able to understand
the client's perception of the problem is vital to the success of an OR intervention.
Cognitive mapping, by producing a representation of how the client thinks about a particular issue or situation, can thus act as a valuable technique for helping Operational Researchers. The technique's ability to help structure, organise and analyse data enables both the client and the analyst together to begin to negotiate a suitable direction forward.
Whilst being an integral part of the SODA methodology (Eden 1990), creating a map of the perceptions of the client or group may act as a precursor to other forms of analysis with great effect. As mentioned above, Cognitive Mapping allows users to structure accounts of problems. As such it may provide valuable clues as to the client's perceptions of the problem, giving an indication as to where the "nub(s)" of the issue may lie. Aims and objectives can be identified and explored, and options examined to see which are the most beneficial and whether more detailed ones need to be considered. Dilemmas, feedback loops and conflicts can be quickly distinguished, explored and worked upon. Moreover, it may increase the user's understanding of the issue through the necessity of questioning how the chains of argument fit together and determining where isolated chunks of data fit in. Finally, it may act as a cathartic medium for interviewees who, through the process of explaining the ideas and how they fit together, begin to gain a better understanding of the issue.
The technique has thus been seen as aiding the interview process. Through capturing the
chains of argument and linking them together, insights into the nature of the issues are
acquired. In addition the technique can help set an agenda for the interview. If an idea is
isolated - either due to a change in direction of the discussion or the mapper missing the
clue - it can be easily identified and act as a prompt for further questions. This is a clear
benefit over linear notes which can often contain ideas with little in the way of
explanation as to why they were raised. Maps may also act as prompts when attempting to
capture individual or organisational aims or objectives. Using the technique in a group
environment the facility to "weave" together with the ideas and views captured in the
individuals' maps acts as a powerful device. Members are able to see their ideas, in the
context of others, thus increasing their understanding of others' points of view, alternative
options and belief systems. Producing these "group" maps so that the ideas are presented
anonymously can help the group judge ideas on merit alone rather than their proponent's
status or charisma. Furthermore the increased understanding from a group map provides
the group with an emerging common language through which they can begin to agree a
view of the problem.
Interviews using cognitive mapping have often been used to facilitate data collection, especially for messy problems around internal issues - for example, dealing with the management of a central service. By informing group members
that the map will be treated in confidence and all the data kept anonymous, members feel
more able to raise those issues they feel are important without having to consider the
political ramifications to the same extent. Maps have also been used for strategy
development (Eden and Ackermann 1992) and individual problem solving (Eden 1991).
Both these benefit from being able to structure the data, analyse the maps and provide
potential action plans. Although maps may appear strange initially to the interviewee,
"ownership" of the data helps considerably in understanding the map format. Where
cognitive mapping has been used for document analysis (Cropper et al 1990, Ackermann
et al 1990) it has been possible to identify emergent key issues, check for possible loops
and explore the structure and thus test coherency.
Cognitive Mapping, like any skill (for example riding a bicycle), takes time to learn, and first attempts are often time consuming, messy and discouraging. From observations gained from teaching cognitive mapping to a large number of people, certain traits have become evident, and some of these will be discussed below to (hopefully) encourage future mappers.
Firstly, the mapper often finds it difficult both to listen to and understand what he/she is being told whilst trying to remember all the guidelines for creating maps. The result usually is either giving up on the map and making straightforward notes, or missing important points of view. One suggestion often made to counteract this is to try practising mapping
in environments where the outcome is not important - for example an informal meeting
between colleagues. This will give valuable experience whilst keeping it in a low risk
environment. Another trait is the length of time taken to create a map - this is especially
the case if the mapper is practising from documents - which may lead to discouragement.
This is also normal and is reduced with practice, although some of the time is spent on getting a better understanding of the issue. Controlling the speed of verbal information
rather than being overwhelmed is also important. Feeding back the ideas to the
client/interviewee not only provides the mapper with a chance to catch up with the map
but also acts as a useful validating process to ensure that the views have been captured
correctly.
Another difficulty experienced by novice mappers is that attempting to write down everything said by the interviewee verbatim leads to several appearances of a single idea. This happens when the interviewee, in elaborating an issue, mentions the issue itself several times (sometimes with slightly changed wording), which is then duly recorded. Rather
than writing the issue down several times - a time consuming activity - it is more
beneficial to link all the strands into the one issue. This provides the mapper with a map
in which a quick look will identify those areas which have been elaborated - key points -
useful especially when summarising. Care nevertheless needs to be taken to ensure that
the interviewee is talking about the same issue. Paper management is also important.
Whilst it is recommended to place concepts into rectangular blocks of text and to start mapping two-thirds of the way up the page (see guideline 12), trying to keep the map on one piece of paper and maintaining a single direction of writing are also important. Keeping the map
on one sheet of paper allows all the cross links to be drawn rather than having to try to
move between different pages - again time consuming - and increasing the chance of
missing things. If the mapper writes in fairly large script, although A4 paper is
recommended (it is easier to manage), using a sheet of A3 rather than two A4 sheets may
well facilitate the mapping process. Writing in the same direction, whilst sounding trivial,
is often eschewed as mappers squeeze ideas into the map writing at all angles. Whilst this
might facilitate getting all the data onto one page, reading the ideas becomes difficult if
not impossible.
We often suggest that the map be shared with the interviewee. This is often met with
incredulity as the maps, especially at the beginning, are messy and complex. However, it
has been noticed that where the data belongs to the client/interviewee then the "mess"
becomes immaterial as soon as the structure is explained. From being able to share, not
only is the map validated and elaborated, but also the client/consultant relationship is
enhanced as the map becomes a shared document rather than a set of notes belonging to
the mapper. As such sending a copy of the map generated from a meeting/interview has
often provided the stimulus for further work and is used quite explicitly by some
consultants as a means for getting a foot in the door.
How to Map
The technique works by applying the disciplines set out in the guidelines below.
These guidelines, however, are not a recipe which will allow any user to produce the 'right' model of any given account of a problem. There is no definitive map of an account. Models
of an account of a problem produced by different users will differ according to the
interpretation of the data made by each individual user. Mapping is in this sense an
inexact science. Cognitive mapping and the guidelines set out below merely form a prop
to individuals' interpretations of the data they have available. Nevertheless it provides a
powerful way of thinking about, representing and asking questions of an account.
"We need to decide on our accommodation arrangements for the York and Humberside
region. We could centralise our service at Leeds or open local offices in various parts of
the region. The level of service we might be able to provide could well be improved by
local representation but we guess that administration costs would be higher and, in this
case, it seems likely that running costs will be the most important factor in our decision.
The office purchase costs in Hull and Sheffield might however be lower than in Leeds.
Additionally we need to ensure uniformity in the treatment of clients in the region and
this might be impaired by too much decentralization. However we are not sure how great this risk is in this case; experience of local offices in Plymouth, Taunton and Bath in the
South East may have something to teach us. Moreover current management initiatives
point us in the direction of greater delegation of authority."
The above information could have been the basis of a verbal discussion or part of a
written note. In this case it is an extract from an interview. In interviews we build up a
map as we go and the emerging map is an integral part of our interviewing technique. In
the case of documents it is generally helpful to first read through the entire text before
starting to map in order to gain an overall understanding of the issue. There are two
important acts of analysis to which the mapper should attend in starting work on
interview or documentary data. Firstly, mapping works by representing the problem or
issue of concern as a series of short phrases, each of these essentially a single idea, or
concept. The first act, then, is to separate the sentences into distinct phrases.
Guideline 1
Separate the sentences into distinct phrases. These phrases are likely to be no more than
about 10-12 words long. Secondly, mapping is most effective when the mapper has a way
of sorting the concepts into types of concept - we use the idea of layers in a hierarchy to
sort concepts. The layers we often use are simply Goals (at the top), Strategic Directions,
Potential Options.
Guideline 2
Build up the hierarchy and get the structure of the model right, by placing the goals (often motherhood-and-apple-pie type statements, e.g. increase profit and growth) at the top of the map and supporting these first with concepts indicating strategic direction and further down with potential options.
In practice, this is perhaps best achieved by identifying, first of all, those concepts which
you feel are "goals" (often inevitably broad statements of intent, all encompassing, or
'motherhood and apple-pie') at the "top of the hierarchy". Goals are those things that are
regarded as 'good things per se' by the problem owner, that is they are not seen as an
option. Goals are often spotted from the non-verbal language of the problem owner, such as intonation and body language - the problem owner becomes more emphatic and expresses them as if there were no discussion about their 'goodness'. They are thus often relatively difficult to
spot when working with text. Goals are useful to the mapper because they act as points of
integration and differentiation. These points of reference provide structure to the cognitive
map as it is built up. It is important to note that although these are of primary interest they
are not usually mentioned in the beginning of the discussion/text and therefore need to be
identified and extracted further on.
Guideline 3
Watch out for goals. These will end up at the 'top' of the map - the most superordinate
concepts. It can help to mark them as goals when writing them down. In the above text it
seems likely that "improved level of service" and something other than "higher
administration costs" are goals.
Supporting these goals are usually a set of key, often "strategic" directions, which usually
relate to one or more of the goals [see diagram 1]. The example text is very short and it is
therefore difficult to distinguish strategic directions from options with confidence. It is
likely that the location of the offices ("centralise service at Leeds" rather than "open local
offices") is not a goal but a direction that has strategic implications and would need to be
supported by a portfolio of options [see diagram 1].
Diagram 1: An example of the structure of a Cognitive Map
Guideline 4
Watch out for potential "strategic directions" by noting those concepts that have some or
all of the following characteristics: long term implications, high cost, irreversible, need a
portfolio of actions to make them happen, may require a change in culture. They often
form a flat hierarchy themselves but will be linked to Goals (above) and Potential Options
(below).
Beneath these key issues are portfolios of potential "options" which explain (and thus
suggest potential solutions to) the key issues to which they are linked. Further 'down' the
map these options will become increasingly detailed. The links joining the concepts
together should be interpreted as 'may lead to', or 'may imply'. One concept will thus
explain how another might come about or be a possible means of achieving an option. For
example, "night" may lead to "put lights on"; or "press switch" might lead to "put lights
on".
Below, we have marked the example text following what we have found to be a useful
initial coding technique. Slashes show where the text has been broken into its constituent
concepts - a single slash / indicates the start of a phrase; a double slash // indicates the end
of a phrase. Possible goals are italicised and marked with a "G".
"We need to decide on our accommodation arrangements for the York and Humberside
region. We could /centralise our service at Leeds// or /open local offices// in various parts
of the region. The /level of service we might be able to provide could well be improved
G// by /local representation// but we guess that /administration costs would be higher//
and, in this case, it seems likely that /running costs// will be the most important factor in
our decision. The office /purchase costs in Hull and Sheffield// might however be lower
than in /Leeds//. Additionally we need to /ensure uniformity in the treatment of clients// in
the region and this might be /impaired// by /too much decentralization//. However we are
not sure how great this /risk// is in this case; /experience of local offices in Plymouth,
Taunton and Bath in the South East may have something to teach us//. Moreover /current
management initiatives// point us in the direction of /greater delegation of authority//."
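For readers who want to experiment, the slash convention above is mechanical enough to parse automatically. A hedged Python sketch (a simplification of the authors' manual technique; the goal marker is assumed to be a trailing "G" just before the closing slashes):

import re

def extract_concepts(marked):
    """Return (phrase, is_goal) pairs from slash-marked text."""
    concepts = []
    for phrase in re.findall(r"/([^/]+)//", marked):
        phrase = phrase.strip()
        is_goal = phrase.endswith(" G")
        if is_goal:
            phrase = phrase[:-2].strip()
        concepts.append((phrase, is_goal))
    return concepts

sample = "We could /centralise our service at Leeds// or /open local offices//"
print(extract_concepts(sample))
# [('centralise our service at Leeds', False), ('open local offices', False)]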
The translation of an account like this into a map is not always totally straightforward. We
talk and write in ways which need to be "interpreted" into map form. For example,
concepts will often be adaptations of the phrases we initially identify. This translation is a
central part of the technique. There are guidelines which explain this translation process
below. But the following list of concepts illustrates how we have tackled the example text.

[List of concepts]
We are now in a position to start developing a map. Firstly, though, it is worth explaining
some aspects of our coding into phrases.
Although the first part of the text mentions the need to decide on the accommodation arrangements for the York and Humberside region, we have taken this to be the 'title' of the
discussion and therefore have not included it in the map itself. It might be used explicitly
as the title of the map. Or it might conceivably be a part of another map focusing on
issues to do with the process of deciding. We have concentrated in this map on the
considerations that are being brought to bear in this decision since that is what this
snippet of text contains.
Further on in the text it becomes apparent that there is an alternative to "centralise service
at Leeds" which is "open local offices in various parts of the region". Mapping is
designed to highlight such contrasts since they can represent significant choices. The
concepts which form the basis of a map are bi-polar - that is, they will contain two
phrases, each phrase forming a "pole". The contrasting pole is not always mentioned
immediately alongside the first pole but may emerge later on in the discussion. The
convention of a series of three dots (an ellipsis) is used to represent the phrase "rather
than" and to separate one pole from the other.
With the choice here seemingly about local offices rather than centralisation, this is represented as the bipolar concept:

centralise service at Leeds ... open local offices
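To show how this bipolar convention might be held in a simple data structure, here is a sketch of ours (the class name and fields are hypothetical, not Decision Explorer's own); the second pole is optional because it may only emerge later in the discussion.

from dataclasses import dataclass
from typing import Optional

# Sketch of a bipolar concept: two poles joined by "...", which is read
# as "rather than". The contrasting pole may be absent at first.
@dataclass
class Concept:
    first_pole: str
    contrast_pole: Optional[str] = None

    def __str__(self) -> str:
        if self.contrast_pole:
            return f"{self.first_pole} ... {self.contrast_pole}"
        return self.first_pole

print(Concept("centralise service at Leeds", "open local offices"))
# -> centralise service at Leeds ... open local offices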
Guideline 5
Look for opposite poles. These clarify the meaning of concepts. Contrasting poles may be
added to the concept later on in the interview when they are mentioned. In cases where
the meaning of a concept is not immediately obvious, try asking the problem owner for
the opposite pole. Alternatively put the word 'not' in front of the proffered pole. In
interviews we ask the question "rather than" - doing so often suggests the more likely
psychological contrast implied by the problem owner.
The first concept "Centralise the service at Leeds rather than open local offices" has been
captured in an imperative form. Mapping is intended to impose an "action oriented
perspective" onto the concept thus making it something that the problem owner can go
and do as a possible part of the solution to the problem. The meaning of a concept stems in part from the actions it suggests you should take. In principle, every concept can be thought of as an option - something which could be done or which suggests action.
Thinking about a problem in this manner therefore invokes a more dynamic
representation. This dynamism is achieved by placing the verb at the beginning of the
concept, for example "ensure ...", "provide ...", "increase ...". However, where no action is
specifically mentioned or the concept is lengthy it is possible to make the concept
actionable without the verb; the action is simply implied. The action orientation of mapping is thus as much a way of interpreting, first, the raw data and, second, the map itself, as it is an explicit modelling technique to be followed with care. An example of this is in
the case of "local representation". Although the concept does not have a verb attached to
it, it is easy to understand that moving to local offices may lead to improved level of
service.
Guideline 6
Add meaning to concepts by placing the concepts in the imperative form and where
possible including actors and actions. Through this action perspective the model becomes
more dynamic.
Guideline 7
Retain ownership by not abbreviating but rather keeping the words and phrases used by
the problem owner. In addition identify the name of the actor(s) whom the problem owner states are implicated and incorporate them into the concept text.
Having captured the first concept in a bipolar manner and using an action orientation, let's develop the map further. We will start
now to make links between concepts to form chains of reasoning about the problem.
The next sentence in the example mentions that the level of service could be improved by
local representation ... There seems to be an implication that "centralise service" rather
than "open local offices" is one possible way of achieving "[not] local representation".
"Local representation" is seen as the desired outcome, and "open local offices" as a
possible option to achieve it. This can therefore be mapped as:

[Diagram]
Guideline 8
Identify the option and outcome within each pair of concepts. This provides the direction
of the arrow linking concepts. Alternatively think of the concepts as a 'means' leading to a 'desired end'. Note that each concept can therefore be seen as an option leading to the superordinate concept, which in turn is the desired outcome of the subordinate concept.
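To make the direction convention concrete, links can be recorded as (means, end, sign) triples, with the arrow always running from option to outcome. This is our own sketch with paraphrased concept texts; the negative link shown last is discussed further below.

# Sketch: each link runs from the option (means) to the outcome (end);
# sign = -1 records a negative link, meaning the means leads to the
# opposite pole of the outcome.
links = [
    ("open local offices", "local representation", +1),
    ("local representation", "improved level of service", +1),
    ("open local offices", "higher administration costs", +1),
    ("use experience of local offices", "lack of understanding about risk", -1),
]

for means, outcome, sign in links:
    arrow = "-->" if sign > 0 else "--(-)-->"
    print(means, arrow, outcome)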
When coding, it is important to try to avoid phrases beginning with 'need...', 'ought...',
'must...' or 'want...'. These cause problems later, when deciding which of a pair of
concepts is the means and which is the end. For example, "we need a new computer to
sort production problems" might suggest that buying a new computer is a goal, whereas it
is more likely to be one option for sorting out production problems. Rather than use these words, which
imply "desires", write concepts as actions.
Moving further on with the example, it continues: "administration costs would be higher". Because the text admits that "running costs" will be the most important factor, it is possible to infer that one of the problem owner's goals is to reduce, or keep low, administration costs, since running costs contribute directly to administration costs. Opening local offices may lead to higher administration costs, as seen below:

[Diagram]
Guideline 9
Ensure that a generic concept is superordinate to the specific items that contribute to it. A generic concept is one for which there may be more than one specific means of achieving it. This follows Guideline 8 and helps ensure a consistent approach in ordering the data into a hierarchy.
Next it is mentioned that office purchase costs in Hull and Sheffield may be lower, supporting the move to "open local offices". This provides the problem owner with an
explanation whereas both "local representation" and "higher administration costs" were
consequences of "open local offices". Again it is possible to add to this new concept the
contrasting pole of "higher cost in Leeds".
A new topic is now introduced into the text: uniformity of treatment. The text suggests that this may be impaired through decentralisation; however, there is little understanding of the risk that may be encountered, although the problem owner could learn from the experience in the South West. This is mapped as:

[Diagram]
The link between "use experience of local offices" and "lack of understanding about risk"
is a negative link. It suggests that "use experience of local offices" leads to [not] "lack
of understanding about risk". As with the concept "open local offices" the concept
concerning "uniformity" is bipolar as both poles have been stated in the text. Keep the
first-stated phrase as the first pole of a concept even if this means that some negative links
are required. It is sometimes interesting and informative to look at the list of concepts one
pole at a time. We sometimes distinguish between two types of person - those for whom
the world is "tumbling down" and those who have a clear vision of where they want to be.
Those upon whom the world is "tumbling down" usually describe their problems in a negative manner, mentioning aspects such as overwork; an example from the text might be "too much decentralisation". Those having a clear vision of where they want to be tend to express their ideas in a more positive form, for example "improve level of service". A look
at the first poles of the concepts can help to identify whether the person to whom the map
belongs is of one type or the other.
Guideline 10
It is generally helpful to code the first pole as that which the problem owner sees as the
primary idea (usually this is the idea first stated). The first pole of a concept is the one
that tends to stand out when reading a map. A consequence of this is that links may be
negative even though it would be possible to transpose the two poles in order to keep
links positive.
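This one-pole-at-a-time reading is easy to mimic on a concept list. In the sketch below (ours, with concept texts paraphrased from the example) only the first poles are printed:

# Sketch: list only the first poles to get a quick sense of whether the
# problem owner frames issues negatively or positively.
concepts = [
    ("centralise service at Leeds", "open local offices"),
    ("higher administration costs", None),
    ("uniformity of treatment impaired", "ensure uniformity"),
]

for first_pole, _ in concepts:
    print(first_pole)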
Finally, the last sentence concerns current management initiatives pointing in the direction of greater delegation of authority. Although the text states that "following current management
initiatives" may lead to "greater delegation of authority" it does not suggest why the
problem owner may want this delegation. The two concepts would therefore become an
island, a group of isolated concepts linked together. This island could be further explored
in an interview and cross-linked into the rest of the map where appropriate. In the case of documentation, however, the mapper has to make some judgement as to why this statement has been made; in this case it has been interpreted as an explanation for
"open local offices". The map is now complete and looks like this:
[Diagram 1]
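Islands can also be found mechanically before tidying up. The sketch below is our illustration with paraphrased link data: treating links as undirected and collecting connected components shows the management-initiatives pair starting out as an island, before the mapper's judgement cross-links it into the map.

from collections import defaultdict

# Sketch: find "islands" - groups of concepts linked to each other but
# not to the rest of the map - via undirected connected components.
links = [
    ("open local offices", "local representation"),
    ("local representation", "improved level of service"),
    ("follow current management initiatives",
     "greater delegation of authority"),
]

neighbours = defaultdict(set)
for a, b in links:
    neighbours[a].add(b)
    neighbours[b].add(a)

seen, components = set(), []
for node in list(neighbours):
    if node in seen:
        continue
    stack, comp = [node], set()
    while stack:
        n = stack.pop()
        if n not in comp:
            comp.add(n)
            stack.extend(neighbours[n] - comp)
    seen |= comp
    components.append(comp)

# More than one component means the map contains an island worth probing.
for comp in components:
    print(sorted(comp))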
Guideline 11
Tidying up can provide a better, more complete understanding of the problem. But ensure
that you ask why isolated concepts are not linked in to the main part of the map - often
their isolation is an important clue to the problem owner's thinking about the issues
involved.
Guideline 12
Start mapping about two thirds of the way up the paper in the middle and try to keep
concepts in small rectangles of text rather than as continuous lines of text. If possible, ensure the entire map is on one A4 sheet of paper so that it is easy to cross-link things (30-40 concepts can usually be fitted onto a page). Pencils are usually best for mapping, and soft, fairly fine (eg 0.5mm) propelling pencils are ideal.
There is a simple guide to mapping with Decision Explorer in the User Guide.
Summary
This is the summary of the mapping guide. Full details are in the Main paper.
Guideline 1
Separate the sentences into distinct phrases. These phrases are likely to be no more than
about 10-12 words long.
Guideline 2
Build up the hierarchy and get the structure of the model right by placing the goals (often motherhood-and-apple-pie statements, eg increase profit and growth) at the top of the map, supporting these first with strategic-direction-type concepts and further on with potential options.
Guideline 3
Watch out for goals. These will end up at the 'top' of the map - the most superordinate
concepts. It can help to mark them as goals when writing them down.
Guideline 4
Watch out for potential "strategic directions" by noting those concepts that have some or all
of the following characteristics: long term implications, high cost, irreversible, need a
portfolio of actions to make them happen, may require a change in culture. They often
form a flat hierarchy themselves but will be linked to Goals (above) and Potential Options
(below).
Guideline 5
Look for opposite poles. These clarify the meaning of concepts. Contrasting poles may be
added to the concept later on in the interview when they are mentioned. In cases where
the meaning of a concept is not immediately obvious, try asking the problem owner for
the opposite pole. Alternatively put the word 'not' in front of the proffered pole. In
interviews we ask the question "rather than" - doing so often suggests the more likely
psychological contrast implied by the problem owner.
Guideline 6
Add meaning to concepts by placing the concepts in the imperative form and where
possible including actors and actions. Through this action perspective the model becomes
more dynamic.
Guideline 7
Retain ownership by not abbreviating but rather keeping the words and phrases used by
the problem owner. In addition identify the name of the actor(s) who the problem owner
states are implicated and incorporate them into the concept text.
Guideline 8
Identify the option and outcome within each pair of concepts. This provides the direction
of the arrow linking concepts. Alternatively think of the concepts as a 'means' leading to a 'desired end'. Note that each concept can therefore be seen as an option leading to the superordinate concept, which in turn is the desired outcome of the subordinate concept.
Guideline 9
Ensure that a generic concept is superordinate to the specific items that contribute to it. A generic concept is one for which there may be more than one specific means of achieving it. This follows Guideline 8 and helps ensure a consistent approach to building the data into a hierarchy.
Guideline 10
It is generally helpful to code the first pole as that which the problem owner sees as the
primary idea (usually this is the idea first stated). The first pole of a concept tends to stand
out on reading a map. A consequence is that links may be negative even though it would
be possible to transpose the two poles in order to keep links positive.
Guideline 11
Tidying up can provide a better, more complete understanding of the problem. But ensure
that you ask why isolated concepts are not linked in - often their isolation is an important
clue to the problem owner's thinking about the issues involved.
Guideline 12
Practical Tips for Mappers. Start mapping about two thirds of the way up the paper in the
middle and try to keep concepts in small rectangles of text rather than as continuous lines
of text. If possible, ensure the entire map is on one A4 sheet of paper so that it is easy to cross-link things (30-40 concepts can usually be fitted onto a page). Pencils are usually best for mapping, and soft, fairly fine (eg 0.5mm) propelling pencils are ideal.
"the guidelines are not a recipe which will which will allow any user to produce the 'right'
model of any given account of problem. There is no definitive map of an account. Models
of an account of a problem produced by different users will differ according to the
interpretation of the data made by each individual user. Mapping is in this sense an
inexact science. Cognitive mapping and the guidelines set out below merely form a prop
to individuals' interpretations of the data they have available. Nevertheless it provides a
powerful way of thinking about, representing and asking questions of an account."