BURNS Interviews and Questionnaires

Interviews are a common data collection method in qualitative and mixed methods research, allowing researchers to gather detailed information through verbal communication. Structured interviews involve predetermined questions and a specific order, while the design of interview questions should progress from broad to specific, ensuring clarity and comfort for participants. Although interviews can yield rich data and higher response rates, they are time-consuming and may introduce biases, necessitating careful planning, training, and pilot testing.

Interviews in research

Interviews involve verbal communication during which the subject
provides information to the researcher. Although this data collection
strategy is used most commonly in qualitative, mixed methods, and
descriptive studies (Creswell & Clark, 2018; Creswell & Creswell,
2018), it is also used in other types of studies. The various
approaches for conducting interviews range from unstructured
interviews in which study participants are asked broad questions
(see Chapter 12) to interviews in which the participants respond to a
questionnaire, selecting from a set of specific responses. Although
most interviews are conducted face to face or by telephone,
computer-based interviews are also commonly used (Streiner,
Norman, & Cairney, 2015).
Using the interview method for measurement requires detailed
work with a scientific approach. Excellent books are available on the
techniques of developing interview questions (Dillman, Smyth, &
Christian, 2014; Streiner et al., 2015). If you plan to use this strategy,
consult a text on interview methodology before designing your
instrument. Because nurses frequently use interview techniques in
nursing assessment, the dynamics of interviewing are familiar;
however, using this technique for measurement in research requires
greater sophistication (Waltz et al., 2017).

Structured interviews
Structured interviews are verbal interactions with subjects that
allow the researcher to exercise increasing amounts of control over
the content of the interview for the purpose of obtaining essential
data. The researcher designs the questions before data collection
begins, and the order of the questions is specified. In some cases, the
interviewer is allowed to explain the meaning of the question further
or modify the way in which the question is asked so that the subject
can understand it better. In more structured interviews, the
interviewer is required to ask each question precisely as it has been
designed (Waltz et al., 2017). If the study participant does not
understand the question, the interviewer can only repeat it. The
participant may be limited to a range of responses previously
developed by the researcher, similar to those in a questionnaire
(Dillman et al., 2014). For example, the interviewer may ask
participants to select from the responses “weak,” “average,” or
“strong” in describing their functional level. If the possible responses
are lengthy or complex, they may be printed on a card so that study
participants can review them visually before selecting a response.

Designing interview questions


The process for developing and sequencing interview questions
progresses from broad and general to narrow and specific. Questions
are grouped by topic, with fairly safe topics being addressed first
and sensitive topics reserved until late in the interview process to
make participants feel more comfortable in responding.
Demographic information, such as age, ethnicity/race, and
educational level, usually is collected last. These data are best
obtained from other sources, such as the patients’ EHRs, to allow
more time for the primary interview questions. The wording of
questions in an interview is crafted toward the minimum expected
educational level of the subjects, with the questions examined for
readability level (see Chapter 16). Study participants may interpret
the wording of certain questions in a variety of ways, and
researchers must anticipate this possibility. After the interview
protocol has been developed, it is wise to seek feedback from an
expert on interview technique and from a content expert.
Interviewing children requires a special understanding of the art
of asking them questions based on their age. The interviewer must
use words that children tend to use to define situations and events.
Interviewers also must be familiar with the language skills that exist
at different stages of development. Children view topics differently
than adults do, and their perception of time and the concepts of past
and present are also different.

Pilot-testing the interview protocol


After the research team has satisfactorily developed the interview
protocol, team members need to pretest or pilot-test it on subjects
similar to the individuals who will be included in their study. Pilot
testing allows the research team to identify problems in the design of
questions, sequencing of questions, and procedure for recording
responses. The time required for the informed consent and
interviewing processes also needs to be determined. Pilot testing
provides an opportunity to assess the reliability and validity of the
interview instrument (Waltz et al., 2017).

Training interviewers
Skilled interviewing requires practice, and interviewers must be
familiar with the content of the interview. They need to anticipate
situations that might occur during the interview and develop
strategies for dealing with them. One of the most effective methods
of developing a polished approach is role playing. Playing the role of
the subject can give the interviewer insight into the experience and
facilitate an effective response to unscripted situations.
The interviewer should establish a permissive atmosphere in
which the study participant is encouraged to respond to sensitive
topics. He or she also must develop an unbiased verbal and
nonverbal manner. The wording of a question, the tone of voice, a
raised eyebrow, or a shifting body position can communicate a
positive or negative reaction to the subject’s responses—either of
which can alter subsequent data (Dillman et al., 2014).
Interviews are conducted using a specific protocol to ensure that
essential data are collected in a consistent way. The interviewers
must be trained to ensure consistency or interrater reliability among
them in the implementation of the interview protocol. Strong
interrater reliability values (greater than 0.80, or 80%) indicate
consistency in the data collected (see Chapter 16) (Bandalos, 2018;
Kazdin, 2017).
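The interrater reliability threshold mentioned above can be checked by comparing two raters' codes for the same set of responses. The following is a minimal Python sketch with invented ratings; percent agreement and Cohen's kappa are standard agreement indices, but the data and category labels here are illustrative only.

```python
# Hypothetical codes assigned by two trained interviewers to the same
# 10 responses; the ratings are invented for illustration.
rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes", "yes", "no"]
rater_b = ["yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes", "no"]

def percent_agreement(a, b):
    """Proportion of items on which the two raters gave the same code."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance (Cohen's kappa)."""
    n = len(a)
    categories = set(a) | set(b)
    p_observed = percent_agreement(a, b)
    # Chance agreement: sum over categories of the product of each
    # rater's marginal proportion for that category.
    p_chance = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

print(percent_agreement(rater_a, rater_b))  # 0.9
print(cohens_kappa(rater_a, rater_b))
```

With these invented ratings, kappa works out to 0.80, exactly at the threshold cited above; in practice a value at or above 0.80 would support consistent implementation of the interview protocol.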

Preparing for an interview


If you are serving as an interviewer in person, on the telephone, or
by real-time computer communication, you need to make an
appointment. For face-to-face interviews, choose a site for the
interview that is quiet, private, and provides a pleasant
environment. Before the appointment, carefully plan and develop a
script for the instructions you will give the subject. For example, you
might say, “I am going to ask you a series of questions about....
Before you answer each question you need to.... Select your answer
from the following..., and then you may elaborate on your response.
I will record your answer and then, if it is not clear, I may ask you to
further explain some aspects.”

Probing
Interviewers use probing to obtain more information in a specific
area of the interview. In some cases, you may have to repeat a
question. If your subject answers, “I don’t know,” you may have to
press for a response. In other situations, you may have to explain the
question further or ask the participant to explain statements that he
or she has made. At a deeper level, you may pick up on a comment
the participant made and begin asking questions to increase your
understanding of what he or she meant. Probes should be neutral to
avoid biasing participants’ responses (Doody & Noonan, 2013).

Recording interview data


Data obtained from interviews are recorded, either during the
interview or immediately afterward. The recording may be in the
form of handwritten notes, video recordings, or audio recordings.
With a structured interview, often an interview form is developed
and then researchers can record responses directly on the form
(Dillman et al., 2014). Data must be recorded without distracting the
interviewee. Some interviewees have difficulty responding if it is
obvious that the interviewer is taking notes or recording the
conversation. In such a case, the interviewer may need to record data
after completing the interview. If you wish to record the interview,
you first must obtain IRB approval and then obtain the participant’s
permission.

Advantages and disadvantages of interviews


Interviewing is a flexible technique that can allow researchers to
explore greater depth of meaning than they can obtain with other
techniques. Interviewing allows you to use your interpersonal skills
to encourage your participants’ cooperation and elicit more
information. The response rate to interviews is higher than the
response rate to questionnaires; thus, collecting data by interview
instead of questionnaire tends to yield a more representative
sample. Interviews allow researchers to collect data from
participants who are unable or unlikely to complete questionnaires,
such as very ill subjects or those whose reading, writing, and ability
to express their thoughts are marginal. Interviews are a form of self-
report, and the researcher must assume that the information
provided is accurate. Interviewing requires much more time than
other self-report methods, such as questionnaires and scales, and it is more costly. Because
of time and cost, sample size usually is limited. Subject bias is
always a threat to the validity of the findings, as is inconsistency in
data collection from one participant to another (Dillman et al., 2014;
Doody & Noonan, 2013; Waltz et al., 2017).
Rieder, Goshin, Sissoko, Kleshchova, and Weierich (2019)
conducted semistructured interviews to examine whether salivary
biomarkers were predictive of “parenting stress in mothers under
community criminal justice supervision” (p. 48). The sample
included 23 women who were mothers to at least one minor child
and were currently in contact with one or more of their children. The
following excerpt describes the interviews conducted in this study.

“Procedure
All study procedures occurred during a single 60–90-minute
session. This session took place in the participant’s room or a
private office at the residential treatment center or in the principal
investigator’s office. Participants were fully informed regarding all
study procedures.... Following consent, the participants participated
in a semi-structured interview and provided saliva samples at three
timepoints, including before and immediately after discussing a
stressful parenting event. At the end of the session, participants
were debriefed and compensated with a gift card to a local store
($20) and an age-appropriate children’s book...
Interview We conducted a semi-structured interview to gather
information on family structure, caregiving history for each of the
women’s children, child welfare history, and maternal criminal
justice history (arrest, incarceration, community supervision), as
well as how the women managed mothering under community
supervision. The interview began with the following parenting
stress reminder question: ‘Sometimes things happen with our
children that are extremely upsetting, things like when a child is
hurt or sick, when a mother has to leave her child and live
somewhere else, or when a child is taken away from his or her
mother. Has anything like this happened with the child you have
the most contact with right now?’ This question also served as a
stressor and allowed us to examine changes in stress system activity
in response to a reminder of parenting stress. Interviews were audio
recorded with consent.” (Rieder et al., 2019, pp. 49−50)

Rieder and colleagues (2019) identified the questions used to
initiate the interviews and the topics covered during their
semistructured interview (family structure, child welfare history,
and maternal criminal justice history). These seemed appropriate
based on the purpose and design of the study (Kazdin, 2017). The
interviews were conducted in private areas, such as the researcher’s
office or areas in the residential treatment center. Adequate time
(60−90 minutes) was allowed for the interviews. However, the
researchers did not identify who conducted the interviews. If more
than one person was involved, what was their training and the
interrater reliability value for the interviewing protocol? The study
participants were treated with respect, as demonstrated by being
fully informed about the study procedures, debriefed at the end of
the study, and compensated with a gift card. Rieder et al. (2019)
found that the biomarkers were predictors of parenting stress in
mothers under criminal justice supervision. They recommended
replication of their study with a larger sample.
Questionnaires
A questionnaire is a written self-report form designed to elicit
information that can be obtained from a subject’s written responses.
Information derived through questionnaires is similar to information
obtained by interview, but the questions tend to have less depth. The
subject is unable to elaborate on responses or ask for questions to be
clarified, and the data collector cannot use probing strategies.
However, questions are presented in a consistent manner, and there
is less opportunity for bias than in an interview.
Questionnaires can be designed to determine facts about the study
participants or persons known by the participants; facts about events
or situations known by the participants; or beliefs, attitudes,
opinions, levels of knowledge, or intentions of the participants.
Questionnaires can be distributed to large samples directly, or
indirectly through the mail or by computer. The design,
development, and administration of questionnaires have been
addressed in many excellent books that focus on survey techniques
(Dillman et al., 2014; Harris, 2014; Saris & Gallhofer, 2014; Waltz et
al., 2017).
Although items on a questionnaire appear easy to design, a well-
designed item requires considerable effort. Similar to interviews,
questionnaires can have varying degrees of structure. Some
questionnaires ask open-ended questions that require written
responses. Others ask closed-ended questions with options selected
by the researcher. Data from open-ended questions are often difficult
to interpret, and content analysis may be used to extract meaning
(Miles, Huberman, & Saldaña, 2020). Open-ended questionnaire
items are not advised if data are obtained from large samples or
quantification of responses is needed.
Researchers frequently use computers to gather questionnaire data
(Harris, 2014; McPeake, Bateson, & O’Neill, 2014). Computers are
made available at the data collection site (such as a clinic or
hospital), the questionnaire is presented on the screen, and subjects
respond by using the keyboard or mouse. Data are stored in a
computer file and are immediately available for analysis. Data entry
errors are greatly reduced. Most researchers e-mail subjects and
direct them to a website where they can complete the questionnaire
online, allowing the data to be stored securely and analyzed
immediately. Thus researchers can keep track of the number of
subjects completing their questionnaire and the evolving results.

Using questionnaires in research


The first step in either selecting or developing a questionnaire is to
identify the information desired. The research team develops a
blueprint or table of specifications for the questionnaire. The
blueprint identifies the essential content to be covered by the
questionnaire; the content must be at the educational level of the
potential subjects. It is difficult to stick to the blueprint when
designing the questionnaire because it is tempting to add just one
more question that seems to be a neat idea or a question that
someone insists really should be included. However, as a
questionnaire lengthens, fewer subjects are willing to respond, and
more questions are left blank.
The second step is to search the literature for questionnaires or
items in questionnaires that match the blueprint criteria. Sometimes
published studies include questionnaires, but frequently you must
contact the authors of a study to request a copy of their
questionnaire and obtain their permission to use it. Researchers are
encouraged to use questions in exactly the same form as in
previous studies so that the questionnaire’s validity can be
examined with new samples. However, questions that are poorly written
need to be modified, even if rewriting makes it more difficult to
compare the validity results of the questionnaire directly with those
from previous studies (Harris, 2014).
In some cases, you may find a questionnaire in the literature that
matches the questionnaire blueprint that you have developed for
your study. However, you may have to add items to or delete items
from the existing questionnaire to accommodate your blueprint. In
some situations, items from two or more questionnaires are
combined to develop an appropriate questionnaire. In all situations,
you must obtain permission to use a questionnaire or the items from
different questionnaires from the authors of these questionnaires
(Saris & Gallhofer, 2014).
An item on a questionnaire has two parts: a question (or stem) and
a response set. Each question must be carefully designed and clearly
expressed (Dillman et al., 2014; Polit & Yang, 2016). Problems
include ambiguous or vague language, leading questions that
influence the response, questions that assume a preexistent state of
affairs, and double questions.
In some cases, respondents interpret terms used in the question in
one way when the researcher intended a different meaning. For
example, the researcher might ask how heavy the traffic is in the
neighborhood in which the family lives. The researcher might be
asking about automobile traffic, but the respondent interprets the
question in relation to drug traffic. The researcher might define
neighborhood as a region composed of a three-block area, whereas
the respondent considers a neighborhood to be a much larger area.
Family could be defined as people living in one house or as all close
blood relations. If a question includes a term that is unfamiliar to the
respondent or for which several meanings are possible, the term
must be defined (Harris, 2014; Waltz et al., 2017).
Leading questions suggest to the respondent the answer the
researcher desires. These types of questions often include value-
laden words and indicate the researcher’s bias. For example, a
researcher might ask, “Hospitals are stressful places to work, aren’t
they?” or “Is being placed on hospice care depressing?” These
examples are extreme, and leading questions are usually constructed
more subtly. The degree of formality and permissive tone with
which the question is expressed, in many cases, are important for
obtaining a true measure. A permissive tone suggests that any of the
possible responses are acceptable. Questions implying a preexisting
state of affairs often lead respondents to admit to a previous
behavior regardless of how they answer. Examples are “How long
has it been since you used drugs?” or, to an adolescent, “Do you use
a condom when you have sex?”
Double questions ask for more than one bit of information: “Do
you like critical care nursing and working closely with physicians?”
It would be possible for the respondent to like working in critical
care settings but dislike working closely with physicians. In this case,
the question would be impossible to answer accurately. A similar
question is, “Was the in-service program educational and
interesting?” Questions with double negatives are often difficult for
study participants to interpret. For example, one might ask, “Do you
believe nurses should not question orders from other healthcare
professionals? Yes or No.” In this case, the wording of this question
can be easily misinterpreted and the word not possibly overlooked.
This situation can lead participants to respond in a way contrary to
how they actually think or feel (Harris, 2014; Saris & Gallhofer,
2014).
Each item in a questionnaire has a response set that provides the
parameters within which the respondent can answer. This response
set can be open and flexible, as it is with open-ended questions, or it
can be narrow and directive, as it is with closed-ended questions
(Polit & Yang, 2016). For example, an open-ended question might
have a response set of three blank lines (Creswell & Poth, 2018). With
closed-ended questions, the response set includes a specific list of
alternatives from which to select.
Response sets can be constructed in various ways. The cardinal
rule is that every possible answer must have a response category. If
the sample includes respondents who might not have an answer, a
response category of “don’t know” or “uncertain” should be
included. If the information sought is factual, include “other” as one
of the possible responses. However, recognize that the item “other”
is essentially lost data. Even if the response is followed by a
statement such as “Please explain,” it is rarely possible to analyze the
data meaningfully. If a large number of study participants (> 10%)
select the alternative “other,” the alternatives included in the
response set might not be appropriate for the population studied
(Dillman et al., 2014; Harris, 2014).
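The 10% rule of thumb for the "other" category described above can be checked with a few lines of code. This is a sketch; the responses below are invented for illustration.

```python
# Flag a response set whose "other" category exceeds 10% of responses,
# suggesting the listed alternatives do not fit the population studied.
# Responses are invented for illustration.
responses = ["full time", "part time", "other", "full time", "other",
             "part time", "full time", "other", "full time", "full time"]

other_rate = responses.count("other") / len(responses)
print(f"'Other' selected by {other_rate:.0%} of respondents")
if other_rate > 0.10:
    print("Revise the response set: alternatives may not fit this population")
```

Here 3 of 10 invented respondents chose "other" (30%), well above the 10% threshold, so the alternatives in the response set would need revision.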
The simplest response set is the dichotomous yes/no option.
Arranging responses vertically preceded by a blank reduces errors.
For example,

____ Yes
____ No
is better than
____ Yes ____ No

because in the latter example, the respondent might not be sure
whether to indicate yes by placing a response before or after the
word.
Response sets must be mutually exclusive, which might not be the
case in the following response set because a respondent might
legitimately need to select two responses.

____ Working full time
____ Full-time graduate student
____ Working part time
____ Part-time graduate student

Mary Cazzell, a pediatric nurse practitioner at Cook Children’s
Medical Center in Fort Worth, Texas, developed the Self-Report
College Student Risk Behavior Questionnaire, an eight-item
questionnaire with a response set of yes and no possible answers.
This questionnaire was developed and refined as part of her
dissertation at The University of Texas at Arlington. Cazzell’s (2010)
questionnaire was developed based on the 87 risk behaviors
identified in a national survey conducted by the US Centers for
Disease Control and Prevention (CDC) on the Youth Risk Behavior
Surveillance System (Brener et al., 2004). Cazzell included the most
commonly identified adolescent risk behaviors from the CDC
survey. Content validity of the questionnaire was developed by
having a doctorally prepared social worker and a pediatric clinical
nurse specialist, both risk behavior experts, evaluate the items. The
content validity index calculated for the questionnaire was 0.88,
supporting the inclusion of these eight items in the questionnaire.
Cazzell (personal communication, 2015) presented her questionnaire
at three national conferences and expanded question 2 on use of
alcohol to target binge drinking (Fig. 17.5).
FIG. 17.5 Self-Report College Student Risk Behavior
Questionnaire. Source: (Adapted from Cazzell, M. [2010]. College
student risk behavior: The implications of religiosity and impulsivity.
Ph.D. dissertation, The University of Texas at Arlington. Proquest
Dissertations & Theses. [Publication No. AAT 3391108.])

A specimen of the Self-Report College Student Risk Behavior
measure shows a unique ID at top right, with a list of eight
statements, each marked by a checkbox for yes or no, as follows:

1. I smoked a cigarette (even a puff).

2. I drank alcohol (even one drink). If you answered yes to question 2:

a. If you are a female, did you have 4 or more drinks on one occasion?
b. If you are a male, did you have 5 or more drinks on one occasion?

3. I used an illegal drug (even once).

4. I had sexual intercourse without a condom.

5. I rode in a car without wearing my seatbelt (even once).

6. I drove a car without wearing my seatbelt (even once).

7. I rode in a car with a person driving under the influence (even once).

8. I drove a car while under the influence (even once).
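A content validity index (CVI) like the 0.88 reported for Cazzell's questionnaire can be computed from expert relevance ratings. The sketch below uses invented ratings from two hypothetical experts, not Cazzell's actual data, and assumes the common convention of counting ratings of 3 or 4 on a 4-point relevance scale as "relevant."

```python
# Invented relevance ratings (1 = not relevant ... 4 = highly relevant)
# from two hypothetical experts for eight questionnaire items.
ratings = {  # item number -> (expert 1, expert 2)
    1: (4, 4), 2: (3, 4), 3: (4, 4), 4: (4, 3),
    5: (2, 4), 6: (4, 4), 7: (3, 3), 8: (4, 2),
}

def item_cvi(scores):
    """Proportion of experts rating the item relevant (3 or 4)."""
    return sum(1 for s in scores if s >= 3) / len(scores)

item_cvis = {item: item_cvi(scores) for item, scores in ratings.items()}

# Scale-level CVI (averaging method): mean of the item-level CVIs.
scale_cvi = sum(item_cvis.values()) / len(item_cvis)
print(scale_cvi)
```

With these invented ratings the scale-level CVI is 0.875; values near or above that level are generally taken to support retaining the items, as in Cazzell's questionnaire.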

Questionnaire instructions should be pilot-tested on naïve subjects
who are willing and able to express their reactions to the
instructions. Each question should clearly instruct the subject how to
respond (i.e., “Choose one,” “Mark all that apply”), or instructions
should be included at the beginning of the questionnaire. The subject
must know whether to circle, underline, or fill in a circle as he or she
responds to items. Clear instructions are difficult to construct and
usually require several attempts. Cazzell (2010) provided clear
directions and an example of how to complete her questionnaire and
directed the students to report their participation in these risk
behaviors over the past 30 days (see Fig. 17.5).
After the questionnaire items have been developed, you need to
plan carefully how they will be ordered. Questions related to a
specific topic must be grouped together. General items are included
first, with progression to more specific items. More important items
might be included first, with subsequent progression to items of
lesser importance. Questions of a sensitive nature or questions that
might be threatening should appear near the end of the
questionnaire. In some cases, the response to one item may influence
the response to another. If so, the order of such items must be
carefully considered. The general trend is to ask for demographic
data about the subject at the end of the questionnaire (Dillman et al.,
2014; Waltz et al., 2017).
An introductory page for computer-based questionnaires or a
cover letter for a mailed questionnaire is needed to explain the
purpose of the study and identify the researchers, the approximate
amount of time required to complete the form, and organizations or
institutions supporting the study. Because researchers indicate that
completion of the questionnaire implies informed consent,
researchers need to obtain a waiver of a signed consent from the IRB.
Returning mailed questionnaires is much more complex. The
instructions need to include an address to which the questionnaire
can be returned. This address must be at the end of the questionnaire
and on the cover letter and envelope. Respondents often discard
both the envelope and the cover letter and do not know where to
send the questionnaire after completing it. It is also wise to provide a
stamped, addressed envelope for the subject to return the
questionnaire. If possible, the best way to provide questionnaires to
potential subjects is by e-mailing a Web address so that participants
can easily complete the questionnaire at their leisure, and their
responses are automatically submitted at the end of the
questionnaire. Sending questionnaires by e-mail has many
advantages, but one disadvantage is being able to access only
individuals with e-mail. Researchers need to determine whether the
population they are studying has e-mail access and, if they have e-
mail, whether the addresses are available to the researchers. Another
disadvantage for both mailed and e-mailed questionnaires is not
being able to verify that the person who completes the questionnaire
is the person to whom it was sent (Dillman et al., 2014).
Your questionnaire must be pilot-tested to determine clarity of
questions, effectiveness of instructions, completeness of response
sets, time required to complete the questionnaire, and success of
data collection techniques. As with any pilot test, the subjects and
techniques must be as similar as possible to those planned for the
main study. In some cases, the open-ended questions are included in
a pilot test to obtain information for the development of closed-
ended response sets for the main study.

Questionnaire validity
One of the greatest risks in developing response sets is leaving out
an important alternative or response. For example, if the
questionnaire item addressed the job position of nurses working in a
hospital and the sample included nursing students, a category
representing the student role would be necessary. When seeking
opinions, there is a risk of obtaining a response from an individual
who actually has no opinion on the research topic. When an item
requests knowledge that the respondent does not possess, the
subject’s guessing interferes with obtaining a true measure of the
study variable.
The response rate to questionnaires is generally lower than that
with other forms of self-reporting, particularly if the questionnaires
are sent out by mail. If the response rate is less than 40%, the
representativeness of the sample is in question. However, the
response rate for mailed questionnaires is usually small (25%−35%),
so researchers are frequently unable to obtain a representative
sample, even with randomization. There seems to be a stronger
response rate for questionnaires that are sent by e-mail, but the
response is still usually less than 40% (Saris & Gallhofer, 2014).
Strategies that can increase the response rate for an e-mailed or
mailed questionnaire are discussed in Chapter 20.
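The 40% benchmark for representativeness described above amounts to simple arithmetic on the counts of questionnaires sent and returned. The numbers below are invented.

```python
# Quick response-rate check for a mailed questionnaire.
# Counts are invented for illustration.
sent = 400        # questionnaires mailed
returned = 120    # usable questionnaires returned

response_rate = returned / sent
print(f"Response rate: {response_rate:.0%}")  # Response rate: 30%

# A rate under 40% calls the sample's representativeness into question.
representative = response_rate >= 0.40
print(representative)  # False
```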
Study participants commonly fail to respond to all the questions
on a questionnaire. This problem, especially with long
questionnaires, can threaten the validity of the instrument. In some
cases, study participants may write in an answer if they do not agree
with the available choices, or they might write comments in the
margin. Generally, these responses cannot be included in the
analysis; however, you should keep a record of such responses.
These responses might be used later to refine the questionnaire
questions and responses.
Consistency in the way the questionnaire is administered is
important to validity. Administering some questionnaires in a group
setting, mailing some, and e-mailing others introduces variability
that could confound the interpretation of the data reported by the
study participants.
There should not be a mix of mailing or e-mailing to business
addresses and to home addresses. If questionnaires are administered
in person, the administration needs to be consistent. Several
problems in consistency can occur: (1) Some subjects may ask to take
the form home to complete it and return it later, whereas others will
complete it in the presence of the data collector; (2) some subjects
may complete the form themselves, whereas others may ask a family
member to write the responses that the respondent dictates; and (3)
in some cases, a secretary or colleague may complete the form,
rather than the individual whose response you are seeking. These
situations may lead to biases in responses that are unknown to the
researcher and can alter the true measure of the variables (Dillman et
al., 2014; Harris, 2014).

Analysis of questionnaire data


Data from questionnaires are often at the nominal or ordinal level of
measurement, which limits analyses. Analysis may be limited to
descriptive statistics, such as frequencies and percentages, and
nonparametric inferential statistics, such as chi square, Spearman
rank-order correlation, and Mann-Whitney U (see Chapters 22, 23,
24, and 25). However, in certain cases, ordinal data from
questionnaires are treated as interval data, and t-tests and analysis of
variance are used to test for differences between responses of various
subsets of the sample (Grove & Cipher, 2020). Discriminant analysis
may be used to determine the ability to predict membership in
various groups from responses to particular questions.
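The nonparametric tests named above can be run with SciPy. This is a minimal sketch on invented ordinal questionnaire data; the variable names and group labels are illustrative, not drawn from any study.

```python
# Nonparametric analyses of invented ordinal questionnaire data
# (1 = strongly disagree ... 5 = strongly agree).
from scipy import stats

group_a = [1, 2, 2, 3, 3, 3, 4, 2, 3, 2]   # e.g., one subset of the sample
group_b = [3, 4, 4, 5, 3, 4, 5, 4, 3, 5]   # e.g., another subset

# Mann-Whitney U: do the two subsets differ in response distribution?
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Spearman rank-order correlation between two ordinal items.
item1 = [1, 2, 3, 4, 5, 2, 3, 4]
item2 = [2, 2, 3, 5, 4, 1, 3, 5]
rho, rho_p = stats.spearmanr(item1, item2)

# Chi-square test of independence on a 2x2 table of response counts.
table = [[30, 10], [18, 22]]
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(f"U = {u_stat:.1f} (p = {u_p:.3f}), rho = {rho:.2f}, chi2 = {chi2:.2f}")
```

Treating ordinal data as interval, as the text notes, would instead call for parametric procedures such as `stats.ttest_ind` or analysis of variance.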
