Dr. Olu's Lecture Notes
BUS 802 – BUSINESS RESEARCH METHODS 04/01/2017
Lecture Notes – Dr. Olu Akintunde, CPA, PMP, CISA
• Validity
- refers to the extent to which your measurement procedure is measuring what you think it is
measuring and whether you have interpreted your scores correctly
• A measure must be reliable in order to be valid, but a reliable measure is not necessarily valid
Types of Reliability
• Reliability
- refers to the consistency or stability of the scores of your test, assessment, instrument, or raters
• Test-retest reliability
- consistency of individual scores over time
- same test administered to individuals two times
- correlate scores to determine reliability
- how long to wait between tests
typically an increase in time between testings will decrease reliability
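A minimal sketch (in Python, with made-up scores) of how test-retest reliability could be estimated by correlating the two administrations:
```python
# Hypothetical scores from the same eight people tested twice.
import statistics

time1 = [82, 75, 91, 68, 88, 79, 95, 70]   # scores at first administration
time2 = [80, 78, 89, 71, 85, 77, 93, 74]   # scores at second administration


def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)


print(f"Test-retest reliability (r) = {pearson_r(time1, time2):.2f}")
```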
• Equivalent-forms reliability
- consistency of scores on two versions of test
- each version of test given to the same group of individuals
- e.g., SAT, GRE, IQ
• Internal consistency reliability
- consistency with which items on a test measure a single construct
- e.g., learning, extraversion
- involves comparing individual items within a single test
- coefficient alpha (Cronbach’s alpha) is common index
should be +0.70 or higher
multidimensional tests will generate multiple coefficient alphas
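A minimal sketch of how coefficient alpha could be computed from an item-by-respondent matrix; the five-item, six-person data below are invented for illustration:
```python
import statistics

# rows = respondents, columns = items (e.g., 1-5 agreement ratings)
responses = [
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [4, 4, 3, 4, 4],
    [3, 2, 3, 3, 3],
]

k = len(responses[0])                                     # number of items
item_vars = [statistics.variance(col) for col in zip(*responses)]
total_scores = [sum(row) for row in responses]
total_var = statistics.variance(total_scores)

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")   # values of +0.70 or higher are desirable
```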
• Interrater reliability
- degree of agreement between two or more observers (raters)
- how is interobserver agreement calculated
- nominal or ordinal scale
the percentage of times different raters agree
- interval or ratio scale
correlation coefficient
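A minimal sketch of both interobserver-agreement indices described above, using hypothetical ratings from two observers:
```python
# Requires Python 3.10+ for statistics.correlation.
import statistics

# Nominal ratings (e.g., behavior categories) from two observers
rater_a = ["on-task", "off-task", "on-task", "on-task", "off-task"]
rater_b = ["on-task", "off-task", "on-task", "off-task", "off-task"]
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Percent agreement = {agreement:.0%}")

# Interval ratings (e.g., 1-10 aggression ratings) from the same two observers
scores_a = [7, 4, 8, 5, 6, 9]
scores_b = [6, 5, 8, 4, 7, 9]
print(f"Interrater correlation = {statistics.correlation(scores_a, scores_b):.2f}")
```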
Validity
- The accuracy of the inferences, interpretations, or actions made on the basis of any
measurement
• Construct validity
- involves the measurement of constructs
- e.g., intelligence, happiness, self-efficacy
- do operational definitions accurately represent the construct we are interested in?
- operationalization is a never-ending process
Methods Used to Collect Evidence of Validity
• Content validity
- validity assessed by experts
do items appear to measure construct of interest? (face validity)
were any important content areas omitted?
were any unnecessary items included?
• Internal structure
- how well do individual items relate to the overall test score or other items on the test
- uni- vs. multi-dimensional constructs
- factor analysis
statistical procedure used to determine the number of dimensions present in a set of
items
- homogeneity
item-to-total correlation (coefficient alpha)
• Relations to other variables
- criterion-related validity
criterion
- the standard or benchmark that you want to correlate with or predict accurately on the basis
of your test scores
predictive validity
- using scores obtained at one time to predict the scores on a criterion at a later time
- e.g., GRE and graduate school GPA, LSAT and law school GPA, MCAT and medical school
GPA
concurrent validity
- degree to which scores obtained at one time correctly relate to the scores on a known criterion
obtained at the same time
- e.g., new depression scale and Beck Depression Inventory
– convergent validity
extent to which test scores relate to other measures of the same construct
e.g., same as predictive and concurrent validity
– discriminant validity
extent to which your test scores do not relate to other test scores measuring different
constructs
e.g., happiness and depression, depression and IQ
– known groups validity evidence
extent to which groups that are known to be different from one another actually differ
on the construct being developed
e.g., females high on femininity and males high on masculinity
Using Reliability and Validity Information
• Norming group
- the reference group upon which reported reliability and validity evidence is based
• Sources of Information about tests
- Mental Measurements Yearbook
- Tests in Print
- PsycINFO and PsycARTICLES
Sampling Methods
• Sample
- a set of elements selected from a population
• Population
- the full set of elements or people from which the sample was selected
• Sampling
- process of drawing elements from population to form a sample
• Representative sample
- a sample that resembles the population
• Equal probability of selection method (EPSEM)
- each individual element has an equal probability of selection into the sample
• Statistic
- a numerical characteristic of sample data
- e.g., sample mean, sample standard deviation
• Parameter
- a numerical characteristic of population data
- e.g., population mean, population standard deviation
• Sampling error
- the difference between the value of the sample statistic and the value of the population
parameter
• Sampling frame
- a list of all the elements in a population
• Response rate
- the percentage of individuals selected to be in the sample who actually participate in the study
Sampling Techniques
• Biased sample
- a non-representative sample
• Proximal similarity
- generalization to people, places, settings, and contexts that are similar to those described in the
study
Random Sampling Techniques
• Simple random sampling
- choosing a sample in a manner in which everyone has an equal chance of being selected
(EPSEM)
- sampling “without replacement” is preferred
- random number generators simplify the process
o www.randomizer.org
o www.random.org
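A minimal sketch of simple random sampling without replacement using Python's standard library (an alternative to the web tools above); the population of ID numbers is hypothetical, and the last two lines illustrate sampling error by comparing the sample statistic to the population parameter:
```python
import random
import statistics

population = list(range(1, 501))          # sampling frame: IDs 1..500
sample = random.sample(population, 50)    # without replacement; every ID equally likely (EPSEM)

# Sampling error = difference between the sample statistic and the population parameter
print("Population mean (parameter):", statistics.mean(population))
print("Sample mean (statistic):    ", statistics.mean(sample))
```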
• Stratified random sampling
- random samples drawn from different groups or strata within the population
groups should be mutually exclusive
strata can be categorical (nominal or ordinal) or quantitative (interval or ratio)
proportional stratified sampling
involves ensuring that each subgroup in the sample is proportional to the subgroups in the population
• Stratified random sampling example (proportional)
- strata – gender (males/females)
- population – presidents of APA, N = 122
14 female presidents (11%)
108 male presidents (89%)
sample – n = 100
11 female presidents drawn randomly
89 male presidents drawn randomly
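A minimal sketch of the proportional stratified sampling example above; the placeholder name lists simply stand in for the 14 female and 108 male presidents:
```python
import random

strata = {
    "female": [f"female_president_{i}" for i in range(1, 15)],    # 14 members (11%)
    "male":   [f"male_president_{i}" for i in range(1, 109)],     # 108 members (89%)
}
N = sum(len(members) for members in strata.values())   # population size = 122
n = 100                                                # desired sample size

sample = []
for group, members in strata.items():
    n_group = round(n * len(members) / N)              # proportional allocation
    sample.extend(random.sample(members, n_group))     # random draw within the stratum
    print(f"{group}: {n_group} drawn")
print("Total sampled:", len(sample))                   # 11 + 89 = 100
```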
• Cluster random sampling
- involves random selection of groups of individuals
- clusters
a collective type of unit that includes multiple elements (has more than one unit in it)
e.g., neighborhoods, families, schools, classrooms
- one-stage cluster sampling
randomly selecting clusters and using all individuals within them
e.g., randomly select 15 psychology classrooms using all individuals in each classroom
- two-stage cluster sampling
randomly selecting clusters AND
randomly choosing individuals within each chosen cluster
e.g., randomly select 30 psychology classrooms, then randomly select 10 students from
each of those classrooms
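A minimal sketch of two-stage cluster random sampling along the lines of the classroom example; classroom sizes and student IDs are invented for illustration:
```python
import random

# 30 hypothetical classrooms, each holding 20-35 student IDs
classrooms = {
    f"class_{c}": [f"class_{c}_student_{s}" for s in range(random.randint(20, 35))]
    for c in range(1, 31)
}

chosen_classes = random.sample(list(classrooms), 5)      # stage 1: randomly select clusters
sample = []
for c in chosen_classes:
    sample.extend(random.sample(classrooms[c], 10))      # stage 2: randomly select 10 students each

print("Classrooms chosen:", chosen_classes)
print("Sample size:", len(sample))                       # 5 classrooms x 10 students = 50
```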
• Systematic sampling
- involves three steps
determine the sampling interval (k)
population size divided by desired sample size
randomly select a number between 1 and k, and include that person in your sample
then include every kth element thereafter in your sample
periodicity
potential but uncommon problem
problematic situation in systematic sampling that can occur if there is a cyclical pattern
in the sampling frame
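A minimal sketch of the three systematic-sampling steps, applied to a hypothetical sampling frame of 1,000 names:
```python
import random

frame = [f"person_{i}" for i in range(1, 1001)]   # sampling frame (population list)
desired_n = 100

k = len(frame) // desired_n                       # step 1: sampling interval, k = 10
start = random.randint(1, k)                      # step 2: random start between 1 and k
sample = frame[start - 1::k]                      # step 3: take every kth element thereafter

print(f"k = {k}, starting position = {start}, sample size = {len(sample)}")
```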
2. Research Validity
Learning Objectives
Explain the meaning of research validity.
Explain the meaning of statistical conclusion validity and its importance in research.
Explain the meaning of construct validity as it relates to experimental research.
Describe the threats to construct validity and explain how they threaten construct validity.
Explain the meaning of internal validity and its importance in making causal inferences.
Describe the threats to internal validity and explain how these threats operate in a one-group
and in a two-group design.
Explain how to eliminate the threats to internal validity.
Explain the meaning of external validity and describe the conditions that threaten external
validity.
LECTURE NOTE OUTLINE
Research Validity
• Truthfulness of inferences made from a research study
- Four major types of research validity
statistical conclusion validity
construct validity
internal validity
external validity
Statistical Conclusion Validity
The validity with which we can infer that the independent and dependent variables covary
Answers the question – do independent and dependent variables covary
Inferential statistics allow us to establish this type of validity
Small sample size is a threat to statistical conclusion validity
Construct Validity
• The extent to which we can infer higher-order constructs from the operations we use to represent
them
- e.g., depression, obsessive-compulsive disorder, intelligence, love
• Constructs are used for
- research participants
e.g., children, adults, individuals with sleep apnea
- independent variable
e.g., frustration, anxiety, room temperature, encoding
- dependent variable
e.g., memory, intelligence, reaction time
- experimental setting
e.g., laboratory, college campus, doctor’s office
Threats to Construct Validity
• Participants' reactivity to the experimental situation
- research participants’ motives and tendencies that affect their perceptions of the situation and
their responses on the dependent variable
- influenced by demand characteristics
any of the cues available in an experiment, such as instructions, rumors, or setting
characteristics, that influence the responses of participants
- primary motive – positive self-presentation
- implication for research
• Experimenter effect
- actions and characteristics of researchers that influence the responses of participants
- can be intentional or unintentional
• Experimenter’s motive of supporting the study hypothesis can lead to bias
• Ways experimenter may bias the study
- experimenter attributes
biasing experimenter effects attributable to the physical and psychological
characteristics of the researcher
three categories
biosocial attributes
e.g., experimenter’s age, gender, race, religion
psychosocial attributes
e.g., experimenter’s anxiety level, need for social approval, hostility
situational factors
e.g., prior contact between experimenter and participant, whether the
experimenter is naïve or experienced
- experimenter expectancies
biasing experimenter effects attributable to the researcher’s expectations about the outcome
of the experiment
e.g., experimenter acts differently toward experimental vs. control group to
influence study to support their hypothesis
Internal Validity
• The correctness of inferences made by researchers about cause and effect
• Criteria for identifying a causal relation
- cause (IV) must be related to the effect (DV) (relationship condition)
- changes in IV must precede changes in DV (temporal order condition)
- no other plausible explanation must exist for the effect
• Primary threat
- confounding extraneous variables
• Extraneous variable
- a variable that competes with the IV in explaining the DV
• Confounding extraneous variables
- an extraneous variable that co-occurs with the IV and affects the DV
• Example
- IV = tutoring, DV = grades
- you don’t use random assignment, but use two intact classrooms to serve as experimental and
control groups
- the experimental group (who receives extra tutoring) shows significant improvement in grades
- the class serving as your experimental group is an honors class. This is a confounding
extraneous variable
• Eliminate the confounding influence of extraneous variables by
- holding their influence constant
- using random assignment to balance their influence
Threats to Internal Validity
• History
- any event that can produce the outcome, other than the treatment condition, that occurs during
the study before posttest measurement
- typically can be controlled for with comparison control group
• Differential history
- occurs in multi-group design when event has differential impact on groups
- example
a researcher wants to test a new treatment for bipolar disorder on a group of patients. All
patients seem to be showing improvement after 4 weeks
history threat – at Week 2 of treatment, actress Catherine Zeta-Jones announces that she is
bipolar and wants everyone who is also suffering with it to know that treatment is key
can the researcher conclude that the new treatment is responsible for the improvement in
the patients?
• Differential history
- example
Shadish and Reid (1984)
evaluation of the efficacy of the WIC program
experimental group – received WIC assistance
control group – did not receive WIC assistance
differential history threat – women who received WIC also received food stamps
can you conclude that the WIC assistance led to improved pregnancy outcomes in the
experimental group?
• Maturation
- any physical or mental change that occurs with the passage of time and affects dependent
variable scores
- e.g., age, learning, fatigue, boredom, and hunger
- study example
evaluation of Head Start with only one group can leave your study open to maturational
threat
- typically can be controlled for with comparison control group
• Instrumentation
- changes from pretest to posttest in the assessment or measurement of the dependent variable
- e.g., human observers may change their measurements because they become bored or fatigued
using multiple observers can assess the validity and reliability of their observations
training observers on observation techniques can also decrease instrumentation bias
• Testing
- changes in a person’s score on the second administration of a test resulting from having
previously taken the test
- typically can be controlled for with comparison control group
- you might ask why not just eliminate the pretest?
we will see the many advantages of including pretests in Chapter 8
• Regression artifact
- effects that appear to be due to the treatment, but are due to regression to the mean
• Regression to the mean
- the tendency for extreme scores to be closer to average at posttest
• Potential problem if participants with extreme scores at pretest are selected for study
• Typically can be controlled for with comparison control group
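A minimal simulation sketch of regression to the mean: when only extreme pretest scorers are selected, their posttest mean drifts back toward the overall average even with no treatment. All numbers below are simulated, not from any real study:
```python
import random
import statistics

random.seed(1)
true_scores = [random.gauss(100, 15) for _ in range(1000)]          # stable underlying ability
pretest  = [t + random.gauss(0, 10) for t in true_scores]           # score = ability + measurement error
posttest = [t + random.gauss(0, 10) for t in true_scores]           # new, independent error at posttest

# Select only people with extreme (low) pretest scores, as a poorly designed study might
extreme = [i for i, p in enumerate(pretest) if p < 80]
print("Extreme group pretest mean: ", statistics.mean(pretest[i] for i in extreme))
print("Extreme group posttest mean:", statistics.mean(posttest[i] for i in extreme))   # closer to 100
print("Overall pretest mean:       ", statistics.mean(pretest))
```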
• Attrition
- loss of participants because they don’t show up or they drop out of the research study
- only problematic when participants are tested multiple times
• Differential attrition
- in a multi-group design, groups become different on an extraneous variable because of
differences in the loss of participants across the groups
- example
IV = new exercise program
DV = weight loss
differential attrition = more participants drop out of the experimental group because
they are unmotivated to exercise
• Selection
- production of nonequivalent groups because a different selection procedure operates across the
groups
- can be a threat when no random assignment is used in a multi-group design
- e.g., tutoring example of confounding EV
• Additive and interactive effects
- produced by the combined effect of two or more threats
- important to note the interaction between selection and other threats
selection-history
selection-maturation
selection-instrumentation
selection-testing
selection-regression artifact
External Validity
• Degree to which the study results can be generalized to and across other people, settings, treatments,
outcomes, and times
• A failure to generalize can result from several factors:
- lack of random selection
- chance variation
replication can help reduce this factor
- failure to identify interactive effects of independent variables
e.g., the effect of an attitude change procedure interacts with gender, meaning it works
better for males than for females
Types of External Validity
• Population validity
- degree to which the study results can be generalized to and across the people in the target
population
- target population vs. accessible population
you must generalize from the sample to the accessible population
then from the accessible population to the target population
• Ecological validity
- the degree to which the results of a study can be generalized across settings or environmental
conditions
- reduced ecological validity is a common criticism of laboratory experiments
• Temporal validity
- the degree to which the results can be generalized across time
- cyclical variation
any type of systematic up-and-down movement on the dependent variable over time
e.g., seasonal variation
values on the dependent variable vary by season
e.g., if research is conducted in the summer months, increased juvenile delinquency
would be observed compared to months when school is in session
• Treatment variation validity
- the degree to which the results of a study can be generalized across variations in the treatment
- e.g., clinical treatments in research studies are typically delivered by seasoned, competent
therapists. The results of these studies may not generalize to treatment of the general public
• Outcome validity
- the degree to which the results of a study can be generalized across different, but related,
dependent variables
- e.g., job training programs increase a person's chances of getting a job after graduation. Does
this correlate with the person keeping the job, which is another potential dependent measure?
Relationship between Internal and External Validity
Relationship between internal and external validity is often inverse
Factors that increase our ability to establish cause and effect tend to decrease our ability to
generalize
External validity is established through replication
Emphasis of internal or external validity depends on whether or not a causal relationship has
been established
3. Survey Research
Learning Objectives
Define and explain the meaning of survey research.
Explain when survey research is used in psychology.
Distinguish between cross-sectional and longitudinal designs.
Discuss survey data collection methods.
Explain how to conduct an effective interview.
Explain the 12 principles of questionnaire construction.
Explain how to construct a survey instrument.
Explain how to select a survey sample.
Describe how to prepare survey data for analysis.
LECTURE NOTE OUTLINE
Survey Research
• Nonexperimental method using interviews or questionnaires to assess attitudes, activities, opinions,
or beliefs
• Surveys often used to
- assess changes in attitudes over time
- test theoretical models
- describe and predict behavior
• To ensure high external validity, random samples should be used
Steps in Conducting Survey Research
• Plan and design the survey research study
- determine what issues you want to survey
- determine whether a cross-sectional or longitudinal design will be used
- identify the target population and select the sample(s)
• Construct and refine the survey instrument
• Collect the survey data
• Enter and “clean” the data
- locate and eliminate errors where possible
• Analyze the survey data
• Interpret and report the results
Cross-sectional Designs
• Cross-sectional studies
- collecting data in a single, brief time period
- typically from multiple groups in survey research
- examples
o Whisman (2007)
“Marital Distress and DSM-IV Psychiatric Disorders in a Population-Based
National Survey”
a national survey research study with a representative sample of English-
speaking adults (18 years or older) in the United States
found that marital distress was associated with anxiety, mood, and substance
disorders
the association between marital distress and depression was stronger when one
moved from younger to older age groups
o Plous (1996)
surveyed APA members to determine members’ attitudes toward the use of
animals in research
the majority of respondents approved the use of animals, but wanted to
eliminate or minimize the pain experienced by research animals and the
number of animals euthanized
Longitudinal Designs
• Longitudinal studies
- collecting data from the same participants at more than one point in time
- can be time consuming and expensive
- in survey research, longitudinal studies are often called panel studies
o type of longitudinal design in which the same individuals are surveyed multiple times
over time
o Example
Moskowitz and Wrubel (2005)
wanted to gain a more in-depth understanding of the meaning of having
contracted HIV
participants included 57 gay men, ranging in age from 24 to 48, who tested
positive for HIV
researchers conducted bimonthly interviews over the course of two years to
identify how these individuals appraised their HIV-related changes over time
Trend Studies
• Independent samples are taken successively from a population over time and the same questions are
asked
- i.e., same survey questions are asked of different samples over time
- example
o General Social Survey
conducted by the National Opinion Research Center (at the University of
Chicago)
each year, a different sample of U.S. citizens who are 18 years or older is
asked questions about many social, psychological, and demographic variables
Survey Data Collection Method
• Interview
- verbal self-report data are collected from interviewees by an interviewer
- types
o face-to-face or personal interview
advantages
ability to clear up ambiguities and higher completion rate
disadvantage
expense and participants may be uncomfortable discussing private
issues
o telephone interview
less expensive than face-to-face and yields comparable data
can utilize random digit dialing for random samples
• Questionnaire
- self-report data collection instrument filled out by research participants
• Mail questionnaires
- advantage
o low cost
- disadvantage
o low return rate, typically 20-30%
• Group-administered questionnaire
- advantage
o quick and efficient
- disadvantage
o cannot be used if participants are spread out across locations
• Electronic survey
- e-mail and Web-based
- advantages of electronic surveys
o low cost
o instant access to wide audience
o data arrive in a form that is easy to analyze
o flexible in layout – especially Web-based survey
- disadvantages of electronic surveys
o privacy and anonymity may not be upheld
o sample may not be representative of population because of volunteer sampling
Constructing and Refining a Survey Instrument
• Principle 1. Write items to match the research objectives
- construct items that cover the different areas and content needed to fulfill your objectives
- conduct an extensive review of the literature to make sure you have identified all areas that you
need to cover
- write items and construct a questionnaire that will have the psychometric properties of
providing reliable and valid data
o content and construct validity are especially relevant
• Principle 2. Write items that are appropriate for the respondents to be surveyed
- who will be completing the questionnaire?
o you need to consider, empathetically, how your participants will view what you write
o don’t use stilted or pretentious language
o consider reading level and the demographic and cultural characteristics of your
participants
o write items that are understandable and meaningful to participants
o use natural and familiar language
• Principle 3. Write short, simple questions
- survey questionnaire items should be short, clear, and precise
- use simple language and avoid jargon
- write items that are unambiguous and easy to answer
• Principle 4. Avoid loaded or leading questions
- loaded term
o a word that produces an emotionally charged reaction
o example – “liberal”
may have political connotations, even when used in an innocuous statement such as
“I like a liberal amount of peanut butter on my sandwich”
- leading question
o suggests to the respondent how they should respond
o example
• Principle 5. Avoid double-barreled questions
- a double-barreled question asks about two or more issues in a single question
- example
o “do you agree that President Obama should focus his primary attention on the economy
and foreign affairs?”
• Principle 6. Avoid double negatives
- double negative
o a sentence construction that contains two negatives
o example
do you agree or disagree with the following statement?
psychology professors should not be allowed to conduct research during their
office hours
• Principle 7. Determine whether closed-ended or open-ended questions are needed
- open-ended question
o a question that allows participants to respond in their own words
o example
“what do you do most often when you feel depressed?”
o open-ended better if researcher is unsure what respondent is thinking or variable is ill-
defined
o commonly used in exploratory or qualitative research
o responses to open-ended questions must be coded and categorized
- closed-ended question
o a question where participants must select their answer from a set of predetermined
response categories
o closed-ended are easier to code and provide more standardized data
o example
- mixed-question format
o a combination of both open and closed-ended questions
o example
• Principle 8. Construct mutually exclusive and exhaustive categories
- mutually exclusive
o the categories do not overlap
- exhaustive
o categories include all possible responses
• Principle 9. Consider the different types of closed-ended response categories
- rating scales
o dichotomous
two choices (e.g., yes and no)
o multichotomous
more than two choices (usually preferred)
ability to measure direction and strength of attitude
distance between each descriptor should be the same
anchors
descriptors placed on points on a rating scale
Examples of Response Categories for Popular Rating Scales
- binary forced choice
o participant chooses one of a pair of attitudinal objects
o can reduce response set
o can be difficult for item analysis
o typically not recommended
Binary Forced Choice Example
• Example
- Narcissistic Personality Inventory (NPI) (Foster & Campbell, 2007)
o used to measure “normal” narcissism in personality and social psychological research
- rankings
o participants asked to put their responses in ascending or descending order
o can be open or closed ended
o typically rank 3-5 objects
o example
- checklists
o participants asked to check all response categories that apply
o example
• Principle 10. Use multiple items to measure complex or abstract constructs
- variables like gender, weight, or ethnicity can be easy to measure
- complex or abstract constructs such as self-esteem, intelligence, or locus of control can be
harder
o multiple items needed to measure these constructs
- semantic differential
o scaling method in which participants rate an object on a series of bipolar rating scales
Semantic Differential Example
• “Occupation and Social Experience: Factors Influencing Attitude Towards People with
Schizophrenia” (Ishige & Hayashi, 2005)
- measured the participants’ attitudes using 20 bipolar adjectives
- adjective pairs used
o safe vs. harmful, bad vs. good, fierce vs. gentle, shallow vs. deep, active vs. inert, lonely
vs. jolly, simple vs. complicated, dirty vs. clean, distant vs. near
• Principle 10. Use multiple items to measure complex or abstract constructs
- Likert scaling
o a multi-item scale is used to measure a single construct by summing each participant’s
responses to the items on the scale
o questions can be positively or negatively worded
o statistically analyzed using coefficient alpha
Likert Scale Example
Five items each positively and negatively worded
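A minimal sketch of Likert scoring: reverse-score the negatively worded items, then sum each participant's responses into one construct score (coefficient alpha could then be computed as in the earlier reliability sketch). The item data, and which items are negatively worded, are hypothetical:
```python
NEGATIVELY_WORDED = {1, 3}        # item indices (0-based) assumed to be negatively worded
SCALE_MAX = 5                     # responses run 1-5

responses = [                     # rows = participants, columns = items 1-5
    [5, 2, 4, 1, 5],
    [4, 2, 5, 2, 4],
    [2, 4, 2, 5, 1],
]


def total_score(row):
    """Reverse-score negative items (6 - response on a 1-5 scale), then sum."""
    return sum(
        (SCALE_MAX + 1 - r) if i in NEGATIVELY_WORDED else r
        for i, r in enumerate(row)
    )


for person, row in enumerate(responses, start=1):
    print(f"Participant {person}: Likert total = {total_score(row)}")
```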
Constructing and Refining a Survey Instrument
• Principle 11. Make sure the questionnaire is easy to use from beginning to end
- ordering of questions
o if using positive and negative questions, ask positive questions first
o ask interesting questions first to capture participants' attention
o demographic questions last
- questionnaire length
o optimal questionnaire length is unknown
o mail questionnaires should be short
o telephone interviews should be less than 15 minutes
o face to face interviews can be longer
- contingency questions
o an item directing the participant to different follow-up questions depending on the
initial response
o too many can be confusing for the participant
o example
- response bias
o social desirability bias
occurs when participants respond in a way to make themselves look good
can minimize by ensuring anonymity
if using binary forced choice questions, make each choice equally desirable
- response set
o tendency to respond in a specific way
o participants may not want to pick extremes and always choose the middle option
solution: use an even number of response categories on the rating scale
o including multiple question types helps to reduce response set, but can also reduce
reliability
• Principle 12. Pilot test the questionnaire until it is perfected
- to identify and fix problems
- to practice protocols
- clear up ambiguity
- use think aloud technique
Selecting Your Survey Sample From the Population
If the primary goal is to explore relationships between variables rather than to generalize, a
convenience sample is acceptable
If generalization to the population is needed, a random sampling method should be used
Preparing and Analyzing Survey Data
• Check for errors
- examples
o a participant answers 7 on a Likert scale that ranges from 1 to 4
o a participant does not answer a question, leaving missing data
• Analyze quantitative data with statistical analysis
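A minimal sketch of the error-checking step, flagging the two kinds of problems listed above (out-of-range answers and missing answers) in hypothetical survey records:
```python
VALID_RANGE = range(1, 5)          # legal responses are 1-4 on this (made-up) item

records = [                        # None marks an unanswered question
    {"id": 1, "q1": 3, "q2": 2},
    {"id": 2, "q1": 7, "q2": 4},   # 7 is outside the 1-4 scale: an entry error
    {"id": 3, "q1": 2, "q2": None},
]

for rec in records:
    for question, answer in rec.items():
        if question == "id":
            continue
        if answer is None:
            print(f"Participant {rec['id']}: {question} is missing")
        elif answer not in VALID_RANGE:
            print(f"Participant {rec['id']}: {question}={answer} is out of range")
```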
4. Qualitative and Mixed Methods Research
Learning Objectives
- Compare and contrast quantitative, qualitative, and mixed methods research.
- Explain the characteristics, strengths, and weaknesses of qualitative research.
- Explain the major types of validity in qualitative research.
- Explain the validity strategies used to obtain strong qualitative research.
- Compare and contrast the four major qualitative research methods.
- Explain the basic mixed methods research designs.
- Explain the major types of validity in mixed methods research.
• Descriptive validity
- the factual accuracy of the researcher’s account
- strategies for achieving
investigator triangulation
- use of multiple investigators to collect and interpret the data
- helps to ensure descriptive validity
• Interpretive validity
- accurately portraying the participants’ subjective viewpoints and meanings
- strategies for achieving
participant feedback
member checking to see if participants agree with the researcher’s statements,
interpretations, and conclusions
low-inference descriptors
descriptions that are very close to participants’ words or are direct verbatim quotes
• Theoretical validity
- degree to which theory or explanation fits data
- strategies for achieving
extended fieldwork
spending enough time in the field to fully understand what is being studied
theory triangulation
the use of multiple theories or perspectives to aid in interpreting the data
pattern matching
construction and testing of a complex hypothesis
peer review
discussing your interpretations with your peers and colleagues
• Internal validity
- is observed relationship causal?
- idiographic causation
an action for a particular person in a local situation with an observable result
- nomological causation
the standard view of causation in science; refers to causal relationships among variables
- strategies to achieve
researcher – as detective
metaphor applied to researcher looking for the local cause of a single event
methods triangulation
use of multiple research methods or methods of data collection
data triangulation
use of multiple sources of data
• External validity
- the ability to generalize the findings to other people, settings, and times
o naturalistic generalization
generalization based on similarity, made by the reader of a research report
o theoretical generalization
generalization of a theoretical explanation beyond the particular research study
Four Major Qualitative Research Methods
• Phenomenology
- researcher attempts to understand and describe how one or more participants experience a
phenomenon
o e.g., death of a loved one, a counseling session, an illness, winning a championship
football game, or experiencing a specific emotion such as guilt, anger, or jealousy
- key question
o what is the meaning, structure, and essence of the lived experience of this phenomenon
for a particular individual or for many individuals?
- accessing participants’ life world
o the research participant’s inner world of subjective experience
o where you have your “lived experiences”; where your immediate consciousness exists
• examples of phenomenological experiences that have been studied
- obsessive-compulsive disorder (Garcia et al., 2009; Wahl, Salkovskis, & Cotter, 2008)
- addiction (Gray, 2004)
- racism (Beharry & Crozier, 2008)
- sexual abuse (Alaggia & Millington, 2008)
- psychotic symptoms in narcolepsy (Fortuyn et al., 2009)
- life satisfaction (Thomas & Chambers, 1989)
- the meaning of aging (Adams-Price, Henley, & Hale, 1998).
• Phenomenology
- primary method of data collection
o in-depth interviews
o extract phrases and statements that pertain to the phenomenon
o interpret and give meaning to phrases and statements
o write narrative describing the phenomenon
Four Major Qualitative Research Methods
• Ethnography
- focuses on the discovery and description of the culture of a group of people
- Culture
o The shared beliefs, values, practices, language, norms, rituals, and material things that
the members of a group use to interpret and understand their world
- Shared values
o culturally defined standards about what is good or bad or desirable or undesirable
- Holism
o idea that a whole, such as a culture, is more than the sum of its individual parts
- Shared beliefs
o statements or conventions that people sharing a culture hold to be true or false
- Norms
o written and unwritten rules specifying how people in a group are supposed to think and
act
- the focus of ethnography
o emic perspective
the insider’s perspective
o etic perspective
the researcher’s external or “objective outsider” perspective
- primary data collection method
o participant observation
researcher becomes an active participant in the group being investigated
requires entry and acceptance by group
must guard against reactive effect
non-typical behavior of participants because of the presence of the
researcher
collect information by observing and listening
- data analysis
o identify themes and patterns of behavior
- write narrative report
• Ethnography
- other terminology
o gatekeepers
group members who control a researcher’s access to the group
o ethnocentric
judgment of people in other cultures based on the standards of your culture
o going native
over-identification with the group being studied so that one loses any
possibility of objectivity
o fieldwork
a general term for data collection in ethnographic research
o fieldnotes
notes taken by the researcher during (or immediately after) one’s observations
in the field
Four Major Qualitative Research Methods
• Case study research
- method in which the researcher provides a detailed description and account of one or more
cases
- case
o a bounded system
- primary data collection method
o multiple sources and methods of data collection are used
o examples
in-depth interviews, documents, questionnaires, test results, and archival
records
contextual and life history data are also collected to contextualize the case and
to aid in understanding the causal trajectories that might have influenced the
case
- types of case studies
o intrinsic case study
case study in which the researcher is only interested in understanding the
individual case, organization, or event
example – Exhibit 13.1
o instrumental case study
case study in which the researcher studies a case in order to understand
something more general than the particular case
conducted to provide insight into an issue or to develop, refine, or alter some
theoretical explanation
example – case studies of Eric Harris and Dylan Klebold after Columbine
o collective case study
study of multiple cases for the purpose of comparison
examples
case study of three individuals with intellectual disabilities who are
placed in a general education class
examining several astronauts’ descriptions and experiences of being in
space
data analysis
cross-case analysis – analysis in which cases are compared and
contrasted
Four Major Qualitative Research Methods
• Grounded theory
- methodology for generating and developing a theory that is grounded in the particular data
- originally formulated by Glaser and Strauss (1967)
- foundational question
o what theory or explanation emerges from an analysis of the data collected about this
phenomenon?
• Four key characteristics of good grounded theory
- the newly constructed grounded theory should fit the data.
o does the theory correspond to real-world data?
- the theory must provide understanding of the phenomenon
o is the theory clear and understandable to researchers and practitioners?
- the theory should have some generality
o is the theory abstract enough to move beyond the specifics in the original research
study?
- the theory should contribute to some control of the phenomenon
o can the theory be applied to produce real-world results?
Grounded Theory Example – van Vliet (2008)
• Grounded theory
- data collection
o most common methods of data collection are interviews and observations
- data analysis included
o open coding
first stage of data analysis in GT; it’s the most exploratory stage
o Axial coding
second stage of data analysis in GT; focus is on making concepts more abstract
and ordering them into the theory
o selective coding
third and final stage of data analysis in GT in which the theory is finalized
- theoretical saturation
o occurs when no new information relevant to the GT is emerging from the data and the
GT has been sufficiently validated
Mixed Methods Research
• The research approach in which both quantitative and qualitative methods are used
• Compatibility thesis
- position that quantitative and qualitative research methods and philosophies can be combined
• Pragmatism
- philosophy focusing on what works as the criterion of what should be viewed as tentatively
true and useful in research and practice
• Questions to be answered when using a mixed design
- should you primarily use one methodology or treat them equally?
- should phases of study be conducted concurrently or sequentially?
Research Validity in Mixed Methods Research
• Inside–outside validity
- present when the researcher provides both the insider and objective outsider perspectives
• Weakness minimization validity
- present when the researcher compensates for the weakness of one approach through the use of
an additional approach
• Sequential validity
- making sure that the ordering of quantitative and qualitative components in a sequential design
does not bias the results
• Sample integration validity
- researchers must not treat the quantitative and qualitative samples as equal, but, instead, draw
appropriate conclusions from each sample
• Multiple validities
- making sure your mixed methods study meets appropriate quantitative, qualitative, and mixed
methods validity types
Mixed Methods Designs
• Design scheme based on two dimensions
- time order
o one of the two dimensions used in MM design matrix; its levels are concurrent and
sequential
- paradigm emphasis
o one of the two dimensions used in MM design matrix; its levels are equal status and
dominant status
Mixed Method Design
- QUAN and quan both stand for quantitative research
- QUAL and qual both stand for qualitative research
- Capital letters denote priority or increased weight or emphasis
- Lowercase letters denote lower priority or weight or emphasis
- A plus sign (+) indicates the concurrent conduct of the quantitative and qualitative parts (e.g.,
collection of data)
- An arrow (→) represents a sequential conduct of the quantitative and qualitative parts (e.g.,
collection of data)