4. An Introduction to Quantitative Research Methods


An Introduction to Quantitative Research Methods

Professor Andrew Thomas
Aberystwyth Business School
Agenda
• Overview of methods (traditional vs mixed)
• Experimental Design
• Survey Data (Distributions and Regression)
• Hypothesis Testing

Traditional Viewpoints

[Chart: depth of analysis (y-axis) plotted against number of participants (x-axis). Qualitative methods sit high on depth with few participants, quantitative methods low on depth with many participants, and mixed methods occupy the space between as a new opportunity.]
Quants
• You will not get a PhD on a purely quantitative analysis.
• Problems:
– Hypotheses proved (or not) – so what?
– Using quants as a methodological contribution only – not enough.
– The answer is ‘12’ and I have proven it – so what?
Quants
• Use quants to explore, test and validate large-scale data findings.
• It must be followed up by rigorous qualitative development of your arguments, using the quants to validate the contribution.
Quants
• Primarily large scale data analysis
• What is the correct sample size?
• What are we measuring?
• How do we measure?
• Do we need to sanitise the data?
• How can we test the data set?
• How can we validate the data?

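The sample-size question above has no single answer, but a standard power calculation gives a useful first approximation. A minimal sketch (my own illustration, not from the slides), assuming a two-sided, two-sample comparison of means under the normal approximation, using only the Python standard library:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size_d, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect a standardised
    mean difference (Cohen's d) in a two-sided, two-sample test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_power = NormalDist().inv_cdf(power)          # value for the desired power
    return ceil(2 * ((z_alpha + z_power) / effect_size_d) ** 2)

# A "medium" effect (d = 0.5) at the conventional alpha = .05, power = .80:
n = n_per_group(0.5)  # roughly 63 per group under this approximation
```

Note how halving the expected effect size roughly quadruples the required sample, which is why "what is the correct sample size?" depends on what you expect to find.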
Quants

• Unless you are proposing a quantitative methodological contribution to your work (i.e. a new statistical method of testing data, etc.), use the most appropriate quants techniques that validate your findings and leave it there.
• No extra points for making your data ‘very hard’ to understand – it is not clever.
• Every research method (RM) has weaknesses, so be prepared to find out what weaknesses your RM has and prepare to defend your decision.
Answering Questions

• Quantitative research attempts to answer questions by ascribing importance (significance) to numbers, sizes, reactions and results.
The Researcher

• The researcher’s relationship with study participants can influence outcomes.
• The researcher is always concerned with how various factors (including the nature of the relationship) affect study results.
Pluralist Approach

• Embrace qualitative and quantitative as best fits each particular situation
• Acknowledge the value of more than one method of knowing what we need to know
Pros of Quantitative Research?

• Clear interpretations
• Make sense of and organize perceptions
• Careful scrutiny (logical, sequential, controlled)
• Reduce researcher bias
• Results may be understood by individuals in other disciplines
Cons of Quantitative Research?

• Cannot assist in understanding issues in which basic variables have not been identified or clarified
• Only 1 or 2 questions can be studied at a time, rather than the whole of an event or experience
• Complex issues (emotional response, personal values, etc.) cannot always be reduced to numbers
Scientific Attitudes

• Empirical Verification through observation or experimentation
• Ruling out simple explanations prior to adopting complex ones
• Cause-Effect
• Probability of response
• Replication of response
Six Types
• Experimental
• Survey
• Meta-Analysis
• Quantitative Case Study
• Applied Behavior Analysis
• Longitudinal
Experimental Research

• Compare two or more groups that are similar except for one factor or variable
• Statistical analysis of data
• Conditions are highly controlled; variables are
manipulated by the researcher
“The effects of” “The influence of…”
Survey Research

• Use a set of predetermined questions
• Collect answers from a representative sample
• Answers are categorized and analyzed so
tendencies can be discerned
Meta-Analysis

• Numerous experimental studies with reported statistical analysis are compared
• Distinguishes trends
• Effect size (the influence of the independent
variable on the dependent variable) can be
compared
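The effect size compared across studies in a meta-analysis is commonly Cohen's d, the standardised mean difference. A sketch of the computation with the Python standard library (the two groups of scores are invented for illustration):

```python
from math import sqrt
from statistics import mean, variance

def cohens_d(treatment, control):
    """Standardised mean difference: the effect of the independent variable
    expressed in pooled standard deviation units."""
    n1, n2 = len(treatment), len(control)
    pooled_var = ((n1 - 1) * variance(treatment) +
                  (n2 - 1) * variance(control)) / (n1 + n2 - 2)
    return (mean(treatment) - mean(control)) / sqrt(pooled_var)

d = cohens_d([5, 6, 7, 8, 9], [3, 4, 5, 6, 7])  # about 1.26 here
```

Because d is expressed in standard-deviation units rather than raw scores, effects from studies that used different measurement instruments can be compared on a common scale.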
Case Study

• Also called single case design
• Describes numerically a specific case (can be group or individual)
• May test or generate hypotheses
• Results often presented with tables and
graphs
Applied Behavior Analysis (ABA)

• One person
• Examine the individual’s responses in different situations (conditions) across time
• Results are usually depicted with tables and graphs
• Conclusions based on data in these forms of presentation
Longitudinal

• Individual or group research conducted across time
• Subject attrition is a major problem
• Preserving confidentiality is also difficult
• Specific standardized tools may change over
time
Hypothesis

• Hypothesis = an idea that will be tested through systematic investigation
• A researcher’s prediction of what outcomes will occur
• More clearly stated in research of 10 years ago than now
• Fits experimental research, also called “Hypothesis Testing”
Basic Statistical Theory
Independent Variable

• The variable that is controlled or manipulated by the researcher
• The variable that is thought to have some effect upon the dependent variable
• The one difference between the treatment (experimental) and control groups
• X Axis
Dependent Variable

• That which is measured
• The outcome
• That which is influenced or affected by the independent variable
• Y Axis
Reliability

• The ability of a measurement tool to yield consistent results over time or under similar conditions
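One widely used numerical check of reliability (specifically of internal consistency across the items of a scale) is Cronbach's alpha. A sketch with the formula hand-coded over plain Python lists; the data and the choice of alpha as the index are my illustration, not the slides':

```python
from statistics import variance

def cronbach_alpha(items):
    """Internal-consistency coefficient for a multi-item scale.
    `items` holds one list of scores per questionnaire item, with
    respondents in the same order in every list."""
    k = len(items)
    item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Three perfectly parallel items give the maximum alpha of 1.0
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```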
Content Validity

• The extent to which the items on a testing tool (that being used to measure the dependent variable) reflect all of the facets being studied
• All aspects are sampled (e.g. aural skills final exam)
Criterion-Related Validity

• Also called Predictive Validity
• The extent to which a testing tool yields data that allow the researcher to make accurate predictions about the dependent variable
Construct Validity

• The extent to which the testing tool measures what it is supposed to measure
• Relationship between the items on the tool and the dependent variable
• Also relates to actual (physical) construction of a written tool (Survey Design) and how this impacts the accuracy of the results
Internal Validity

• Relates to the internal aspects of a study and their effect on the outcome:
• researcher planning and preparation
• judgment
• control for potential confounding variables
External Validity

• Relates to the extent to which findings can generalize beyond the actual study participants
• “How valid are these results for a different group of people, a different setting, or other conditions of testing, etc.?”
What we will talk about

• Measurement
– Population & Sampling
– Random Assignment
– Generalizability

• Method
– Experiments & Quasi-experiments
– Questionnaires & Surveys
Measurement – Sampling

• Specify your population of concern
• Sampling
– Selecting respondents from population of concern
– Random sampling
– Systematic selection
– Stratified sampling
– Convenience sampling
– Snowball sampling
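Three of the selection schemes above can be sketched in a few lines of standard-library Python; the sampling frame of 1,000 IDs and the two strata are hypothetical:

```python
import random

# Hypothetical sampling frame: 1,000 numbered members of the population
population = list(range(1, 1001))

# Simple random sampling: every member has an equal chance of selection
simple = random.sample(population, k=50)

# Systematic selection: every k-th member after a random starting point
step = len(population) // 50
start = random.randrange(step)
systematic = population[start::step][:50]

# Stratified sampling: draw separately within known subgroups (strata)
strata = {"staff": population[:200], "students": population[200:]}
stratified = [unit
              for group in strata.values()
              for unit in random.sample(group, k=25)]
```

Convenience and snowball sampling have no such recipe: they are defined by who is reachable, which is exactly why they carry the biases discussed on the next slide.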
Sampling Biases (Watch out!)

• Non-response bias
– Be persistent
– Offer incentives and rewards
– Make it look important

• Volunteer bias
– Some people volunteer reliably more than others
for a variety of tasks
Generalizability

• How do you know that what you found in your research study is, in fact, a general trend?
• Does A really, always cause B?
• If A happens, is B really as likely to happen as you claim? Always? Under certain conditions?
Experiments

• An operation or procedure carried out under controlled conditions to discover an unknown effect or law, to test or establish a hypothesis, or to illustrate a known law
Experiments

• Key feature common to all experiments:
– To deliberately vary something in order to discover what happens to something else later
– To seek the effects of presumed causes
An Experiment is
• A controlled empirical test of a hypothesis.

• Hypotheses include:
– A causes B
– A is bigger, faster, better than B
– A changes more than B when we do X

• Two requirements:
– Independent variable that can be manipulated
– Dependent variable that can be measured
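Testing a hypothesis such as "A is bigger than B" on a measured dependent variable usually comes down to a test statistic. A sketch of Welch's t statistic, hand-coded with the Python standard library (the scores are invented; in practice you would look up the p-value or use a statistics package):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(group_a, group_b):
    """t statistic for the difference between two independent group means
    (Welch's form, which does not assume equal group variances)."""
    se = sqrt(variance(group_a) / len(group_a) +
              variance(group_b) / len(group_b))
    return (mean(group_a) - mean(group_b)) / se

# Hypothetical dependent-variable scores: treatment vs control condition
t = welch_t([12, 15, 14, 16, 13], [10, 11, 9, 12, 10])  # roughly 4.13
```

The larger |t| is relative to what chance alone would produce, the less plausible the null hypothesis of no difference.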
Experiments in Research

• Comparing one design or process to another
• Deciding on the importance of a particular feature in a user interface
• Evaluating a technology or a social intervention in a controlled environment
• Finding out what really causes an effect
• Finding out if an effect really exists
Remember
• Experiments explore the effects of things that
can be MANIPULATED
Types of Experiments

• Randomized – units/participants assigned to receive treatment or alternative condition randomly
• Quasi – no random assignment
• Natural – contrasting a naturally occurring event (e.g. a disaster) with a comparison condition
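Random assignment, the defining feature of a randomized experiment, is simple to implement. A minimal sketch (participant IDs and condition names are hypothetical):

```python
import random

def randomly_assign(participants, conditions=("treatment", "control")):
    """Shuffle the participant list, then deal it round-robin into the
    conditions, so assignment is random and group sizes stay balanced."""
    pool = list(participants)
    random.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, participant in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(participant)
    return groups

groups = randomly_assign(range(20))  # 10 per condition, chosen at random
```

Shuffling before dealing is what gives every participant an equal chance of each condition; a quasi-experiment is precisely one where this step is missing.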
If your study involves experiments

• Experimental design:
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

• Experimental data analysis:
Bruning, J. L., & Kintz, B. L. (1997). Computational handbook of statistics (4th ed.). New York: Longman.
Questionnaires & Surveys

• Self-report measures
– Questionnaires & surveys
– Interviews
– Diaries
• Types
– Structured
– Open-ended
Questionnaires & Surveys

• Advantages
– Sample large populations (cheap on materials & effort)
– Efficiently ask a lot of questions
• Disadvantages
– Self-report is fallible
– Response biases are unavoidable
Response biases
• Relying on people’s memory of events & behaviors
– Emotional states can “prime” memory
– Recency effects
– Routines are deceiving
• Social desirability
– Solution: none that are simple
• Yea-saying
– Solution: vary the direction of response alternatives
General Survey Biases

• Sampling – are respondents representative of the population of interest? How were they selected?
• Coverage – do all persons in the population have an equal chance of getting selected?
• Measurement – question wording & ordering can obstruct interpretation
• Non-response – people who respond differ from those that do not
Design is KEY

• Format – booklet, printed vertical, one-sided
• Question ordering – earlier questions can prime answers to later questions
• Page layout – group similar items & use consistent fonts and response categories
• Pre-testing – conduct think-alouds with volunteers demographically similar to expected participants
Common Problems

• Avoid complicated & double-barrelled questions
– Complexity increases errors & non-response
• Navigation is paramount – make sure the survey is EASY to follow
• Open-ended questions
– The size of the field allotted will determine the number of words
• Incentive is key
– BUT amount differences have little impact
If your study involves surveys

• Designing surveys:
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: Wiley & Sons.
Fowler, F. J. (1995). Improving survey questions: Design and evaluation. Thousand Oaks: Sage Publications.

• Analyzing data:
Cohen, J., Cohen, P., West, S., & Aiken, L. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
So… what?

• The difference between quantitative methods is in the questions they can answer
• There are a LOT of methods and even more statistical techniques
• Regardless of the method, if it’s not an experiment, you CANNOT prove causation
Exercise

• Going back to the previous exercise, rethink the aim of your project. Has it changed?
• If yes, what is the new aim?
• If not:
• Have a go at further refining your research method.
– What will be your research design/approach? (Survey, experiment, questionnaire, etc.)
– What will be the variables? (Dependent and independent variables)
– How will you validate the study and the variables used?
