Final Quantitative Research


QUANTITATIVE RESEARCH

Quantitative research explains phenomena by collecting numerical data that are analyzed using mathematically based methods.
Research Paradigms/Epistemology:
Realism or Positivist (Quantitative research)
 Uncover existing reality. The truth is “out there”, and it is the job of the researcher to
use objective research methods to uncover that truth.
 Researcher needs to be as detached from the research as possible, and use methods
that minimize the involvement of the researcher in the research.
 According to positivism, the world works according to fixed laws of cause and effect.
Subjectivism (Qualitative research)
 Reality is not “out there” to be objectively and dispassionately observed by us, but at
least in part constructed by us and by our observations. There is no pre-existing
objective reality that can be observed.
 Subjectivists are relativistic: all truth is relative and never definite.
Pragmatism (Mixed-method research)
 The research question matters more than the method: researchers use whatever approach
works best to answer it, which often means combining quantitative and qualitative methods.

ERRORS IN STATISTICAL DATA


The accuracy of a survey estimate refers to the closeness of the estimate to the true
population value. Where there is a discrepancy between the value of the survey estimate and
true population value, the difference between the two is referred to as the error of the survey
estimate. The total error of the survey estimate results from two types of error:
 sampling error, which arises when only a part of the population is used to represent the
whole population; and
 non-sampling error which can occur at any stage of a sample survey and can also occur
with censuses. Sampling error can be measured mathematically whereas measuring
non-sampling error can be difficult.

It is important for a researcher to be aware of these errors, in particular non-sampling
error, so that they can be either minimized or eliminated from the survey. An introduction to
measuring sampling error and the effects of non-sampling error is provided in the following
sections.
Inadequate sampling rates may also produce artifacts such as inversions in an apparent
cause-effect relationship. More generally, slow acquisition rates can distort data
interpretation and produce deceptive patterns, eventually leading to misinterpretations
(such as predators appearing to be prey).

Extraneous Variables- variables other than the manipulated variables that affect the results
of the study.
Example:
1. Subject Characteristics- subjects in the groups may differ in variables like age, gender,
SES, etc.
2. Maturation Effect- subjects maturing or changing over time
3. Testing Effect- the use of pretest in the intervention studies may create a “practice
effect”
4. Instrumentation Effect- nature of the instrument
5. Selection Effect/Bias- Sampling bias
6. Mortality or Sample Attrition- withdrawal of subjects from the experiment

Quantitative Research vs Qualitative Research

1. Philosophical Assumptions
   Quantitative: Post-positivist worldview (single reality)
   Qualitative: Subjectivist/constructivist worldview (multiple realities)
2. Principal orientation to the role of theory in research
   Quantitative: Deductive; theory verification
   Qualitative: Inductive; theory generation
3. Alternative research designs
   Quantitative: Experimental, correlational, causal-comparative, survey
   Qualitative: Narrative research, phenomenology, grounded theory, ethnography, case study
4. Data format
   Quantitative: Numerical (obtained by assigning numerical values to responses)
   Qualitative: Textual or words (obtained from audio, videotapes, and field notes)
5. Data analysis
   Quantitative: Statistical analysis (e.g., using SPSS)
   Qualitative: Thematic, discourse, or document analysis (e.g., using NVivo)
6. Data interpretation
   Quantitative: Statistical interpretation
   Qualitative: Interpretation of themes and patterns
7. Number of participants
   Quantitative: Requires many participants
   Qualitative: Requires few participants
8. Research method
   Quantitative: Rigid instruments and highly structured methods such as questionnaires,
   surveys, and structured observation (predetermined)
   Qualitative: Flexible instruments and semi-structured methods such as in-depth
   interviews, FGDs, and participant observation (emerging)
9. Sample problem
   Quantitative: A study examining the lack of personal interest among students in the
   Science subject. The main purpose of the study is to determine the correlation between
   students' out-of-school Science-related experiences and their level of interest in
   Science. This is a correlational study because students' interest is correlated with
   their level of science-related experiences.
   Qualitative: A study investigating the common difficulties of high school students in
   Biology, the possible reasons for these difficulties, and how teachers can help students
   overcome them. A qualitative case study method will be used, with in-depth interviews
   with teachers, classroom observation, and discussions with the teachers as the main data
   collection tools.

A standardized test is a test that is given in a consistent or “standard” manner.
Standardized tests are designed to have consistent questions, administration procedures, and
scoring procedures. The main benefit of standardized tests is that they are typically more
reliable and valid than non-standardized measures.
Instrument is the general term that researchers use for a measurement device (survey, test,
questionnaire, etc.). To help distinguish between instrument and instrumentation, consider
that the instrument is the device and instrumentation is the course of action (the process of
developing, testing, and using the device).
Instruments fall into two broad categories, researcher-completed and subject-completed,
distinguished by those instruments that researchers administer versus those that are
completed by participants. Researchers choose which type of instrument, or instruments, to
use based on the research question.

There are 3 qualities of a good research instrument. These qualities are: (1) validity, (2)
reliability and (3) usability.
Validity is the extent to which an instrument measures what it is supposed to measure and
performs as it is designed to perform. Validity tells you how accurately a method measures
something. The validity of a measuring instrument has to do with its soundness: the
effectiveness of what the test or questionnaire measures and how well it can be applied.
As a process, validation involves collecting and analyzing data to assess the accuracy of an
instrument. There are numerous statistical tests and measures to assess the validity of
quantitative instruments, and validation generally involves pilot testing.
4 Types of Validity
1. Content Validity. Content validity means the extent to which the content or topics of
the test are truly representative of the content of the course. Content validity is
established by the relevance of a test to different types of criteria, such as through
judgement and systematic examination of relevant course syllabi and textbooks, pooled
judgments of subject-matter experts, statements of behavioral objectives, etc. Thus,
content validity depends on the relevance of the individuals' responses to the behavior
area under consideration rather than on the apparent relevance of the item content. Is the
test fully representative of what it aims to measure?
2. Criterion/Concurrent validity. Concurrent validity is the degree to which the test
agrees or correlates with a criterion set up as an acceptable measure. For example: A
researcher wishes to validate a Mathematics achievement test he has constructed. He
administers this to a group of mathematics students and the result is correlated with an
acceptable mathematics test which has been previously proven as valid. Do the results
accurately measure the concrete outcome they are designed to measure? To evaluate
criterion validity, you calculate the correlation between the results of your
measurement and the results of the criterion measurement.
3. Predictive validity. Predictive validity is determined by showing how well predictions
made from the test are confirmed by evidence gathered at some subsequent time.
Example: Suppose the researcher wants to estimate how well a high school student
may be able to do in a college course on the basis of how well he has done on tests he
took in high school subjects. The criterion measure against which the test scores are
validated is obtained only after a long interval.
4. Construct validity. The construct validity of a test is the extent to which the test
measures a theoretical construct or trait. Construct validity asks: does the test measure
the concept that it is intended to measure? For example, does a questionnaire really
measure the construct of depression, or is it actually measuring the respondent's mood,
self-esteem, or some other construct? The questionnaire must include only relevant
questions that measure known indicators of depression.
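The concurrent-validity check described above (correlating a new test with an established criterion measure) can be sketched in Python; all scores here are invented for illustration:

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Invented scores: a new Mathematics test vs. an established, previously validated test
new_test = [55, 62, 70, 48, 81, 66, 74, 59]
criterion = [58, 60, 73, 50, 85, 64, 70, 57]

r = pearson_r(new_test, criterion)
print(round(r, 3))  # a high positive r supports concurrent validity
```

A correlation near 1 would indicate that the new test agrees closely with the accepted criterion measure.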
Reliability can be thought of as consistency. Does the instrument consistently measure what
it is intended to measure? It is not possible to calculate reliability exactly; instead, it
is estimated, and there are four general estimators that you may encounter in reading
research:
1. Split-half method. The test in this method may be administered once but the test
items are divided into halves. The common procedure is to divide a test into odd and
even items. The scores obtained in the two halves are correlated.
2. Test-Retest Reliability: The consistency of a measure evaluated over time. The same
instrument is administered twice to the same group of students and the correlation
coefficient is determined.
3. Parallel-Forms Reliability: The reliability of two tests constructed the same way,
from the same content. Parallel or equivalent forms of a test may be administered to
the group of subjects, and the paired observations are correlated. The correlation
between the scores obtained on paired observations of these two forms represents the
reliability coefficient of the test.
4. Internal Consistency Reliability: The consistency of results across items, often
measured with Cronbach’s Alpha.
a) K-R 20 & 21- used for dichotomously scored instruments
b) Cronbach’s alpha- used to measure internal consistency of Likert-scale instruments
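As a rough illustration of internal consistency, Cronbach's alpha can be computed directly from its definition (alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)); the Likert responses below are invented:

```python
import statistics

def cronbach_alpha(items):
    """items: one inner list of scores per item, with respondents
    in the same order across items."""
    k = len(items)
    item_vars = sum(statistics.pvariance(it) for it in items)
    totals = [sum(scores) for scores in zip(*items)]   # total score per respondent
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented 5-point Likert responses: 4 items x 6 respondents
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 1, 5],
    [3, 4, 3, 4, 2, 4],
]
print(round(cronbach_alpha(items), 3))  # values >= 0.70 are conventionally acceptable
```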
Usability. Usability means the degree to which the research instrument can be satisfactorily
used by teachers, researchers, etc. without undue expenditure of time, money, and effort. In
other words, usability means practicability.

Types of Sampling

Sampling is defined as the process of selecting certain members or a subset of the
population in order to make statistical inferences from them and to estimate characteristics
of the whole population. It is a time-convenient and cost-effective method and hence forms
the basis of any research design.

Advantages of Sampling
 Reduced cost & time- Instead of considering all, get only the subset of it. A
recommended minimum number of subjects is 100 for a descriptive study, 50 for a
correlational study, and 30 in each group for experimental and causal-comparative
studies.
 Reduced resource deployment- Using a sample reduces the time, money, and effort
required. Imagine the time, money, and effort saved between conducting research with a
population of millions versus conducting a study using a sample.
 Accuracy of data- Because the sample is indicative of the population, the larger the
sample, the more likely the sample mean and standard deviation will represent the
population mean and standard deviation.
 Intensive & exhaustive data- Since there are fewer respondents, the data collected
from a sample can be intensive and exhaustive. More time and effort is given to each
respondent rather than having to collect data from a large number of people.
 Apply properties to a larger population- Since the sample is indicative of the larger
population, it is safe to say that the data collected and analyzed can be applied to the
larger population and it would hold true.

Disadvantages of Sampling
 Sampling error (sample unrepresentative of its population)- As with all research
methods, sampling leaves room for error. The error here is that the information
gathered from a small sample may not be representative of the population studied and
cannot be generalized to that population.
 Sampling bias- The objective of selecting a sample is to achieve maximum accuracy of
estimation within a given sample size and to avoid bias in the selection of the sample.
This is important as bias can attack the integrity of facts and jeopardise your research
outcome.

There are two types of sampling techniques that are widely deployed:


A. Probability or random sampling
B. Non- probability or non- random sampling

A. Probability Sampling
Probability sampling means that every item in the population has a known, nonzero chance of
being included in the sample. This gives every member an opportunity to be part of the
sample. Probability or random sampling has the greatest freedom from bias
but may represent the most costly sample in terms of time and energy for a given level of
sampling error (Brown, 1947).

There are five primary types of probability sampling methods:


 Simple Random
 Systematic
 Stratified
 Cluster
 Multi-stage

A.1 Simple Random Sampling


The most widely known type of random sample is the simple random sample (SRS),
characterized by the fact that the probability of selection is the same for every case in
the population. Simple random sampling is a method of selecting n units from a population of
size N such that every possible sample of size n has an equal chance of being drawn.

Selection of members/elements by:


 Lottery Method
 Table of Random Numbers
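The lottery method can be mimicked in software: Python's random.sample draws without replacement, giving every member of the frame the same chance of selection (the student names below are hypothetical):

```python
import random

# Hypothetical sampling frame of 500 students
population = [f"Student {i:03d}" for i in range(1, 501)]

# Draw a simple random sample of 30: each member equally likely, no repeats
sample = random.sample(population, k=30)
print(len(sample))
```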

Disadvantages associated with simple random sampling include (Ghauri and Gronhaug,
2005):
 A complete frame (a list of all units in the whole population) is needed;
 In some studies, such as surveys by personal interviews, the costs of obtaining the
sample can be high if the units are geographically widely scattered;
 It is not used if researchers wish to ensure that certain subgroups are present in the
sample in the same proportion as they are in the population. To do this, researchers
must engage in what is known as stratified sampling.

A.2 Systematic Sampling


Systematic sampling is a variant of simple random sampling that involves some listing
of elements; every nth element of the list is then drawn for inclusion in the sample. The
researcher begins with a list of the names of the members of the population, ideally in
random order.
For example, a researcher intends to collect a systematic sample of 100 people from a
population of 1,000. Each element of the population is numbered from 1 to 1,000, and every
10th individual is chosen to be part of the sample (total population / sample size =
1,000/100 = 10).

Creating such a sample includes three steps:


1. Divide the number of cases in the population by the desired sample size. In this
example, dividing 1,000 by 100 gives a value of 10.
2. Select a random number between one and the value attained in Step 1. In
this example, we choose a number between 1 and 10 - say we pick 7.
3. Starting with case number chosen in Step 2, take every tenth record (7, 17,
27, etc.).
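The three steps above can be sketched as follows, assuming a numbered frame of 1,000 cases and a desired sample of 100:

```python
import random

def systematic_sample(frame, n):
    """Take every k-th element, starting from a random point in the first interval."""
    k = len(frame) // n                    # Step 1: sampling interval
    start = random.randint(1, k)           # Step 2: random start between 1 and k
    # Step 3: take every k-th case from the starting point
    return [frame[i - 1] for i in range(start, len(frame) + 1, k)][:n]

frame = list(range(1, 1001))   # population numbered 1 to 1,000
sample = systematic_sample(frame, 100)
print(len(sample))  # 100 cases, e.g. 7, 17, 27, ... if the random start is 7
```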

A.3 Stratified Random Sampling


In this form of sampling, the population is first divided into two or more mutually
exclusive segments based on some categories of variables of interest in the research. It is
designed to organize the population into homogenous subsets before sampling, then drawing
a random sample within each subset.
With stratified random sampling, the population of N units is divided into subpopulations
of N1, N2, ..., NL units respectively. These subpopulations, called strata, are
non-overlapping, and together they comprise the whole of the population. When these have
been determined, a sample is drawn from each, with a separate draw for each of the different
strata. The sample sizes within the strata are denoted by n1, n2, ..., nL respectively. If
an SRS is taken within each stratum, then the whole sampling procedure is described as
stratified random sampling.
Suppose you were interested in investigating the link between the family of origin and
income and your particular interest is in comparing incomes of Hispanic and Non-Hispanic
respondents. For statistical reasons, you decide that you need at least 1,000 non-Hispanics
and 1,000 Hispanics. Hispanics comprise around 6 or 7% of the population. If you take a
simple random sample of all races that would be large enough to get you 1,000 Hispanics, the
sample size would be near 15,000, which would be far more expensive than a method that
yields a sample of 2,000. One strategy that would be more cost-effective would be to split the
population into Hispanics and non-Hispanics, then take a simple random sample within each
portion (Hispanic and non-Hispanic).
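A minimal sketch of this strategy, with an invented frame in which roughly 1 in 15 records is tagged Hispanic, draws a separate simple random sample from each stratum:

```python
import random

# Hypothetical frame: (id, group) records; the 1-in-15 split is invented
frame = [(i, "Hispanic" if i % 15 == 0 else "Non-Hispanic") for i in range(1, 30001)]

def stratified_sample(frame, sizes):
    """Partition the frame into strata, then draw an SRS within each stratum."""
    strata = {}
    for unit in frame:
        strata.setdefault(unit[1], []).append(unit)
    return {g: random.sample(members, sizes[g]) for g, members in strata.items()}

sample = stratified_sample(frame, {"Hispanic": 1000, "Non-Hispanic": 1000})
print({g: len(s) for g, s in sample.items()})
```

This yields the target 2,000 respondents directly, rather than the roughly 15,000 an unstratified SRS would need.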

A.4 Cluster Sampling


Cluster sampling is where the whole population is divided into clusters or groups.
Subsequently, a random sample of these clusters is taken, and all units within the selected
clusters are used in the final sample (Wilson, 2010). Clusters are identified and included
in a sample on the basis of defining demographic parameters such as age, location, sex,
etc., which makes it extremely easy for a survey creator to derive effective inferences from
the feedback.
Cluster sampling is advantageous for those researchers whose subjects are fragmented
over large geographical areas as it saves time and money (Davis, 2005). The stages to cluster
sampling can be summarized as follows:
 Choose cluster grouping for sampling frame, such as type of company or geographical
region
 Number each of the clusters
 Select sample using random sampling

Important things about cluster sampling:


1. Most large scale surveys are done using cluster sampling;
2. Clustering may be combined with stratification, typically by clustering within
strata;
3. In general, for a given sample size n cluster samples are less accurate than
the other types of sampling in the sense that the parameters you estimate
will have greater variability than an SRS, stratified random or systematic
sample.
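A minimal sketch, assuming a hypothetical frame of 50 schools: whole clusters are selected at random, and every student in a chosen cluster enters the sample:

```python
import random

random.seed(0)  # fixed seed so the invented frame is reproducible

# Hypothetical frame: 50 schools (clusters), each with 20-40 students
clusters = {
    f"School {c:02d}": [f"S{c:02d}-{s:02d}" for s in range(1, random.randint(21, 41))]
    for c in range(1, 51)
}

# Step 1-3: number the clusters, then randomly select whole clusters;
# every student in a chosen school becomes part of the final sample
chosen = random.sample(list(clusters), k=5)
sample = [student for school in chosen for student in clusters[school]]
print(len(chosen), len(sample))
```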

A.5 Multi-stage Sampling


It is a combination of several sampling techniques. Multi-stage sampling is a process of
moving from a broad to a narrow sample, using a step-by-step process (Ackoff, 1953).
For example, if a Filipino publisher of an automobile magazine were to conduct a survey,
it could simply take a random sample of automobile owners within the entire Filipino
population. Obviously, this is both expensive and time consuming. A cheaper alternative
would be to use multi-stage sampling. In essence, this would involve dividing the Philippines
into a number of geographical regions. Subsequently, some of these regions are chosen at
random, and then subdivisions are made, perhaps based on local authority areas. Next, some
of these are again chosen at random and then divided into smaller areas, such as towns or
cities. The main purpose of multi-stage sampling is to select samples which are concentrated
in a few geographical regions. Once again, this saves time and money.
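The region-to-town narrowing described above can be sketched with an invented three-level frame (the region, province, and town counts are all assumptions, not real administrative data):

```python
import random

# Hypothetical frame: 17 regions, each with 4 provinces of 6 towns
frame = {
    f"Region {r}": {
        f"Province {r}-{p}": [f"Town {r}-{p}-{t}" for t in range(1, 7)]
        for p in range(1, 5)
    }
    for r in range(1, 18)
}

regions = random.sample(list(frame), k=4)               # stage 1: pick regions
towns = []
for r in regions:
    for pv in random.sample(list(frame[r]), k=2):       # stage 2: pick provinces
        towns.extend(random.sample(frame[r][pv], k=3))  # stage 3: pick towns
print(len(regions), len(towns))  # 4 regions narrowed to 4 * 2 * 3 = 24 towns
```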

B. Non-probability Sampling
In some research scenarios, it is not possible to ensure that the sample will be selected
based on random selection. Non-probability sampling is based on a researcher’s judgement;
there is a possibility of bias in sample selection, which can distort the findings of the
study.
Nonetheless, this sampling technique is used because of its practicality. It can save time and
cost, and at the same time, it is a feasible method given the spread and features of a
population.

There are four primary types of non-probability sampling methods:


 Quota
 Snowball
 Convenience
 Purposive or Judgmental

B.1 Quota Sampling


Quota sampling is a non-random sampling technique in which participants are chosen
on the basis of predetermined characteristics so that the total sample will have the same
distribution of characteristics as the wider population (Davis, 2005).
The main reason directing quota sampling is the researcher’s ease of access to the
sample population. Similar to stratified sampling, a researcher needs to identify the
subgroups and their proportions as they are represented in the population. Then, the
researcher will select subjects based on his/ her convenience and judgement to fill each
subgroup. A researcher must be confident in using this method and firmly state the criteria for
selection of sample especially during results summarisation.
Quota sampling is designed to overcome the most obvious flaw of availability sampling.
Rather than taking just anyone, you set quotas to ensure that the sample you get represents
certain characteristics in proportion to their prevalence in the population. Note that for this
method, you have to know something about the characteristics of the population ahead of
time. Say you want to make sure you have a sample proportional to the population in terms of
gender: you have to know what percentage of the population is male and female, then collect
respondents until your sample matches those proportions.
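The quota-filling logic can be sketched as follows, assuming a known 49%/51% gender split and a target sample of 100 (all figures invented):

```python
import random

# Hypothetical quotas proportional to an assumed 49%/51% gender split, n = 100
quotas = {"male": 49, "female": 51}
filled = {g: 0 for g in quotas}

def accept(gender):
    """Take a walk-in respondent only while that group's quota is still open."""
    if filled[gender] < quotas[gender]:
        filled[gender] += 1
        return True
    return False

# Simulated stream of conveniently available respondents
while sum(filled.values()) < sum(quotas.values()):
    accept(random.choice(["male", "female"]))
print(filled)
```

Once a group's quota is full, further respondents from that group are turned away, so the final sample mirrors the assumed population proportions.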

B.2 Snowball Sampling


Snowball sampling is a method in which a researcher identifies one member of some
population of interest, speaks to him/her, then asks that person to identify others in the
population that the researcher might speak to. This person is then asked to refer the
researcher to yet another person, and so on. Snowball sampling is very good for cases where
members of a special population are difficult to locate.
For example, it would be extremely challenging to survey homeless people or illegal
immigrants. In such cases, using snowball sampling, researchers can track down a few members
of that particular category to interview, and results will be derived on that basis. This
sampling method is also implemented in situations where the topic is highly sensitive and
not openly discussed, such as surveys gathering information about HIV/AIDS. Not many victims
will readily respond to the questions, but researchers can contact people they might know or
volunteers associated with the cause to get in touch with the victims and collect
information.

B.3 Convenience Sampling


Convenience sampling is selecting participants because they are often readily and
easily available. Typically, convenience sampling tends to be a favored sampling technique
among students as it is inexpensive and an easy option compared to other sampling
techniques (Ackoff, 1953). Convenience sampling often helps to overcome many of the
limitations associated with research. For example, using friends or family as part of sample is
easier than targeting unknown individuals.
Using this sampling method, a researcher is free to include anyone who is readily available
within the outline of the research. The sample is selected based on the researcher's
preferences and the ease of reaching respondents. This sampling is easier to conduct and
less expensive. However, it has poor reliability due to its high incidence of bias.

B.4 Purposive or Judgmental Sampling


Purposive or judgmental sampling is a strategy in which particular settings persons or
events are selected deliberately in order to provide important information that cannot be
obtained from other choices (Maxwell, 1996). It is where the researcher includes cases or
participants in the sample because they believe that they warrant inclusion.
This sampling method is selected on the basis that members conform to certain
stipulated criteria. You may need to use your own judgement to select cases to answer
certain research questions. This sampling method is normally deployed if the sample
population is small and when the main objective is to choose cases that are informative to the
research topic selected. Purposive sampling is very useful in the early stages of an
exploratory study. One of the disadvantages of this technique is that the sample may have
characteristics different from population characteristics.
For instance, suppose researchers want to understand the thought process of people who
are interested in studying for their master’s degree. The selection criterion will be: “Are
you interested in studying for a Masters in …?”, and those who respond with a “No” will be
excluded from the sample.

Strengths and Weaknesses Associated with Each Sampling Technique

Simple Random
 Strengths: easily understood; results projectable
 Weaknesses: difficult to construct sampling frame; expensive; lower precision; no
assurance of representativeness
Systematic
 Strengths: can increase representativeness; easier to implement than simple random
sampling; sampling frame not always necessary
 Weaknesses: can decrease representativeness
Stratified
 Strengths: includes all important subpopulations; precision
 Weaknesses: difficult to select relevant stratification variables; not feasible to
stratify on many variables; expensive
Cluster
 Strengths: easy to implement; cost-effective
 Weaknesses: imprecise; difficult to compute and interpret results
Convenience
 Strengths: least expensive; least time-consuming; most convenient
 Weaknesses: selection bias; sample not representative; not recommended for descriptive or
causal research
Judgmental
 Strengths: low-cost; convenient; not time-consuming; ideal for exploratory research design
 Weaknesses: does not allow generalization; subjective
Quota
 Strengths: sample can be controlled for certain characteristics
 Weaknesses: selection bias; no assurance of representativeness
Snowball
 Strengths: can estimate rare characteristics
 Weaknesses: time-consuming

Qualitative vs Quantitative Data Differences

Qualitative data:
 are descriptions, types, and names that you assign to each observation
 describe a characteristic and do not involve a measurement process
 represent a QUALITY or attribute; categories that cannot be ordered
 use words
 Examples: blood type, religion, nationality
 typically displayed with bar charts

Quantitative data:
 use numerical data
 describe how much, how many, or how often
 represent quantity; can be ordered; can be continuous measurements on a scale or
discrete counts
 are measures or counts recorded using numbers
 Examples: weight, age, length of stay, diameter
 typically displayed with histograms and scatterplots

Nominal data are categories that do not have a natural order (from the Latin for “name”);
they are typically summarized using percentages.
An experiment is a data collection procedure that occurs under controlled conditions to
identify and understand causal relationships between variables. An experiment involves
researchers manipulating at least one independent variable (also called factors or inputs)
under controlled conditions and measuring the changes in the dependent variable (outcomes).
