Unit 5 - Identifying Variables (6 Files Merged)

1. The document discusses different types of variables that can be used in research, including independent, dependent, extraneous, and intervening variables from the viewpoint of causal relationships.
2. From the viewpoint of study design, it distinguishes between attribute variables like age and gender that cannot be manipulated, and active or experimental variables like teaching methods that can be.
3. In terms of unit of measurement, variables can be quantitative and continuous, quantitative and discrete, or qualitative and categorical with dichotomous or polytomous values.


Unit 5: Identifying Variables

1 / 22
Outline

1 Variables and Concepts

2 Types of Variable
Viewpoint of causal relationships
Viewpoint of study design
Viewpoint of unit measurement

3 Measurement Scales

2 / 22
Variables and Concepts

Research Journey

3 / 22
Variables and Concepts

What is a Variable?

In the process of formulating a research problem (quantitative research),


there are two important considerations:
► The use of concepts
► The construction of hypotheses
Concepts are highly subjective, as the understanding of them varies
from person to person. Therefore you have to make them as measurable
as possible by converting concepts into variables.
Variable: An image, perception or concept that is measurable.
► Variables take on different values, often expressed as numbers.
► Some variables cannot be measured directly, such as feelings, preferences,
values and sentiments, whose meaning may vary from person to person.
Examples:
►Age (years)
►Gender (male/female)
► Salary (in Dhs)
► Weight (in Kg)

4 / 22
Variables and Concepts

Difference between concepts and variables


Measurability is the main difference between a concept and a variable.
Concepts cannot be measured (e.g. satisfaction has a different meaning
to different people).
Variables can be measured (e.g. a person's weight in kg).

Concepts                                    Variables
Subjective impression                       Measurable, though the degree of precision
No uniformity as to its understanding       varies from scale to scale and from
among different people                      variable to variable (e.g. attitude:
Cannot be measured                          subjective; income: objective)
Examples: effectiveness, satisfaction,      Examples: gender, income, age, weight,
self-esteem, high achiever, etc.            price, etc.

5 / 22
Variables and Concepts

Converting concepts into variables


If you use a concept in your study, you need to think of its operational-
ization, that is, how it will be measured.
To operationalize, you first have to identify indicators, which can then
be converted into variables.
An indicator is a set of criteria reflective of the concept.

Concepts ► Indicators ► Variables

Example:

Concept     Indicators               Variables                     Working definition
Rich/Poor   1. Income                Total income per year         Rich if income is > $200,000
            2. Value of all assets   Total value of home, cars,    Rich if total value of assets
                                     investments, etc.             is > $2,000,000
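A minimal sketch of how such working definitions can be turned into a measurable classification rule. The thresholds are the ones in the table above; the respondent records and the field names income and total_assets are hypothetical.

```python
def classify_wealth(income, total_assets):
    """Rich if yearly income > $200,000 or total asset value > $2,000,000."""
    if income > 200_000 or total_assets > 2_000_000:
        return "rich"
    return "poor"

# Invented respondent records for illustration only.
respondents = [
    {"income": 250_000, "total_assets": 500_000},
    {"income": 90_000, "total_assets": 2_500_000},
    {"income": 60_000, "total_assets": 300_000},
]

for r in respondents:
    print(r, "->", classify_wealth(r["income"], r["total_assets"]))
```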

6 / 22
Types of Variable

Types of Variable
Variables can be classified in a number of ways based on the causal rela-
tionship, study design and unit of measurement as shown below.

7 / 22
Types of Variable Viewpoint of causal relationships

From a viewpoint of causal relationships

Studies attempting to investigate a causal relationship or association may
involve four types of variables:
1 Independent variable (IV):

► A variable responsible for bringing changes in a phenomenon or a situa-


tion.
► A phenomenon that is manipulated by a researcher and is predicted to

have an effect on other phenomena (Williams & Monge, 2001).


► Examples: A teaching method, a medical treatment, or training regimen.

2 Dependent variable (DV):


► The outcome of the changes brought by an independent variable.
► A phenomenon that is affected by the researcher’s manipulation of
another phenomenon.
► Examples: Achievement is the effect of a teaching method; being cured
or not is the effect of a medical treatment; and a higher skill level or not
is the effect of a training regimen.

8 / 22
Types of Variable Viewpoint of causal relationships

From a viewpoint of causal relationships


3 Extraneous variables:
► Other variables, not measured in a study, may increase or decrease the
magnitude or strength of the relationship between IV and DV.
► Example: When investigating the effect of television watching (IV) on

achievement (DV), type of program is an extraneous variable.


4 Intervening (mediating) variable:
► A variable that connects or links the IV and the DV. This is a
situation where the relationship between the IV and DV cannot be established
without the intervention of another variable.
► Example: When studying the association between income and longevity,

access to medical care is an intervening variable.


[Diagram: Independent → Intervening → Dependent, with Extraneous variables
acting on the IV–DV relationship]

9 / 22
Types of Variable Viewpoint of causal relationships

Examples

[Worked examples shown as diagrams on slides 10–12]

12 / 22
Types of Variable Viewpoint of causal relationships

Exercise

List and label the variables in the following situations and illustrate by means
of diagrams the relationship among the variables.
1 A study suggested that elementary students who watch TV more than 3 hours
a day are more likely to be overweight than students who watch less TV.
2 People are de-motivated to consume alcohol when they know the consequence: that it
damages the liver and leads to liver cirrhosis. Perhaps behavioral therapy works
better for males and cognitive therapy works better for females.
3 Research suggests that children who eat a hot breakfast at home perform better
at school. Many argue that not only a hot breakfast but also parental care of
children before they go to school has an impact on children’s performance.
4 Lucy examined relationships between middle-school students’ self-esteem and
their performance in Mathematics. Her data analysis indicated that students
with higher self-esteem perform better than those with lower self-esteem. Her
investigation further revealed that students with higher self-esteem are more
willing to invest effort in solving mathematics problems.

13 / 22
Types of Variable Viewpoint of study design

From a viewpoint of study design

In controlled experiments the independent (cause) variable may be intro-


duced or manipulated either by the researcher or by someone else who is
providing the service. In these situations there are two sets of variables.
1 Attribute variables:

► Variables that cannot be manipulated, changed or controlled and that


reflect the characteristics of a study population.
► Examples: gender, age, level of motivation, nationality, education, etc.

2 Active variables:
► Variables that can be manipulated, changed or controlled in a designed
experiment. They are also called experimental variables.
► Examples: Teaching methods, temperature, product design, etc.

14 / 22
Types of Variable Viewpoint of unit measurement

From the viewpoint of unit measurement


1 Quantitative variables
► Variables have values that represent quantities and can be classified as
either discrete or continuous.
(a) Discrete: have numerical values that arise from a counting process (mea-
suring how many). For example, household size, number of sections,
number of accidents, etc.
(b) Continuous: produce numerical responses that arise from a measuring
process (measuring how much). For example, age, income, distance, etc.
2 Qualitative (Categorical) variables
► Variables that have values that can only be placed into categories and
can be classified as:
(a) Dichotomous variable: has two categories, e.g. yes/no, male/female,
good/bad.
(b) Polytomous variable: has more than two categories, e.g. marital status:
single, married, divorced, widowed.
Note: the measurement of income in dirhams and fils is the measurement
of a quantitative variable, whereas its subjective measurement in the
categories ’low’, ’middle’ and ’high’ makes it a qualitative variable.
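A minimal sketch of the note above, assuming pandas is available; the income values and the cut-off points are invented for illustration.

```python
import pandas as pd

# The same variable measured quantitatively (income in dirhams) and then
# reduced to the qualitative categories 'low', 'middle', 'high'.
income_dhs = pd.Series([4_500, 12_000, 23_000, 58_000, 110_000])

income_group = pd.cut(income_dhs,
                      bins=[0, 10_000, 50_000, float("inf")],
                      labels=["low", "middle", "high"])

print(income_dhs.mean())            # arithmetic is meaningful for the quantitative form
print(income_group.value_counts())  # only counting categories is meaningful for the qualitative form
```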
15 / 22
Measurement Scales

Types of measurement scales


The way a variable is measured determines the type of analysis that
can be performed, the statistical procedures that can be applied to the
data, the way the data can be interpreted and the findings that can be
communicated.
There are four types of measurement scale:
1 Nominal or classificatory scale
2 Ordinal or ranking scale
3 Interval scale
4 Ratio scale
The scales are defined based on whether a variable has the following
four characteristics: classification, order, distance and origin.
In practice, categorical or qualitative variables tend to be reported in
nominal and ordinal scales while quantitative variables are reported in
interval or ratio scales.
Higher levels of measurement generally yield more information and are
appropriate for more powerful statistical analysis.
16 / 22
Measurement Scales

Measurement Scales: Nominal Scale

A nominal scale enables classification of individuals, objects or re-


sponses into categories based on a common or shared property or char-
acteristic (No natural order between the categories).
A nonnumeric label or numeric code may be used. If we use numerical
symbols to identify categories, these numbers are recognized as labels
only and have no quantitative value.
The counting of members in each group is the only possible arithmetic
operation when a nominal scale is employed.
Nominal scales are the least powerful of the four data types (no order
or distance relationship and no arithmetic origin).
Examples:
► Gender (male, female)
► Marital status (single, married, divorced, widowed)
► Work sector (public, private)
► Get promoted (yes, no)
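A minimal sketch of the point about arithmetic on nominal data, assuming pandas; the sample responses are invented.

```python
import pandas as pd

# Nominal data are labels only, so counting members in each category is the
# only meaningful arithmetic operation.
work_sector = pd.Series(["public", "private", "private", "public", "private"])
print(work_sector.value_counts())   # e.g. private: 3, public: 2
```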

17 / 22
Measurement Scales

Measurement Scales: Ordinal Scale


An ordinal scale not only classifies observations into categories, it also
rank-orders the categories in some meaningful way.
The use of an ordinal scale implies a statement of “greater than” or
“less than”, without stating how much greater or less.
Ordinal scales tell us relative order, but give us no information re-
garding differences between the categories (Observations need not be
equidistant).
Examples:
► Job performance (excellent, good, fair, poor)
► Course grade (A, B+, B, C+, C, D+, D, F)
► Income level (low, medium, high)

► Satisfaction level (very dissatisfied, dissatisfied, neutral, satisfied, very

satisfied)
► Education level (less than high school, high school, some college, college,

postgraduate)
18 / 22
Measurement Scales

Measurement Scales: Interval Scale

The data have the properties of ordinal data, and in addition the differ-
ence between two observations is meaningful (the scaled distance between 1
and 2 equals the distance between 2 and 3).
The zero point is arbitrary and does not mean the absence of the
quantity that we are trying to measure. That is, there is no absolute
zero or natural origin.
Ratios are meaningless in this scale.
Researchers treat many attitude scales as interval.
Examples:
► Centigrade and Fahrenheit temperature scales:
Note that 0 °C means “cold,” not “no heat”; 40 °C is not twice as warm
as 20 °C.
► Calendar time

Note: Researchers treat many attitude scales (measured on a Likert
scale) as interval.
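A small check of why ratios are meaningless on an interval scale, using the temperature example above; the Celsius-to-Fahrenheit conversion is the standard formula.

```python
# 40 °C is not "twice as warm" as 20 °C: the zero point is arbitrary, so the
# ratio changes when the same temperatures are expressed in Fahrenheit.

def c_to_f(celsius):
    return celsius * 9 / 5 + 32

print(40 / 20)                   # 2.0 on the Celsius scale
print(c_to_f(40) / c_to_f(20))   # 104 / 68 ≈ 1.53 on the Fahrenheit scale
```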
19 / 22
Measurement Scales

Ratio Scale

The ratio data have all the properties of interval data and the ratio of
two values is meaningful.
Ratio scale contains an absolute zero or origin that indicates that noth-
ing exists for the variable at the zero point.
One can use all mathematical operations on this scale.
Ratio data represent the actual amounts of a variable.
Examples:
► In business and finance: salary, profit, age, price, etc.
► In pharmacy: concentration, drug dose, etc.
► In IT: installation time, CPU speed, download time, etc.
► In general: age, height, weight, distance, etc.

Because of the measurement precision at higher levels, more powerful


and sensitive statistical procedures can be used. When we collect in-
formation at higher levels, we can always convert, rescale, or reduce the
data to arrive at a lower level.
20 / 22
Measurement Scales

Scales: Summary

1 The nominal scale highlights the differences by classifying objects or


persons into groups.
2 The ordinal scale provides some additional info by rank-ordering the
categories of the nominal scale.
3 The interval scale not only ranks, but also provides us with information
on the magnitude of the differences in the variable.
4 The ratio scale indicates not only the magnitude of the differences, but
also the proportion.

Characteristics
Scale Classification Order Distance Origin
Nominal Yes No No No
Ordinal Yes Yes No No
Interval Yes Yes Yes No
Ratio Yes Yes Yes Yes

21 / 22
Measurement Scales

Exercise
Classify each of the following variables as either qualitative or quantitative,
active or attribute, and identify the level of measurement (nominal, ordinal,
interval, ratio).
1 Prices on the stock market.

2 Marital status, classified as “married” or “never married”.

3 Number of computers owned by a household.

4 Asking whether a patient is allergic to any medication.

5 Grades: A, B, C, D, or F.

6 Quality of medical care at a hospital.

7 Number of errors in a C++ program.

8 Grade point average from 0.0 to 4.0 in increments of 0.1

9 The number of hours you spent studying each day during the past

week.
10 The temperature in cities throughout UAE.

11 The birth weights of babies who were born at Tawam Hospital last

week.
22 / 22
Unit 6: Constructing Hypotheses

1 / 16
Outline

1 Definition of a hypothesis

2 Function of hypothesis

3 Types of hypotheses

4 Errors in testing hypotheses

2 / 16
Outline

Research Journey

3 / 16
Definition of a hypothesis

What is Hypothesis?

As a researcher you may not know much about a phenomenon, but you do
have a hunch that forms the basis of certain assumptions or guesses.
You can test these by collecting information that will enable you
to conclude if your hunch was right.
The verification process can have one of the three outcomes. Your
hunch may prove to be: right, partially right or wrong.
Without this process of verification, you cannot conclude anything
about the validity of your assumption.
Hence, a hypothesis is a hunch, assumption, suspicion, assertion
or an idea about a phenomenon, relationship or situation, the
reality or truth of which you do not know.
A researcher calls these assumptions/hunches hypotheses and they
become the basis of an enquiry.

4 / 16
Definition of a hypothesis

What is Hypothesis?
Definitions of a hypothesis:
► A proposition, condition, or principle which is assumed, perhaps
without belief, in order to draw out its logical consequences, and
to test its accord with facts which are known or may be deter-
mined.
► A proposition that is stated in a testable form and that predicts a

particular relationship between two or more variables.


► A hypothesis is written in such a way that it can be proven or

disproven by valid and reliable data.


► A hypothesis is a logical relationship between two or more vari-

ables expressed in the form of a testable statement.


From the above definitions, it is apparent that a hypothesis is:
► A tentative proposition that can be proven or disproven.
► Of unknown validity, hence reliable and valid data are needed.
► In most cases, it specifies a relationship between two or more

variables.
5 / 16
Function of hypothesis

Functions of Hypothesis
In most studies the hypothesis will be based either upon previous stud-
ies or on your own or someone else’s observations. The functions of a
hypothesis are:
1 It brings specificity and clarity to a research problem (though
hypotheses are not essential to a study).
2 The specificity and clarity required to construct a hypothesis ensure
that only the information needed is collected, thereby providing focus
to the study. This also enhances the validity of a study, as it
ensures measuring what the study sets out to measure.
3 As it provides a focus, the construction of a hypothesis enhances

objectivity in a study.
4 The testing of a hypothesis enables the researcher to specifically

conclude what is true or what is false, thereby, contributing to-


wards theory formulation.
6 / 16
Function of hypothesis

The process of testing a hypothesis


The process of hypothesis testing involves three phases:
1 Constructing a hypothesis.

2 Gathering appropriate evidence.

3 Analyzing the evidence to draw conclusions as to its validity (true or
false).

7 / 16
Function of hypothesis

Characteristics of Hypothesis
A hypothesis should be:
1 Simple, specific and conceptually clear.
2 Should be verifiable (Methods and techniques must be available
for data collection and analysis).
3 Should be related to the existing body of knowledge.
4 Should be measurable (operationalizable).
Examples:
► The average salary of accountants in Dubai is higher than that in
Al Ain.
► There will be no difference in the level of information literacy

among university students.


► Smoking causes lung cancer.

► Sales in a shop are greater on a Thursday than on other weekdays.

► More than 80% of Al Ain residents are satisfied with the provided

services by Al Ain Municipality.


8 / 16
Types of hypotheses

Formulating Hypotheses
1 Null hypothesis:
A null hypothesis (denoted by H0) is a claim (or statement) about
the population that is assumed to be true until it is declared false.
The null hypothesis will be rejected only if the sample data provide
substantial contradictory evidence.
In general, the null hypothesis is expressed as no (significant)
difference between groups or no relationship between the variables.
Examples:
► There will be no significant difference in the TOEFL examination
results among students of different programs.
► Customer services training of IT telephone support staff will not

lead to a significant improvement in users’ satisfaction feedback.


► There is no significant relationship between employee engagement

and employee loyalty.


9 / 16
Types of hypotheses

Formulating Hypotheses
2 Alternative (Alternate) hypothesis:
The alternative hypothesis (denoted by Ha or H1), is a claim
about the population that will be true if H0 is false.
In general, the alternative hypothesis is perceived as the Research
Hypothesis that you seek to validate through an inquiry.
The alternative hypothesis, the opposite of the null hypothesis, is
a statement expressing a relationship between two variables or
indicating differences between groups.
Examples:
► There will be significant difference in the TOEFL examination
results among students of different programs.
► Customer services training of IT telephone support staff will lead

to a significant improvement in users’ satisfaction feedback.


► There is a significant relationship between employee engagement

and employee loyalty.


10 / 16
Types of hypotheses

Types of Hypotheses
1 Hypothesis of no difference:
►It is a statement specifying that there is no difference between two
situations, groups, outcomes, or the prevalence of a condition or
phenomenon.
► Examples:

There will be no significant difference in the GPA among male and


female students at AAU.
There will be no significant relationship between gender and salary
of employees.
2 Hypothesis of Difference:
►It states that there will be a difference but does not specify its
magnitude.
► Examples:

A greater number of males than females are smokers in the study


population.
There is a significant difference in productivity between day and
night shifts at Al Ain Food and Beverages Company.
11 / 16
Types of hypotheses

Types of Hypotheses
3 Hypothesis of point-prevalence:
► It is a statement that speculates about the (almost) exact prevalence of a
situation or the outcome of a treatment program (a sketch of testing such a
hypothesis follows the examples below).
► Examples:

A total of 40% of females and 70% of males in the UAE workforce


are expatriates.
The prevalence of diabetes in UAE is less than 20%.
4 Hypothesis of Association:
► It states the extent of the relationship in terms of the effect of
different treatment groups on the dependent variable, or the
prevalence of a phenomenon in different populations.
► Examples:

There is a significant relationship between information literacy and


academic performance among students at AAU.
There is an inverse relationship between exercise and risk of coro-
nary heart disease.
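A minimal sketch of how a point-prevalence hypothesis such as "the prevalence of diabetes in the UAE is less than 20%" could be tested, assuming SciPy is available; the sample counts are invented.

```python
from scipy.stats import binomtest

# Hypothetical sample: 31 people with diabetes out of 200 surveyed.
# H0: prevalence >= 20%   vs.   Ha: prevalence < 20%
result = binomtest(k=31, n=200, p=0.20, alternative="less")
print(result.pvalue)   # small p-values would count as evidence for Ha
```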
12 / 16
Errors in testing hypotheses

Errors in testing Hypothesis

Incorrect conclusions about the validity of a hypothesis may be drawn


in cases of:
1 Faulty study design

2 Faulty sampling procedures

3 Inaccurate method of data collection

4 Wrong data analysis

5 Inappropriate statistical procedures

6 Incorrect conclusions

Any, some or all of these aspects of the research process could be


responsible for the inadvertent introduction of error into the study, making
the conclusions misleading.

13 / 16
Errors in testing hypotheses

Errors in Hypothesis Testing


When testing hypotheses, we realize that all we see is a random
sample. Therefore, because of sampling variability, our decision to
accept or to reject H0 may still be wrong.
Summary of the four different states in hypotheses testing:

                         Truth
Decision       H0 is True           H0 is False
Accept H0      Correct decision     Type II error
Reject H0      Type I error         Correct decision

Types of Errors:
► Type I error: rejection of a null hypothesis when it is true.
► Type II error: acceptance of a null hypothesis when it is false.
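A minimal simulation of a Type I error rate, assuming NumPy and SciPy; the data are generated so that H0 is actually true.

```python
import numpy as np
from scipy.stats import ttest_ind

# Both groups are drawn from the same distribution, so H0 (no difference)
# is true. At a 5% significance level we still expect to reject H0 (a
# Type I error) in roughly 5% of repeated samples.
rng = np.random.default_rng(0)
alpha = 0.05
rejections = 0
trials = 2_000

for _ in range(trials):
    a = rng.normal(loc=50, scale=10, size=30)
    b = rng.normal(loc=50, scale=10, size=30)
    if ttest_ind(a, b).pvalue < alpha:
        rejections += 1            # rejecting a true H0 = Type I error

print(rejections / trials)         # close to alpha (0.05)
```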
14 / 16
Errors in testing hypotheses

Example of type I and type II errors


A researcher wants to compare the effectiveness of two medications:
The null and alternative hypotheses are:
H0: The two medications are equally effective.
Ha: The two medications are not equally effective.
Description of type I and type II errors:
► A type I error occurs if the researcher rejects H0 and concludes
that the two medications are different when, in fact, they are not.
If the medications have the same effectiveness, the researcher may
not consider this error too severe because the patients still benefit
from the same level of effectiveness regardless of which medicine
they take.
► However, if a type II error occurs, the researcher fails to reject H0

when it should be rejected. That is, the researcher concludes that


the medications are the same when, in fact, they are different.
This error is potentially life-threatening if the less-effective
medication is sold to the public instead of the more effective one.
15 / 16
Errors in testing hypotheses

Hypothesis in Qualitative Research


One of the main differences between qualitative and quantitative re-
search is the extent to which hypotheses are used and the impor-
tance attached to them.
In qualitative research, because of the purpose of an investigation
and methods used to obtain information, hypotheses are not used
and almost no importance is given to them.
However, in quantitative research, their use is far more prevalent
though it varies markedly from one academic discipline to another
and from researcher to researcher.
On the whole it can be said that if the aim of a study is to explore
where very little is known, hypotheses are usually not formulated;
however, if a study aims to test an assertion by way of causality or
association, validate the prevalence of something or establish its
existence, hypotheses can be constructed.
16 / 16
Unit 7: The Research Design

1 / 22
Outline

1 What is a research design?

2 Types of Study Design


Designs based on the number of contacts
Designs based on the Reference Period
Designs based on the Nature of Investigation
Other Designs in Quantitative Research

3 Study Designs in Qualitative Research

2 / 22
Outline

Research Journey

3 / 22
What is a research design?

What is a research design?


Study design refers to the methodology that is used to investi-
gate a particular phenomenon or a situation.
► The research design is a blueprint or detailed plan for how a
research study is to be completed - operationalizing variables so
they can be measured, selecting a sample of interest to study,
collecting data to be used as a basis for testing hypotheses, and
analyzing the results.
It is a procedural plan that is adopted by the researcher to an-
swer questions validly, objectively, accurately and economically as
possible.
► Study design (Exploratory, Descriptive, Correlational, Explana-
tory)
► Sampling design (Sampling technique and sample size)

► Selecting data collection methods

► Selection analysis methods

4 / 22
What is a research design?

Functions of Research Design


The preceding definitions suggest that a research design has two
main functions:
1 Conceptualize an operational plan to undertake the various pro-
cedures and tasks required to complete a study
2 Ensure that these procedures are adequate to obtain valid, ob-
jective and accurate answers to the research questions.
A research design, therefore, should include the following:
► Name the study design per se, that is, ‘cross-sectional’, ‘before-
and-after’, etc.
► Provide detailed information about the following aspects of the

study:
Who will constitute the study population?
Will a sample or the whole population be selected?
If a sample is selected, how will it be contacted?
What method of data collection will be used and why?
How will ethical issues be taken care of?
5 / 22
Types of Study Design

Types of Study Design


The study designs are classified based on:
The number of contacts with the study population;
The reference period of the study;
The nature of the investigation

6 / 22
Types of Study Design Designs based on the number of contacts

Cross-sectional Design
Designs based on the number of contacts can be classified into 3 types:
1 Cross-sectional Design
Involves observations of a sample, or cross-section, of a population
or phenomenon that are made at one point in time.
Also known as one-shot or status studies and it is the most com-
monly used design in the social sciences.
Best suited to studies aimed at finding out the prevalence of a
phenomenon, situation, problem, attitude or issue, by taking a
cross-section of the population.
Extremely simple in design:
► Decide what you want to find out about.
► Identify study population.
► Select a sample.

► Contact your respondents to find out the required information.

7 / 22
Types of Study Design Designs based on the number of contacts

Cross-sectional Design
Advantage:
► Comparatively cheap to undertake and easy to analyze.
Disadvantage:
► Cannot measure change.
Examples:
► The attitudes of customers towards the facilities available in the
organization.
► The quality of services provided by registration staff at Al Ain

hospital.
► The relationship between the use of social media and the aca-

demic performance of students.


► The relationship between the home environment and the aca-

demic performance of a child at school.


► The attitudes of students towards the facilities available in their

library.
8 / 22
Types of Study Design Designs based on the number of contacts

Before-and-after Design
2 Before-and-after Design
Can be described as two sets of cross-sectional data collection
points on the same population to find out the change in the phe-
nomenon or variable(s) between two points in time.
Also known as the pre-test/post-test design.
Examples:
► The effect of advertisement on the sale of a product.
► The effectiveness of a diet program on weight.

9 / 22
Types of Study Design Designs based on the number of contacts

Before-and-after Design
Advantages:
► The main advantage is the ability to measure change in a phe-
nomenon or to assess the impact of an intervention.
Disadvantages:
1 Expensive and time consuming
2 Attrition or changes in the study population
3 Because it measures total change, you cannot ascertain whether
independent or extraneous variables are responsible for producing
change in the dependent variable.
4 Changes in the study population may be because it is maturing
(Maturation effect).
5 The instrument itself educates the respondents (Reactive effect
of the instrument).
6 When you use a research instrument twice to gauge the attitude
of a population towards an issue, there may be a shift in attitude
between the two points of data collection (Regression effect).
10 / 22
Types of Study Design Designs based on the number of contacts

Longitudinal Design
3 Longitudinal Design
It is a design that helps us to determine the pattern of change in
relation to time (e.g. social and job mobility).
It is also useful when you need to collect factual information on a
continuing basis.
The study population is visited a number of times at regular inter-
vals, usually over a long period, to collect the required information.

11 / 22
Types of Study Design Designs based on the number of contacts

Longitudinal Design
Examples:
► National Longitudinal Survey of Youth (NLSY).
Advantages:
► The main advantage is that it allows the researcher to measure
the pattern of change and obtain factual information, requiring
collection on a regular or continuing basis.
Disadvantages:
► The same as for before-and-after studies, in some instances to an even
greater degree.
► Longitudinal studies can suffer from the conditioning effect (this

describes a situation where, if the same respondents are contacted


frequently, they begin to know what is expected of them and may
respond to questions without thought, or they may lose interest
in the enquiry, with the same result.)

12 / 22
Types of Study Design Designs based on the Reference Period

Retrospective Studies
Designs based on the reference period can be classified into 3 types:
1 Retrospective Studies
Investigate a phenomenon, situation, problem or issue that has
happened in the past.
Usually conducted either on the basis of the data available for
that period or on the basis of respondents’ recall of the situation.
Examples:
► Determine whether exposure to chemicals used in tire manufac-
turing is associated with an increased risk of death.
► A historical analysis of migration in the US.

13 / 22
Types of Study Design Designs based on the Reference Period

Prospective Studies
2 Prospective Studies
Refer to the likely frequency of a phenomenon, situation, prob-
lem, attitude or outcome in the future.
Such studies attempt to establish the outcome of an event or what
is likely to happen.
Examples:
► To investigate the effect of academic counseling services on stu-
dents’ performance.
► To identify the risk factors that lead to breast cancer among

women.

14 / 22
Types of Study Design Designs based on the Reference Period

Retrospective-prospective Studies
3 Retrospective-prospective Studies
Focus on the past trends in a phenomenon and study it into the
future.
Part of the data is collected retrospectively from the existing
records before the intervention is introduced and then the study
population is followed to ascertain the impact of the intervention.
Examples:
► The impact of incentives on the performance of workers.
► The impact of maternal health services on the infant mortality
rate.

15 / 22
Types of Study Design Designs based on the Nature of Investigation

Experimental/ Non-experimental Studies


Designs based on the nature of investigation can be classified into 3
types:
1 Experimental Study
► A study where a researcher uses an experiment to investigate a
relationship by starting from the cause to determine the effects.
► The researcher introduces the intervention that is assumed to

cause the change in a controlled or natural environment.


2 Non-experimental Study
► Starting with the effect to research the cause; a phenomenon is
known and the researcher attempts to establish what caused it.
3 Quasi- or semi-experimental Study
► This design has elements of both experimental and non-experimental
studies. A part of the study could be experimental and the other
non-experimental.

16 / 22
Types of Study Design Designs based on the Nature of Investigation

Experimental/ Non-experimental Studies

17 / 22
Types of Study Design Designs based on the Nature of Investigation

Experimental/ Non-experimental Studies


Some issues to understand about Experimental Design:
Controlled or Natural Environment
► In a controlled environment the study population is in a ’con-
trolled situation’ such as a laboratory or special room.
► In the natural environment the study population is exposed to an

intervention in its own environment.


Randomization
►In a Random Design, the experimental group or the control group
is not predetermined but randomly assigned.
► This means each and every individual of a study population has

an equal and independent chance of being assigned to an exper-


imental or control group.
► In a Non-Random Design, the experimental group or the control
group is predetermined, and participants do not have an equal chance
of being assigned to the experimental or control group.
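A minimal sketch of the random assignment just described; the participant IDs and the 50/50 split into two groups are hypothetical.

```python
import random

# Shuffle the participant list, then split it in half, so every participant
# is equally likely to end up in either group.
participants = [f"P{i:02d}" for i in range(1, 21)]
random.shuffle(participants)

experimental = participants[:10]
control = participants[10:]
print("Experimental group:", experimental)
print("Control group:     ", control)
```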
18 / 22
Types of Study Design Other Designs in Quantitative Research

Other Designs in Quantitative Research

1 Cross-over comparative experimental design:


In this design the experimental group becomes the control group and
vice versa.
2 Replicated cross-sectional design:
In this design you select participants at different phases of the
program to form the basis of the study.
3 Blind studies (Placebo design):
The population does not know whether it is getting real or fake
treatment. The main objective of designing a blind study is to
isolate the placebo effect.
4 Double-blind studies:
Neither the researcher nor the participants know who is receiving
real or fake treatment.

19 / 22
Types of Study Design Other Designs in Quantitative Research

Other Designs in Quantitative Research


5 Trend studies:
To trace changes over a period of time. Enables you to find out
what has happened in the past, what is happening now and what
is likely to happen in the future.
6 Cohort studies:
Based on a common characteristic, such as year of birth, grad-
uation or marriage, within a subgroup of a population that a re-
searcher wants to study.
7 Panel studies:
Trend, cohort and panel studies are similar except that panel stud-
ies are longitudinal and prospective in nature and collect informa-
tion from the same respondents. In the trend and cohort studies
the information can be collected in a cross-sectional manner and
the observation points can be retrospectively constructed.
20 / 22
Study Designs in Qualitative Research

Designs in Qualitative Research


1 Case studies:
In-depth exploration of a single case of a particular event, group,
instance, etc.
2 Oral history:
Obtaining, recording, presenting and interpreting information in
someone’s own words.
3 Focus groups/group interviews:
Facilitated group interviews of an open discussion of a topic.
4 Participant observation:
Researcher gets involved in a social interaction and observes the
situation first hand.
5 Community discussion forums:
Large group discussion.
6 Reflective journal log:
Diary of the researcher’s thoughts.
21 / 22
Study Designs in Qualitative Research

Exercise: Conceptualizing a study design


Answers to the following questions will help you to develop your study
design
1 Is the design that you propose to adopt to conduct your study
cross-sectional, longitudinal, experimental or comparative in na-
ture? Why did you select this design?
2 What, in your opinion, are the strengths, weaknesses and limita-
tions of this design?
3 Who constitutes your study population?
4 Will you be able to identify each respondent in your study popu-
lation? If yes, how will they be identified? If no, how do you plan
to get in touch with them?
5 Do you plan to select a sample? Justify your decision.
6 How will you collect data from your respondents (e.g. interview,
questionnaire)? Why did you select this method of data collec-
tion? What, in your opinion, are its strengths and weaknesses?
22 / 22
Unit 8: Methods of Data Collection

Fall 2017 1 / 30
Outline

1 Data Collection Tools

2 Collecting Data Using Primary Sources


Observation
Interview
Questionnaire

3 Interview vs. Questionnaire

4 Formulating Questions

5 Collecting Data Using Secondary Sources

Fall 2017 2 / 30
Outline

Research Journey

Fall 2017 3 / 30
Data Collection Tools

Methods of Collecting Data


Most methods of data collection can be used in both qualitative
and quantitative research.
The distinction is mainly due to the restrictions imposed on flex-
ibility, structure, sequential order, depth and freedom that a re-
searcher has in their use during the research process.
Quantitative methods favour these restrictions whereas qualitative
ones advocate against them.
Major Sources of Information:
1 Primary data:
The researcher undertakes the collection of required data.
Provide first hand information.
2 Secondary data:
Information is already available and need only be extracted and
reanalyzed.
Examples: Use of census data, government records, Data from
articles, journals, books, magazines, etc.
Fall 2017 4 / 30
Data Collection Tools

Methods of Collecting Data

Fall 2017 5 / 30
Collecting Data Using Primary Sources Observation

Observation

Observation is a purposeful, systematic and selective way of


watching and listening to an interaction or phenomenon as it takes
place.
Observation is appropriate when you want to:
1 Learn about the interaction in a group.
2 Study the dietary patterns of a population.
3 Ascertain the functions performed by a worker.
4 Study the behaviour or personality traits of an individual.
In summary, observation is the most appropriate approach to col-
lect information:
► when you are more interested in the behaviour than in the perceptions
of individuals, or
► when subjects are so involved in the interaction that they are

unable to provide objective information


Fall 2017 6 / 30
Collecting Data Using Primary Sources Observation

Types of Observation
Classification based on observer participation:
1 Participant Observations:
► The researcher participates in the activities of the group being
observed in the same manner as its members, with or without
their knowing that they are being observed.
► Examples: Mystery shopper, ethnographic studies.
2 Non-participant Observations:
► The researcher does not get involved in the activities of the group
but remains a passive observer, watching and listening to its ac-
tivities and drawing conclusions from this.
► Example: Observing participants via one-way mirror or a camera.

Classification based on researcher intervention:


1 Natural: Observing a subject in its natural operation rather than
intervening in its activities.
2 Controlled: Introducing a stimulus to the group to react to and
observing the reaction.
Fall 2017 7 / 30
Collecting Data Using Primary Sources Observation

Problems with Observation


1 Hawthorne Effect: individuals or groups become aware that they
are being observed and change their behaviour - What is observed
may not represent their normal behavior.
2 Observer Bias: The observer uses his or her own subjective view
or disposition to interpret events in the setting being observed.
The observer may be unaware that she or he is doing this.
3 Variation in interpretation drawn from observation from observer
to observer.
4 Possibility of incomplete observation and/or recording.
5 Observer Error: The lack of understanding of, or overfamiliarity
with, the setting in which the observer is trying to operate as a
participant observer may lead the observer unintentionally to
misinterpret what is happening.
Fall 2017 8 / 30
Collecting Data Using Primary Sources Observation

Recording observations
1 Narrative: Making brief notes while observing the interaction and,
soon after the observation, making detailed notes in narrative form.
► Advantage: provides a deeper insight into the interaction
► Disadvantage: bias, or unable to record important points.
2 Scales: Using a scale in order to rate various aspects of the
interaction or phenomenon.
► Disadvantage:
Lack of in-depth information
Error of central tendency: avoiding extreme position on the scale.
Elevation effect: tendency to use a particular part of the scale.
Halo effect: rating an individual on one aspect influences the way
to rate him/her on another aspect.
3 Categorical Recording: Similar to scales and depend on classi-
fication developed by researcher; e.g. passive/active, etc.
4 Recording on Mechanical Devices: Observation recorded on a
video tape and then analyzed.
Fall 2017 9 / 30
Collecting Data Using Primary Sources Interview

Interview
An interview is a verbal interchange, often face to face, in which
an interviewer tries to elicit information, beliefs or opinions from
another person (Burns 1997).¹
Any person-to-person interaction, either face to face or otherwise,
between two or more individuals with a specific purpose in mind
is called an interview.
Most common method of collecting information from people.
Researcher has a flexibility to select format, content, wordings,
order of question, etc.
Interviews are classified into different categories according to the
degree of flexibility in process of asking questions:
1 Unstructured interviews
2 Structured interviews
1Burns, R. B. (1997). Introduction to Research Methods. Addison Wesley
Longman.

Fall 2017 10 / 30
Collecting Data Using Primary Sources Interview

Types of interview

Fall 2017 11 / 30
Collecting Data Using Primary Sources Interview

Types of interview
1 Unstructured interviews are used to gather data which are nor-
mally analyzed qualitatively. These data are likely to be used not only
to reveal and understand the ’what’ and the ’how’ but also to place
more emphasis on exploring the ’why’.
The researcher is free to order the questions in whatever sequence
he/she wishes; no “interview schedule”.
The researcher has complete freedom in terms of the wording used
and the way of explaining questions to the respondents.
The researcher may formulate questions and raise issues on the
spur of the moment, depending upon what occurs to them in the
context of the discussion.
Flexibility in interview structure, interview contents and interview
questions and their wording.
Common in qualitative research.
Fall 2017 12 / 30
Collecting Data Using Primary Sources Interview

Types of interview
2 Structured interviews can be used in survey research to gather
data, which will then be the subject of quantitative analysis.
The researcher asks a predetermined set of questions, using the
same wording and order of questions as specified in the interview
schedule.
► An interview schedule is a written list of questions, open ended or
closed, prepared for use by an interviewer.
► Note that an interview schedule is a research tool/instrument, whereas
interviewing is a method of data collection.


This method provides uniform information that ensures compara-
bility of data.
Rigidity in interview structure, interview contents and interview
questions and their wording.
Less interviewing skills are required compared to the unstructured
interviewing.
Fall 2017 13 / 30
Collecting Data Using Primary Sources Questionnaire

Questionnaires
The questionnaire is a written list of questions, the answers to
which are recorded by respondents after reading the questions
and interpreting what is expected.
Difference between an interview schedule and a questionnaire:
► Interview schedule: the interviewer asks the questions, and if nec-
essary, explains them, and records the respondent’s replies on an
interview schedule.
► Questionnaire: responses are recorded by the respondents them-

selves.
Qualities of a good questionnaire
► Questions clear and easy to understand.
► Layout easy to read, pleasant to eyes, and easy to follow.
► Developed in interactive style.

► Sensitive questions prefaced by an interactive statement explain-

ing the relevance of the questions.


Fall 2017 14 / 30
Collecting Data Using Primary Sources Questionnaire

Ways of Administering a Questionnaire

1 Mailed questionnaire:
► Posted to respondents who return them by post after completion.
► Usually it is a good idea to send a prepaid, self-addressed envelope
with the questionnaire.
► Must be accompanied by a covering letter.

► Major problems: low response rate.

2 Collective administration:
► Captive audience (people assembled at one place) like students in
a class or people attending a function, seminar etc.
► Ensures high response rate.

► Can explain the purpose, relevance and importance of the study

and clarify any questions that respondents may have.


► Saves money on postage.

Fall 2017 15 / 30
Collecting Data Using Primary Sources Questionnaire

Ways of Administering a Questionnaire

3 Administration in a public area:


► Administer a questionnaire in a public place such as a shopping
center, health center, hospital or school.
► Depends upon the type of study population you are looking for

and where it is likely to be found.


► Slightly more time consuming.

4 Web-based or Online questionnaires:


► Sent electronically using the Internet.
► Low response rate is expected but could be increased by sending
reminders.

Fall 2017 16 / 30
Interview vs. Questionnaire

Interview or questionnaire?
The choice between a questionnaire and an interview schedule is
important and should be considered thoroughly as the strengths
and weaknesses of the two methods can affect the validity of the
findings.
The nature of the investigation and the socioeconomic-demographic
characteristics of the study population are central in this choice.
The selection between an interview schedule and a questionnaire
should be based upon the following criteria:
► The nature of investigation: Sensitive questions, questionnaire
better.
► The geographical distribution of the study population: Respon-

dents scattered, use questionnaire - cheaper.


► Type of study population: Illiterate, very young or very old, or

handicapped - use interview schedule.


Fall 2017 17 / 30
Interview vs. Questionnaire

Advantages and disadvantages of a questionnaire


Advantages of a questionnaire
1 Less expensive
2 Offers greater anonymity
Disadvantages of a questionnaire
1 Application is limited - only literate people can participate.
2 Low response rate.
3 Self-selecting bias - not everyone returns questionnaire.
4 Opportunity to clarify issues is lacking.
5 Spontaneous responses are not allowed for - gives time to reflect
before answering.
6 Response to a question may be influenced by the response to
other questions - respondents may read all questions before an-
swering.
7 It is possible to consult others (mailed questionnaire)
8 A response cannot be supplemented with other information
Fall 2017 18 / 30
Interview vs. Questionnaire

Advantages and disadvantages of the interview


Advantages of the interview
1 More appropriate for complex situations
2 Useful for collecting in-depth information - probing is possible
3 Information can be supplemented - non-verbal reactions
4 Questions can be explained
5 Interviewing has a wider application - used with any population.
Disadvantages of the interview
1 Interviewing is time consuming and expensive - for geographically
scattered population.
2 The quality of data depends upon the quality of the interaction
between an interviewer and interviewee.
3 The quality of data depends upon the quality of the interviewer
- experience, skills, commitment etc.
4 The quality of data may vary when many interviewers are used.
5 The researcher may introduce his/her bias in framing and inter-
pretation of questions.
Fall 2017 19 / 30
Formulating Questions

Forms of question
Open ended questions:
Possible responses are not given, commonly used for seeking opin-
ions, attitudes and perceptions.
Respondents write down the answers in their own words. Investigators
record the answers either verbatim or in summary form.
Advantages:
► Provide in-depth and wealth of information.
► Provide opportunity for respondent to express their opinion, re-
sulting in more variety of information.
► Allow respondents to express themselves freely; eliminate the pos-

sibility of investigator bias.


Disadvantages:
► Analysis is more difficult (must do content analysis in order to
classify the data).
► Some respondents may not be able to express themselves, so

information may be lost.

► Greater chance of interviewer bias.

Fall 2017 20 / 30


Formulating Questions

Examples of Open-ended Questions

What is your year of birth?

How would you describe your current marital status? . . . . . . . . . .


What is your average monthly salary? $ . . . . . . . . . . . . . . . . . . . . . .
What was your high school experience like?
.............................................................
.............................................................
.............................................................
Please list up to three things you like about your job.
1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Fall 2017 21 / 30
Formulating Questions

Forms of question

Closed questions:
Possible answers are set out and the respondent or the investigator
ticks the category that best describes the respondent’s answer;
useful for eliciting factual information.
Category ‘Other/please explain’ to accommodate any response
not listed.
Advantages:
► Ready-made categories; help ensure info needed is obtained.
► Responses are easy to analyze.
Disadvantages:
► Lack depth and variety in information.
► Investigator bias - the investigator may list only the responses of his or her choice.

Fall 2017 22 / 30
Formulating Questions

Examples of Closed Questions


What is your nationality? Please tick ✓ the appropriate box

□ UAE □ GCC □ Arab □ Other

Please number each of the factors listed below in order of impor-


tance to you in choosing a new car. Number the most important
1, the next 2 and so on.

[ ] Carbon dioxide emissions [ ] Depreciation


[ ] Boot size [ ] Price

I feel employees’ views have influenced the decisions taken by


management.

□ Strongly agree    □ Agree    □ Disagree    □ Strongly disagree
Fall 2017 23 / 30
Formulating Questions

Formulating Effective Questions


1 Always use simple and everyday language.
► Example: What do you think of the organizational structure of
the company you work for?
2 Do not use ambiguous (more than one meaning) questions.
► Example: Are you satisfied with the Cafeteria?
3 Avoid double-barreled items (two questions in one).
► Example: Please rate your satisfaction with the amount and kind
of care you received while in the hospital.
4 Avoid leading questions (questions that imply an answer).
► Example: Most doctors believe that exercise is good for you. Do
you agree?
5 Do not ask questions based on presumptions.
► Example: How many cigarettes do you use in a day?

Fall 2017 24 / 30
Formulating Questions

More on Formulating Effective Questions

1 Avoid burdensome questions (tax memory or skills).


2 Avoid negative words and double negatives.
3 Cultural differences in meaning (Phrases or words have different
meanings to different population subgroups).
4 Avoid loaded questions (socially desirable answer or emotionally
charged).
5 Keep open-ended questions to a minimum.
6 Use mutually exclusive questions (use “Other” and “please spec-
ify” to allow for some flexibility)
7 Consider alternate ways to ask sensitive questions.
Important tip: Pretest the questionnaire.

Fall 2017 25 / 30
Formulating Questions

Construction of a Research Instrument


Most important since quality and validity of the output are solely
dependent upon it.
Ensure validity of your instrument by making sure that your ques-
tions relate to the objectives of your study.
Procedure to construct a research instrument:
Step 1: Clearly define and individually list all the specific objec-
tives, research questions or hypotheses to be tested.
Step 2: For each objective, research question or hypothesis, list all
the associated questions that you want to answer through
your study.
Step 3: Take each research question identified in Step 2 and list
the information required to answer it.
Step 4: Formulate questions to obtain information.
Fall 2017 26 / 30
Formulating Questions

Construction of a Research Instrument

Fall 2017 27 / 30
Formulating Questions

Personal and Sensitive Questions

Researchers sometimes ask sensitive questions in surveys. Re-


spondents are often hesitant to answer sensitive items, so item
nonresponse on these questions is normally higher than for other
questions in a survey.
It is recommended to ask personal or sensitive questions after es-
tablishing rapport with the respondent in the middle of the ques-
tionnaire or the interview.
Sensitive questions could be addressed in two ways:
► Direct method:
Advantage: accuracy of affirmative answer
Disadvantages: offend respondents, possibility of getting no-response
► Indirect method:
Advantage: Avoid offending respondents
Disadvantages: May not produce affirmative answer

Fall 2017 28 / 30
Formulating Questions

Order of Questions
The opening questions should be interesting, simple, and non-
threatening.
As a general guideline, basic information should be obtained first,
followed by classification, and, finally, identification information.
Difficult questions or questions which are sensitive, embarrassing,
complex, or dull, should be placed late in the sequence.
Two opinions regarding the order of questions:
► Random order: useful in situations where a researcher wants
respondents to express their agreement or disagreement with dif-
ferent aspects of an issue.
► Logical order: is better as it gradually leads respondents into the

themes of the study, starting with simple themes and progressing


to complex ones. It sustains the interest of respondents and
gradually stimulates them to answer the questions.
Fall 2017 29 / 30
Collecting Data Using Secondary Sources

Collecting Data Using Secondary Sources


Both qualitative and quantitative research studies use secondary
sources as a method of data collection.
► In quantitative research the information extracted is categorical
or numerical.
► In qualitative research you usually extract descriptive (historical

and current) and narrative information.


Categories of secondary sources:
► Government or semi-government publications (Statistics centers)
► Earlier research
► Personal records

► Mass media

Problems with Using Data from Secondary Sources:


► Validity and reliability: Varies from source to source
► Personal bias: Specific problem with personal records
► Availability of data

► Format: Different categories and classes

Fall 2017 30 / 30
Unit 9: Validity and Reliability of a Research Instrument

1 / 17
Outline

1 Validity
The Concept of Validity
Types of Validity

2 Reliability
The Concept of Reliability
Methods of Determining Reliability

2 / 17
Outline

Research Journey

3 / 17
Validity The Concept of Validity

The Concept of Validity


What is validity?
► Validity is the ability of a research instrument to measure what it
is designed to measure.
► Validity is defined as the degree to which the researcher has mea-

sured what he has set out to measure (Smith 1991, 106)


► The most common definition of validity is epitomized by the

question: Are we measuring what we think we are measuring?


(Kerlinger, 1973, 457)
► Extent to which an empirical measure adequately reflects the real

meaning of the concept under consideration (Babbie, 1989)


There are two perspectives on validity:
► Is the research investigation providing answers to the research
questions for which it was undertaken?
► If so, is it providing these answers using appropriate methods and

procedures?
4 / 17
Validity The Concept of Validity

The Concept of Validity: Key questions

1 Who decides whether an instrument is measuring what it is
supposed to measure?
The person who designed the study, the readership of the report
and experts in the field.
2 How can it be established that an instrument is measuring what
it is supposed to measure?
In the social sciences there appear to be two approaches:
► Logical approach: Providing justification of each question in re-
lation to the objectives of the study.
Easy if questions relate to tangible matters.
Difficult in situations where we are measuring attitude, effective-
ness, satisfaction etc.
► Statistical approach: By calculating the coefficient of correlations
between the questions and the outcome variables.

5 / 17
Validity Types of Validity

Types of Validity

6 / 17
Validity Types of Validity

Face and Content Validity


Content validity:
► Extent to which a measuring instrument covers a representative
sample of the domain of the aspects measured.
► Whether items and questions cover the full range of the issue or
problem being measured.
► Coverage of the issue should be balanced; that is, each aspect
should have similar and adequate representation in the questions.


Face validity:
► The extent to which a measuring instrument appears valid on its
surface.
► Each question or item on the research instrument must have a
logical link with the objective.
Problems:
► Judgement is based upon subjective logic.
► The extent to which questions reflect the objectives of a study
may differ.
7 / 17
Validity Types of Validity

Concurrent and Predictive Validity


Concurrent validity:
► It is judged by how well an instrument compares with a second
assessment done concurrently.
► It compares the findings of the instrument with those of another,
well-accepted instrument.
► Example: comparing a newly designed test to measure intelligence
with existing IQ tests.
Predictive validity:
► It is judged by the degree to which an instrument can forecast an
outcome.
► It cannot be used for all measures.
► Example: comparing SAT scores with GPA in college, or comparing
job performance with scores on an aptitude or ability test (see the
sketch below).
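In practice, both concurrent and predictive validity are usually summarized
as a correlation (a validity coefficient) between scores on the instrument
and scores on the criterion. A minimal Python sketch, using entirely
hypothetical test scores and job-performance ratings (the variable names
and data are invented for illustration):

import numpy as np

# Hypothetical data: scores on a new aptitude test and later job-performance ratings
new_test_scores = np.array([62, 75, 58, 90, 71, 84, 66, 79])
job_performance = np.array([3.1, 3.8, 2.9, 4.6, 3.5, 4.2, 3.2, 4.0])

# The validity coefficient is the Pearson correlation between instrument and criterion;
# the closer it is to 1, the better the instrument forecasts the outcome
r = np.corrcoef(new_test_scores, job_performance)[0, 1]
print(f"Validity coefficient (Pearson r): {r:.2f}")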
8 / 17
Validity Types of Validity

Construct Validity

Assesses the extent to which a measuring instrument accurately
measures a theoretical construct (e.g. attitude scales, aptitude
and personality) it is designed to measure.
Measured by correlating performance on the test with performance
on a test for which construct validity has been determined.
Determined by ascertaining the contribution of each construct to
the total variance observed in a phenomenon using statistical pro-
cedures. The greater the variance attributable to the construct,
the higher the validity of the instrument.
Common statistical techniques used in establishing construct va-
lidity include correlational analysis and factor analysis.
Examples:
Job satisfaction, trust, customer loyalty, self-esteem, etc.
9 / 17
Validity Types of Validity

Types of Validity: Summary

Validity type: Description
Content: Does the measure adequately measure the concept?
Face: Do “experts” validate that the instrument measures what its
name suggests it measures?
Concurrent: Does the measure differentiate in a manner that helps
to predict a criterion variable currently?
Predictive: Does the measure differentiate individuals in a manner
that helps to predict a future criterion?
Construct: Does the instrument tap the concept as theorized?

10 / 17
Reliability The Concept of Reliability

The Concept of Reliability


What is reliability?
► The research tool is reliable if it is consistent, stable, predictable
and accurate when used repeatedly. The greater the degree of
consistency and stability in a research instrument, the greater the
reliability.
► A scale or test is reliable to the extent that repeat measurements

made by it under constant conditions will give the same result.


► Reliability is the degree of accuracy or precision in the
measurements made by a research instrument.


The concept of reliability can be looked at from two sides:
1 How reliable is an instrument?
Focus on the ability of an instrument to produce consistent mea-
surements.
2 How unreliable is it?
Focus on the degree of inconsistency in the measurements made
by an instrument.
11 / 17
Reliability The Concept of Reliability

Validity vs. Reliability


Reliability is a necessary contributor to validity but is not a suffi-
cient condition for validity.
► If a measure is not valid, it hardly matters that it is reliable,
because it does not measure what needs to be measured in order
to solve the research problem.
Example: When the bathroom scale measures you
► correctly (using a concurrent criterion such as a scale known to
be accurate), then it is both reliable and valid.
► consistently 2 kg too heavy, it is reliable (you get the same
result each time) but not valid (it does not allow you to draw
accurate conclusions about your weight).
► erratically from time to time, then it is not reliable, and therefore
cannot be valid.
In this context, reliability is not as valuable as validity, but it is
much easier to assess.
12 / 17
Reliability The Concept of Reliability

Validity vs. Reliability

13 / 17
Reliability The Concept of Reliability

Reliability
Factors affecting the reliability of a research instrument:
1 The wording of questions
2 The physical setting
3 The respondent’s mood
4 The interviewer’s mood
5 The nature of interaction
6 The regression effect of an instrument
Methods of determining the reliability in quantitative research:
There are a number of ways of determining the reliability of an
instrument and these can be classified as:
► External consistency
Test and retest
Parallel forms of the same test
► Internal consistency
The split-half technique
14 / 17
Reliability Methods of Determining Reliability

External Consistency Procedures


External consistency procedures compare findings from two indepen-
dent processes of data collection with each other as a means of verifying
the reliability of the measure. The two methods of doing this are as
follows:
1 Test/retest (repeatability test)

► An instrument is administered once, and then again, under the
same or similar conditions.
► The ratio between the test and retest scores is an indication of
the reliability of the instrument: the greater the value of the ratio,
the higher the reliability of the instrument.
► The greater the difference between the test scores or findings, the
greater the unreliability of the instrument.
► Advantage: it permits the instrument to be compared with itself.
► Disadvantage: a respondent may recall the responses that they
gave in the first round (overcome by increasing the time span
between the two tests); see the sketch below.
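The slide describes the ratio of test to retest scores; in practice,
consistency between the two administrations is also commonly reported as
a correlation. A minimal Python sketch with hypothetical scores from
eight respondents (data invented for illustration):

import numpy as np

# Hypothetical scores from the same eight respondents on two administrations
test_scores = np.array([14, 18, 22, 9, 16, 20, 12, 17])
retest_scores = np.array([15, 17, 21, 10, 15, 19, 13, 18])

# Test-retest reliability reported as the Pearson correlation between the two rounds;
# values near 1 indicate that the instrument produces stable, consistent measurements
reliability = np.corrcoef(test_scores, retest_scores)[0, 1]
print(f"Test-retest reliability: {reliability:.2f}")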
15 / 17
Reliability Methods of Determining Reliability

External Consistency Procedures

2 Parallel forms of the same test


► Two instruments intended to measure the same phenomenon are
constructed and administered to two similar populations.
► The results obtained from one test are compared with those from
the other. If the results are similar, then the instrument is reliable.
► Advantage: it does not suffer from the problem of recall, and a
time lapse between the two tests is not required.
► Disadvantages:
Need to construct two instruments instead of one.
Difficulty of constructing two instruments that are comparable in
their measurement of a phenomenon.
Difficulty of achieving comparability in the two population groups
and in the two conditions under which the tests are administered.

16 / 17
Reliability Methods of Determining Reliability

Internal Consistency Procedures


The idea behind internal consistency procedures is that items or ques-
tions measuring the same phenomenon, if they are reliable indicators,
should produce similar results irrespective of their number in an in-
strument. The following method is commonly used for measuring the
reliability of an instrument in this way:
1 Split-half technique

► To correlate half of the items with the other half in a research
instrument.
► Questions are divided in half in such a way that any two questions
intended to measure the same aspect fall into different halves.
► The scores obtained by administering the two halves are
correlated.
► Reliability is calculated from the correlation between the scores
obtained from the two halves (Cronbach’s alpha is also commonly
used); see the sketch below.
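A minimal Python sketch of the split-half idea, using hypothetical item
scores for five respondents on a six-item scale: it correlates the odd-
and even-numbered halves, applies the Spearman-Brown correction to
estimate the reliability of the full-length instrument, and computes
Cronbach's alpha for comparison.

import numpy as np

# Hypothetical item scores: rows = respondents, columns = six items of one scale
items = np.array([
    [4, 5, 4, 3, 4, 5],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 4],
    [3, 3, 2, 3, 3, 2],
    [4, 4, 5, 4, 5, 4],
])

# Split the items into two halves (odd- vs even-numbered items) and sum each half
half_a = items[:, 0::2].sum(axis=1)
half_b = items[:, 1::2].sum(axis=1)

# Correlate the two half-scores, then apply the Spearman-Brown correction
r_half = np.corrcoef(half_a, half_b)[0, 1]
split_half = 2 * r_half / (1 + r_half)

# Cronbach's alpha over all items (internal consistency of the whole scale)
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

print(f"Split-half reliability (Spearman-Brown): {split_half:.2f}")
print(f"Cronbach's alpha: {alpha:.2f}")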
17 / 17
Unit 10: Selecting a Sample

1 / 24
Outline

1 Sampling
Concept of Sampling
Sampling Terminology
Sampling Process

2 Types of Sampling
Random/Probability Sampling Designs
Non-random/non-probability Sampling Designs

3 Sampling in Qualitative Research

2 / 24
Outline

Research Journey

3 / 24
Sampling Concept of Sampling

The Concept of Sampling


Sampling is the process of selecting a few (a sample) from a
bigger group (the sampling population) to become the basis for
estimating or predicting the prevalence of an unknown piece of
information, situation or outcome regarding the bigger group.
The purpose of sampling is to draw conclusions about popula-
tions from samples, which enables us to determine a population’s
characteristics by directly observing only a portion (or sample) of
the population.

4 / 24
Sampling Concept of Sampling

The Concept of Sampling


Why Sampling?
1 Economy: requires fewer resources than a census.
2 Time: provides needed information quickly.
3 Population size: many populations about which inferences must
be made are quite large.
4 Accessibility: there are some populations that are so difficult to
get access to that only a sample can be used.
5 Destructive nature of the observation.
6 Accuracy: A sample may be more accurate than a census.
Advantages and disadvantages of sampling:
► Advantages: it saves time as well as financial and human re-
sources.
► Disadvantage: you do not find out the information about the
population’s characteristics of interest to you but only estimate
or predict them.
5 / 24
Sampling Sampling Terminology

Sampling Terminology
Population/Study population: The entire group of people or
subjects of interest that the researcher wishes to investigate.
► Example: If a researcher is interested in measuring the satisfaction
level of the blue-collar workers in a company, then all blue-collar
workers in the company make up the population.
Sample: A smaller set of cases a researcher selects from a larger
group and generalizes to the population.
► Example: If 50 members are selected from a population of 1000
blue-collar workers to study the desired outcome, then these 50
members form the sample for the study.
Sample size: The number of selected cases in the sample and is
usually denoted by the letter (n).
► Example: The sample size in the above example is n = 50.
Sampling design (sampling strategy): The method used to
select the sample.
6 / 24
Sampling Sampling Terminology

Sampling Terminology
Sampling unit/ sampling element: The name for a case or
single unit to be selected.
► Example (1): Each blue-collar worker in the organization will be
considered as an element.
► Example (2): If the sampling involves selecting some departments
randomly and then surveying the workers in these departments,
then each department is considered a sampling unit.
Sampling frame: The list of units composing a population from
which a sample is selected.
► Example: The list of all blue-collar workers in the company.
Sample statistic: A summary obtained from the sample.
► Example: The average satisfaction level of workers in the sample.
Population parameter: A summary of the entire population that
is estimated from a sample.
► Example: The mean satisfaction level of ALL workers in that
company.
7 / 24
Sampling Sampling Process

The Sampling Process


1 Define the population: Sampling begins with precisely defining
the target population. The target population must be defined in
terms of elements, geographical boundaries and time.
2 Determine the sampling frame: The sampling frame is a (physical)
representation of all the elements in the population from which the
sample is drawn.
3 Determine the sampling design: There are two major types of
sampling: probability sampling and non-probability sampling.
4 Determine the appropriate sample size: Several factors affect the
sample size, such as the precision desired, variability in the
population, cost and time constraints, and population size.
5 Execute the sampling process: Decisions with respect to the
target population, the sampling frame, the sampling technique, and
the sample size have to be implemented.
8 / 24
Sampling Sampling Process

Example

A satisfaction survey was conducted for a computer retailer in the UAE.
The objective of this survey was to improve internal operations and
thus to retain more customers. The survey was transactional in nature;
service satisfaction and several related variables were measured
following a service encounter (i.e., a visit to the retailer). Hence
customer feedback was obtained while the service experience was still
fresh. To obtain a representative sample of customers of the computer
retailer (the study population), every tenth person leaving the store
was approached during a one-week period (the sampling design). Trained
interviewers sent out with standardized questionnaires approached 732
customers leaving the stores (the sample size).

9 / 24
Sampling Sampling Process

Principles of Sampling
Principles of Sampling:
1 In a majority of cases of sampling there will be a difference be-
tween the sample statistics and the true population parameters,
which is attributable to the selection of the units in the sample.
2 The greater the sample size, the more accurate the estimate of
the true population parameter.
3 The greater the difference in the variable under study in a popu-
lation for a given sample size, the greater the difference between
the sample statistics and the true population parameter.
Factors affecting the inferences drawn from a sample:
1 The size of the sample: Large samples have more certainty than
those based on smaller ones.
2 The extent of variation in the sampling population: The greater
the variation in the population, the greater the uncertainty with
respect to its characteristics.
10 / 24
Types of Sampling

Types of Sampling

11 / 24
Types of Sampling Random/Probability Sampling Designs

Random/Probability Sampling Designs


Each element in the population has an equal and independent
chance of selection in the sample.
► Equal: means the probability of selection of each element in the
population is the same.
► Independent: means that the choice of one element is not de-

pendent upon the choice of another element in the sampling.


Advantages of random sampling:
1 Representative of the sampling population, the inferences drawn
from such samples can be generalized to the whole population.
2 Some statistical procedures can be applied only to data collected
from random samples.
Common random/probability sampling designs:
1 Simple Random Sampling
2 Stratified Sampling
3 Cluster Sampling
4 Systematic Sampling
12 / 24
Types of Sampling Random/Probability Sampling Designs

(1) Simple Random Sampling


Simple Random Sampling (SRS) is a method of probability
sampling in which every unit has an equal nonzero chance of being
selected. Each element in the population has a known and equal
probability of selection.
How to select an SRS?
1 Identify by a number all elements or sampling units in the popu-
lation.
2 Decide on the sample size (n).
3 Select (n) elements using either the fishbowl draw, a table of
random numbers or a computer program (see the sketch below).
Advantage: SRS is best when the generalizability of the findings to
the whole population is the main objective of the study.
Disadvantage: It could become quite expensive if the population is
very large and/or geographically dispersed.
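A minimal Python sketch of the three steps above, assuming a hypothetical
sampling frame of 1,000 blue-collar workers and a sample size of n = 50
(the worker IDs are invented for illustration):

import random

# Step 1: identify every element in the population by a number or ID (hypothetical frame)
sampling_frame = [f"worker_{i:04d}" for i in range(1, 1001)]

# Step 2: decide on the sample size (n)
n = 50

# Step 3: draw the sample with a computer program instead of a fishbowl draw;
# random.sample gives every element an equal chance of selection, without replacement
random.seed(42)  # fixed seed only to make the illustration reproducible
simple_random_sample = random.sample(sampling_frame, n)
print(simple_random_sample[:5])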
13 / 24
Types of Sampling Random/Probability Sampling Designs

(2) Stratified Random Sampling


A stratified random sample is obtained by dividing the popu-
lation elements into different subgroups (called strata) and then
selecting a simple random sample from each stratum.
► The population within a stratum is homogeneous with respect to
the variable of interest.
► The stratification variable should be clearly identifiable in the

study population (gender, college, etc.).


Types of stratified random sampling:
► Proportionate stratified sampling: the elements represented in the
sample from each stratum are proportionate to the size of the
respective strata (see the sketch below).
► Disproportionate stratified sampling: consideration is not given to
the size of the stratum.
A major objective of stratified sampling is to increase precision
without increasing cost.
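A minimal Python sketch of proportionate stratified sampling, using a
hypothetical population of 1,000 workers stratified by gender: each
stratum contributes to the sample in proportion to its size, and a simple
random sample is drawn within each stratum.

import random
from collections import defaultdict

# Hypothetical population with a clearly identifiable stratification variable (gender)
population = [{"id": i, "gender": "F" if i % 3 == 0 else "M"} for i in range(1, 1001)]
total_n = 100  # overall sample size

# Group the population elements into strata
strata = defaultdict(list)
for person in population:
    strata[person["gender"]].append(person)

# Proportionate allocation: each stratum's share of the sample matches its share of
# the population; a simple random sample is then drawn from each stratum
random.seed(1)
stratified_sample = []
for stratum, members in strata.items():
    n_stratum = round(total_n * len(members) / len(population))
    stratified_sample.extend(random.sample(members, n_stratum))

strata_sizes = {stratum: len(members) for stratum, members in strata.items()}
print("Strata sizes:", strata_sizes)            # 333 F and 667 M in this toy population
print("Sample drawn:", len(stratified_sample))  # 33 + 67 = 100 elements

With this toy population the allocation works out to 33 elements from the
smaller stratum and 67 from the larger one; disproportionate stratified
sampling would simply replace the proportional allocation with fixed
stratum sizes chosen by the researcher.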
14 / 24
Types of Sampling Random/Probability Sampling Designs

(3) Cluster Sampling


In cluster sampling, the study population is first divided into
mutually exclusive and collectively exhaustive groups, called clus-
ters. Then a random sample of clusters is selected using simple
random sampling technique.
Elements within a cluster should be as heterogeneous as possible,
but clusters themselves should be as homogeneous as possible.
Ideally, each cluster should be a small-scale representation of the
population.
Depending on the level of clustering, sampling may be done at
different levels (one- and two-stage selection are illustrated in
the sketch below):
► One-stage: all the elements in the selected clusters are included
in the sample.
► Two-stage: a sample of elements is drawn randomly from each
selected cluster.
► Multi-stage: sampling is carried out in stages using smaller and
smaller sampling units at each stage.
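A minimal Python sketch with hypothetical data (20 branch offices of 30
employees each): a simple random sample of clusters is drawn first, then
both one-stage and two-stage selection are shown.

import random

# Hypothetical population organised into mutually exclusive, collectively exhaustive
# clusters: 20 branch offices, each with 30 employees
clusters = {
    f"branch_{b:02d}": [f"branch_{b:02d}_emp_{e:02d}" for e in range(1, 31)]
    for b in range(1, 21)
}

random.seed(7)

# Select a simple random sample of clusters (here, 4 of the 20 branches)
selected = random.sample(list(clusters), 4)

# One-stage: every element of each selected cluster is included in the sample
one_stage = [emp for branch in selected for emp in clusters[branch]]

# Two-stage: a random sub-sample of elements is drawn from each selected cluster
two_stage = [emp for branch in selected for emp in random.sample(clusters[branch], 10)]

print(len(one_stage), len(two_stage))  # 120 elements vs 40 elements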


15 / 24
Types of Sampling Random/Probability Sampling Designs

(4) Systematic Sampling


Systematic sampling relies on arranging the target pop-
ulation according to some ordering scheme and then selecting
elements at regular intervals through that ordered list.
How to select a systematic sample?
1 Determine the width of the interval (k) as
k = total population (N) / sample size (n)
2 Then, from the first interval, using the SRS technique, one ele-
ment is selected.
3 Select the element occupying the same position in each subsequent interval.
A simple example would be to select every 10th name from the
telephone directory (an ’every 10th’ sample, also referred to as
’sampling with a skip of 10’).
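A minimal Python sketch of systematic sampling from a hypothetical ordered
list of 1,000 names with a desired sample size of 100 (so k = 10):

import random

# Hypothetical ordered sampling frame, e.g. names from a telephone directory
population = [f"name_{i:04d}" for i in range(1, 1001)]  # N = 1000
n = 100                                                 # desired sample size

# Step 1: width of the interval, k = N / n
k = len(population) // n  # k = 10

# Step 2: randomly select one element from within the first interval
random.seed(3)
start = random.randrange(k)

# Step 3: take the element in the same position from every subsequent interval
systematic_sample = population[start::k]
print(len(systematic_sample), systematic_sample[:3])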
16 / 24
Types of Sampling Non-random/non-probability Sampling Designs

Non-random/non-probability Sampling Designs


These are used when the number of elements in a population is
either unknown or cannot be individually identified (No sampling
frame). In such situations elements of the sample are selected on
the basis of personal judgment or convenience.
In non-probability sampling, since elements are chosen arbitrarily,
there is no way to estimate the probability of any one element
being included in the sample.
Non-probability sampling is more common in qualitative research.
Disadvantage: Generalizability is questionable.
Types of Non-random Sampling Designs
1 Quota sampling
2 Accidental or convenience sampling
3 Judgmental or purposive sampling
4 Snowball sampling
17 / 24
Types of Sampling Non-random/non-probability Sampling Designs

Non-random/non-probability Sampling Designs


1 Quota sampling
► The researcher is guided by some visible characteristic, such as
gender or race, of the study population.
► The sample is selected from a location convenient to the
researcher, and whenever a person with this visible relevant
characteristic is seen, that person is asked to participate in
the study.
► The process continues until the researcher has been able to
contact the required number of respondents (quota).


2 Accidental or convenience sampling
► Attempts to obtain a sample of convenient elements.
► Often, respondents are selected because they happen to be in the
right place at the right time.
► Examples: use of students in a class at a specific time, “people
on the street” interviews, mall-intercept interviews without
qualifying the respondents.
18 / 24
Types of Sampling Non-random/non-probability Sampling Designs

Non-random/non-probability Sampling Designs


3 Judgmental or purposive sampling
► Judgmental sampling is a form of convenience sampling in which
the population elements are selected based on the judgment of
the researcher.
► The researcher only goes to those people who in his opinion are
likely to have the required information and be willing to share it.


► Examples: Test markets, expert sampling.

4 Snowball sampling
► It is the process of selecting a sample using networks.
► An initial group of respondents is selected, usually at random.
After being interviewed, these respondents are asked to identify
others who belong to the target population of interest. Subse-
quent respondents are selected based on the referrals.
► This process continues until the required number of respondents
or a “saturation point” has been reached.


19 / 24
Types of Sampling Non-random/non-probability Sampling Designs

Calculation of sample Size


The size of the sample is important for testing a hypothesis or
establishing an association, but for other studies the general rule
is: the larger the sample size, the more accurate the results will be.
In qualitative research the question of sample size is less important,
as the main focus is to explore or describe a situation, issue,
process or phenomenon.
In quantitative research, and particularly for cause-and-effect
studies, you need to consider the following:
► Level of confidence
► Degree of accuracy
► Level of variation

In practice, the allocated budget determines the size of the sample,
and the challenge lies in selecting the elements so that they
effectively and adequately represent the sampling population.
20 / 24
Types of Sampling Non-random/non-probability Sampling Designs

Sampling in Quantitative Research


General rules for identifying sample size (Leedy and Ormrod, 2013)¹:

Sample Size Determination


For small populations (< 100), survey the entire population.
If the population size is around 500, sample 50% of the population.
If the population size is around 1500, sample 20% of the popula-
tion.
Beyond a certain point, say 5000 units, a sample of about 400 may
be adequate.
Note: These sample sizes are conservative, assuming 5% sampling error
and 95% confidence (see any online sample size calculator); a sketch of
the underlying calculation follows the reference below.
¹ Leedy, P. D., & Ormrod, J. E. (2013). Practical Research: Planning and
Design, Pearson New International Edition. Pearson Higher Ed.
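These rule-of-thumb figures are broadly in line with the standard
sample-size formula for proportions with a finite-population correction.
The Python sketch below assumes 95% confidence (z = 1.96), a 5% margin of
error and maximum variability (p = 0.5); it illustrates the calculation
only and is not the authors' own procedure.

import math

def sample_size(population_size, margin_of_error=0.05, z=1.96, p=0.5):
    """Required sample size for estimating a proportion, with a finite-population
    correction (defaults: 95% confidence, 5% sampling error, p = 0.5)."""
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)     # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population_size))  # finite correction

for N in (100, 500, 1500, 5000, 100000):
    print(N, sample_size(N))
# Prints roughly 80, 218, 306, 357 and 383: consistent with "about 400" for large populations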

21 / 24
Sampling in Qualitative Research

Sampling in Quantitative and Qualitative Research


In quantitative research you attempt to select a sample in such a
way that it is unbiased and represents the population from where it
is selected.
In qualitative research, a number of considerations may influence
the selection of a sample, such as:
► The ease of accessing the potential respondents.
► Personal judgment about the potential respondents.
► The need for typical or atypical individuals.

The purpose of sampling in quantitative research is to draw
inferences about the group from which you have selected the sample,
whereas in qualitative research it is designed either to gain in-
depth knowledge about a situation or to know as much as possible
about different aspects of an individual to provide insight into the
group.
22 / 24
Sampling in Qualitative Research

Sampling in Qualitative Research

In qualitative research the issue of sampling has little significance,
as the main aim of most qualitative inquiries is either to explore or
describe the diversity in a situation, phenomenon or issue.
Qualitative research does not attempt to either quantify or
determine the extent of this diversity.
To explore the diversity in qualitative research you need to reach
what is known as the ‘saturation point’ in terms of findings. For
instance, you go on interviewing or collecting information as long
as you keep discovering new information.
Saturation point: The point at which no new information is
obtained and redundancy is achieved.
Keep in mind that the ‘saturation point’ is a subjective judgment
which you, as the researcher, decide.

23 / 24
Sampling in Qualitative Research

Sampling in Qualitative Research

All non-probability sampling designs can also be used in qualitative
research, with two differences:
1 In quantitative studies you collect information from a
predetermined number of people but, in qualitative research, you do
not have a sample size in mind. Data collection based upon a
predetermined sample size as opposed to the saturation point
distinguishes their use in quantitative and qualitative research.
2 In quantitative research you are guided by your desire to select a
random sample, whereas in qualitative research you are guided by
your judgment as to who is likely to provide you with the ‘best’
information.

24 / 24
