This document defines and provides examples of key variable types in research: 1. Independent variables are inputs that influence dependent variables. Dependent variables are outcomes predicted by independent variables. 2. Moderating variables influence the strength of the relationship between independent and dependent variables. Mediating variables explain the mechanism of their relationship. 3. Categorical variables assign individuals to groups. Extraneous variables unintentionally influence relationships and should be controlled.

Lecture 9: Variables

Muhammad Azeem Qureshi


Variable - Definition
• A variable is a measurable representation of an
abstract construct.
• As abstract entities, constructs are not directly measurable,
and hence, we look for proxy measures called variables.

OR (in simple words)

• A variable is one that varies; it does not remain constant, fixed, or stagnant.

• Example: age, knowledge, FDI, temperature, etc.


Independent Variable
• An independent variable is the input variable.
• A variable that is expected to influence the dependent
variable in some way.

If a researcher is studying the relationship between two variables X and Y, and X is the independent variable, then X affects the other variable Y.

So the characteristics of independent variables are:

(a) It is the cause of change in other variables.
(b) With an independent variable, we are interested only in how it affects another variable, not in what affects it.
Independent Variable
An independent variable is also known as a
1. Predictor variable
2. Manipulated variable
3. Explanatory variable
4. Regressor
5. Controlled variable
Dependent Variable
• The dependent variable is response variable or output.
• A variable that is predicted and/or explained by other variables.
• A dependent variable is also known as a
1. Response variable
2. Criterion variable
3. Regressand
4. Measured variable
5. Responding variable
6. Explained variable
7. Outcome variable
8. Experimental variable
9. Output variable
Categorical Variable
• A categorical variable is a variable that can take on one of a limited, and usually fixed, number of possible values, thus assigning each individual to a particular group or "category."

• For example, people can be categorized as either male or female.

• A common categorical variable in consumer research is adoption, meaning the consumer either did or did not purchase a new product.
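A categorical variable like adoption can be coded numerically for analysis. The Python sketch below (consumer names and values are hypothetical) codes adoption as 1 for purchasers and 0 for non-purchasers, then groups individuals by category:

```python
# A minimal sketch: coding the categorical "adoption" variable from the
# example above. Names and values are hypothetical illustrations.
consumers = ["Ali", "Sara", "Omar", "Hina"]
adoption = {"Ali": 1, "Sara": 0, "Omar": 1, "Hina": 0}  # 1 = purchased, 0 = did not

# assign each individual to a group based on the category value
adopters = [c for c in consumers if adoption[c] == 1]
non_adopters = [c for c in consumers if adoption[c] == 0]
print(adopters)      # ['Ali', 'Omar']
print(non_adopters)  # ['Sara', 'Hina']
```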
Moderating Variable (M)
A moderating variable is one that has a strong contingent effect on the relationship between the independent and dependent variables.

That is, the presence of a third variable (the moderating variable) modifies the original relationship, affecting the direction and/or strength of the association between the independent (predictor) and the dependent (criterion) variable while measuring causality.

In other words, a moderator may increase the strength of a relationship, decrease the strength of a relationship, or change the direction of a relationship.
Moderating Variable (Continue…)
Example 1: For example, a strong relationship has been observed between
the quality of library facilities (X) and the performance of the students
(Y). Although this relationship is supposed to be true generally, it is
nevertheless contingent on the interest and inclination of the students. It
means that only those students who have the interest and inclination to
use the library will show improved performance in their studies.
In this relationship, interest and inclination is the moderating variable, i.e. it moderates the strength of the association between the X and Y variables.

Example 2: psychotherapy may reduce depression more for men than for
women, and so we would say that gender (M) moderates the causal
effect of psychotherapy (X) on depression (Y).
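One common statistical test for moderation is to include an interaction term (X × M) in a regression: a nonzero interaction coefficient means M changes the strength of the X–Y relationship. The Python sketch below uses simulated data (all numbers are invented for illustration) where X affects Y only when the moderator is present:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)            # predictor X, e.g. quality of library use
m = rng.integers(0, 2, size=n)    # moderator M, e.g. interest (0 = low, 1 = high)
# simulate moderation: X affects Y only when M = 1
y = 2.0 * x * m + rng.normal(size=n)

# fit y = b0 + b1*x + b2*m + b3*(x*m) by least squares
X = np.column_stack([np.ones(n), x, m, x * m])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# b[3] (the interaction) recovers the moderation effect (about 2),
# while b[1] (the effect of X alone) stays near zero
print(b[1], b[3])
```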
Mediating Variable
Mediators are more like translators that carry forward the influence of the independent variable on the dependent variable.
Mediator variables explain why or how an effect or relationship between variables occurs.
Mediation is indicated statistically when the relationship between two variables becomes nonsignificant once the mediating variable is introduced into the statistical model.
Mediating Variable (Continue….)
• Example: Research on aggression shows that provocations
produce aggression via anger.

• Application: In the above example, the independent variable is provocation, and the dependent variable is aggression. In other words, when you provoke someone, they aggress against you. But why does aggression occur? Does everyone aggress? Some research has shown that anger is the mediator variable. In other words, the provocation causes anger, and it's the anger that causes aggression. Thus, provocation (IV) causes anger (M), which causes aggression (DV).
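The provocation → anger → aggression chain can be illustrated with simulated data: regressing aggression on provocation alone shows a strong effect, but that effect vanishes once the mediator (anger) enters the model. A sketch in Python (all coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
provocation = rng.normal(size=n)                 # IV (X)
anger = 1.5 * provocation + rng.normal(size=n)   # mediator (M), caused by X
aggression = 2.0 * anger + rng.normal(size=n)    # DV (Y), caused only by M

ones = np.ones(n)
# Step 1: regress Y on X alone -- X appears to drive Y
b_total, *_ = np.linalg.lstsq(np.column_stack([ones, provocation]),
                              aggression, rcond=None)
# Step 2: regress Y on X and M -- X's direct effect vanishes
b_both, *_ = np.linalg.lstsq(np.column_stack([ones, provocation, anger]),
                             aggression, rcond=None)
print(b_total[1], b_both[1])  # large vs near zero
```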
Mediating Variable (Continue….)
• Example: Imaginary research on social class, education, frequency
of check-ups.

• Application: For example, we can look at the relationship between two variables: one's social class and the frequency of physical check-ups. In this case, age can be viewed as a moderator variable in terms of its effect on the strength of the relationship; for older people the relationship between social class and check-up frequency may be stronger, while for younger people it may be weaker. A mediator, on the other hand, explains why there is a relationship between social class and physical check-ups. So money can be the mediator, explaining why there is any relationship between the two variables: once the effect of the mediator variable (money) is removed, the relationship disappears.
Intervening Variables
An intervening variable is a factor that affects the observed phenomenon but cannot be seen, measured, or manipulated; its effect must be inferred from the effects of the independent and moderator variables on the observed phenomenon. Attitude, learning process, habit, and interest function as intervening variables.
Combined Example
Among students of the same age and intelligence, skill performance is directly related to the number of practice trials, particularly among boys but less directly among girls. In such a hypothesis the variables which must be considered are:
(i) Independent variable – number of practice trials.
(ii) Dependent variable – skill performance.
(iii) Moderator variable – sex.
(iv) Control variable – age, intelligence.
(v) Intervening variable – learning.
Combined Model
Diagram: Intelligence (independent variable) → Academic Achievement (mediating variable) → Earning Potential (dependent variable), with Effort as the moderating variable acting on this chain.
Extraneous Variables
An almost infinite number of extraneous variables (EVs) exist that might affect a given relationship. Some can be treated as independent or moderating variables, but most must either be assumed or excluded from the study. Such variables have to be identified by the researcher. In order to identify the true relationship between the independent and the dependent variable, the effect of the extraneous variables may have to be controlled. This is necessary if we are conducting an experiment where the effect of the confounding factors has to be controlled. "Confounding factor" is another name used for an extraneous variable when it has a severe effect on the results of the study.
Extraneous Variables (Continue…)
• Extraneous variables are undesirable variables that influence the relationship between the variables that an experimenter is examining. Another way to think of this is that these are variables that influence the outcome of an experiment (the dependent variable), though they are not the variables that are actually of interest. These variables are undesirable because they add error to an experiment. A major goal in research design is to decrease or control the influence of extraneous variables as much as possible.
Extraneous Variables (Continue…)
• For example, let’s say that an educational psychologist has
developed a new learning strategy and is interested in examining
the effectiveness of this strategy. The experimenter randomly
assigns students to two groups. All of the students study text
materials on a biology topic for thirty minutes. One group uses the
new strategy and the other uses a strategy of their choice. Then all
students complete a test over the materials. One obvious
confounding variable in this case would be pre-knowledge of the
biology topic that was studied. This variable will most likely
influence student scores, regardless of which strategy they use.
Because of this extraneous variable (and surely others) there will be
some spread within each of the groups. It would be better, of
course, if all students came in with the exact same pre-knowledge.
Types of Extraneous Variables
1) Situational variables:
These are aspects of environment that might affect
the participants’ behavior.
Example: lighting conditions, noise, temperature, etc.
Standardized procedures need to be used to ensure
that the conditions are same for all participants.
Types of Extraneous Variables (Continue…)
2) Person/Participant variables:
This refers to the ways participants vary from one another, and how those differences could affect the results; for example mood, intelligence, anxiety, nerves, concentration, etc.
Example: if a participant who performed the memory test was tired, this could affect their performance and the results of the experiment.
Types of Extraneous Variables (Continue…)
3) Experimenter effect/bias:
The experimenter unconsciously conveys to participants how they should behave. This is called experimenter bias.

If an experimenter's presence, actions, or comments influence the subjects' behavior or sway the subjects to slant their answers to cooperate with the experimenter, the experiment has introduced experimenter bias.
Types of Extraneous Variables (Continue…)
Some unintentional clues may be given to the participants about what the experiment is about and how the researcher expects them to behave. This affects the participants' behavior.
Types of Extraneous Variables (Continue…)
4) Demand characteristics/participant bias:
Similar to the experimenter effect, demand characteristics are all the clues in an experiment which convey to the participant the purpose of the research or the researcher's expectations of the respondents.
The participant might then take on the role of a "good participant", instead of behaving normally as they would without such clues.
Types of Extraneous Variables (Continue…)
Example: divide participants into two groups.
Ask the first group to write about the common cold and flu.
Ask the second group to write about the common cold and flu and how they feel when they are down with the same.
The second group will write in more detail than the first group.
Types of Research Design
1) True-Experiment
A true experiment is one where participants can
be randomly assigned to the conditions of an
experiment. True experiments include
laboratory and field experiments.
In a true experiment, subjects are randomly
assigned to the treatment conditions (levels
of the independent variable).
Types of Research Design (Continue…)
The true experiment: attempts to establish cause & effect.

To be a true experiment, you must have BOTH manipulation of the IV AND random assignment (RA) of subjects/participants to groups.

1) Manipulation of the IV: manipulation of the IV occurs when the researcher has control over the variable itself and can make adjustments to that variable.

For example, if I examine the effects of Advil on headaches, I can manipulate the doses given, the strength of each pill, the time given, etc. But if I want to determine the effect of Advil on headaches in males vs females, can I manipulate gender? Is gender a true IV?
Types of Research Design (Continue…)
2) Random assignment: randomly placing participants into groups/conditions so that all participants have an equal chance of being assigned to any condition.

True experiments tend to be high on internal validity: it is clear what is being measured.
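Random assignment can be sketched in a few lines: shuffle the participant list, then deal participants into conditions so each has an equal chance of landing in any group. A minimal Python sketch (the helper name and subject labels are hypothetical):

```python
import random

def random_assignment(participants, conditions, seed=None):
    """Shuffle participants, then deal them round-robin into conditions,
    giving every participant an equal chance of any condition."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

subjects = [f"S{i}" for i in range(1, 21)]
groups = random_assignment(subjects, ["treatment", "control"], seed=42)
print(len(groups["treatment"]), len(groups["control"]))  # 10 10
```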
Types of Research Design (Continue…)
2) Quasi-experiment
A quasi (quasi means almost) experiment is not a true
experiment as the researcher is not able to randomly
allocate participants to different conditions of the
experiment. This is usually because the independent
variable is a quality of the participant.

For example, an experiment with an independent variable of gender is described as a quasi-experiment, as the researcher cannot randomly allocate participants to either the male or female condition.
Types of Research Design (Continue…)
Other examples of a quasi experiment include
testing people with autism and Down’s syndrome
for their ability to understand other people’s
emotions, or comparing older and younger
participants’ ability to solve word search puzzles.
In both of these examples the researcher cannot
randomly allocate participants to one of the two
conditions as the independent variable is a quality
of the participant.
Types of Research Design (Continue…)
You might have people who smoke one pack a day and people who smoke two packs a day, but you can't really assign them to these groups (is it ethical to make people who smoke one pack a day now smoke two?). You would then run your study, but when you draw conclusions, you can't make any cause-and-effect conclusions.
Types of Research Design (Continue…)
3) Correlation:
It attempts to determine how much of a relationship exists between variables. It cannot establish cause & effect.
To show the strength of a relationship we use the correlation coefficient (r).

The coefficient ranges from -1.0 to +1.0:
-1.0 = perfect negative/inverse correlation
+1.0 = perfect positive correlation
0.0 = no relationship

Positive correlation: as one variable increases or decreases, so does the other. Example: studying & test scores.
Negative correlation: as one variable increases or decreases, the other moves in the opposite direction. Example: as food intake decreases, hunger increases.
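The correlation coefficient can be computed directly from its definition. A small self-contained Python sketch, using made-up numbers that mirror the studying/test-scores and food-intake/hunger examples above:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient r, ranging from -1.0 to +1.0."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

hours_studied = [1, 2, 3, 4, 5]
test_scores = [52, 60, 65, 71, 80]   # rises with study time
food_intake = [5, 4, 3, 2, 1]
hunger = [1, 2, 3, 4, 5]             # moves opposite to intake

print(pearson_r(hours_studied, test_scores))  # close to +1 (positive correlation)
print(pearson_r(food_intake, hunger))         # exactly -1.0 (perfect inverse)
```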
THE BETWEEN vs WITHIN SUBJECTS DESIGN
1) Between-subjects design: in this type of design, each participant participates in one and only one group. The results from each group are then compared to each other to examine differences, and thus the effectiveness of the IV.
For example, in a study examining the effect of Bayer aspirin vs Tylenol on headaches, we can have 2 groups (those getting Bayer and those getting Tylenol). Participants get either Bayer OR Tylenol, but they do NOT get both.
THE BETWEEN vs WITHIN SUBJECTS DESIGN
(Continue…)
• 2) Within-subjects design: in this design, participants get all of the treatments/conditions.
For example, in the study presented above (Bayer vs Tylenol), each participant would get Bayer and have its effectiveness measured, and then get Tylenol and have its effectiveness measured.
VALIDITY
• Validity - does the test measure what we want
it to measure? If yes, then it is valid.
• For Example - does a stress inventory/test
actually measure the amount of stress in a
person's life and not something else.
RELIABILITY
Reliability – is the test consistent? If we get the same results over and over, then it is reliable.
For Example - an IQ test - probably won't change if you take it several
times. Thus, if it produces the same (or very, very similar) results each
time it is taken, then it is reliable.
However, a test can be reliable without being valid, so we must be
careful.
For example: the heavier your head, the smarter you are. If I weighed your head at the same time each day, once a day, for a week, it would be virtually the same weight each day. This means that the test is reliable. But do you think this test is valid (that it indeed measures your level of "smartness")? Probably NOT, and therefore it is not valid.
Ways to Check for Reliability
How do we check the reliability of measurement instruments, i.e. the stability of measures and the internal consistency of measures?

Two methods are discussed to check stability:

1. Stability
(a) Test–Retest
 Use the same instrument, administering the test shortly after the first time, taking the measurement in conditions as close to the original as possible, with the same participants.
 If there are few differences in scores between the two tests, then the instrument is stable. The instrument has shown test-retest reliability.
 Problems with this approach:
 Difficult to get cooperation a second time.
 Respondents may have learned from the first test, and thus responses are altered.
 Other factors may be present that alter results (environment, etc.).
(b) Equivalent Form Reliability
 This approach attempts to overcome some of the problems associated with the test-retest measurement of reliability.
 Two questionnaires, designed to measure the same thing, are administered to the same group on two separate occasions (the recommended interval is two weeks).
 If the scores obtained from these tests are correlated, then the instruments have equivalent form reliability.
 It is tough to create two distinct forms that are equivalent.
 It is an impractical method (as with test-retest) and not used often in applied research.
Validity
• Definition: whether what was intended to be measured was actually measured.
Face Validity
• The weakest form of validity
• Researcher simply looks at the measurement
instrument and concludes that it will measure what is
intended.
• Thus it is by definition subjective.

Content Validity

The degree to which the instrument items represent the universe of the concepts under study.
In English: did the measurement instrument cover all aspects of the topic at hand?
Criterion Related Validity
• The degree to which the measurement instrument
can predict a variable known as the criterion
variable.

• Two subcategories of criterion-related validity:
• Predictive Validity
– The ability of the test or measure to differentiate among individuals with reference to a future criterion.
– E.g. scores on an instrument which is supposed to measure an individual's aptitude can be compared with that individual's future job performance. Those who actually perform well should also have scored high on the aptitude test, and vice versa.
• Concurrent Validity
– Established when the scale discriminates among individuals who are known to be different; that is, they should score differently on the test.
– E.g. individuals who are content to remain on welfare and individuals who prefer to work must score differently on a scale/instrument which measures work ethic.
Construct Validity
• Does the measurement conform to some underlying theoretical expectations? If so, then the measure has construct validity.
• I.e. if we are measuring consumer attitudes about product purchases, then does the measure adhere to the constructs of consumer behavior theory?
• This is the territory of academic researchers.
• Two approaches are used to measure construct validity:
• Convergent Validity
– A high degree of correlation between two different measures intended to measure the same construct.
• Discriminant Validity
– A low degree of correlation among variables that are assumed to be different.
• Validity can be checked through correlation analysis, factor analysis, multitrait-multimethod matrix correlation, etc.
