Chapter 10 Multiple Regression

Multiple Linear Regression Analysis

Aim
Multiple regression is a statistical technique through which one
can analyze the relationship between a dependent or criterion
variable and a set of independent or predictor variables. As a
statistical tool, multiple regression is frequently used to
achieve three objectives.
1. To find the best prediction equation for a set of variables;
i.e., given X and Y (the predictors), what is Z (the criterion
variable)?
2. To control for confounding factors to evaluate the
contribution of a specific variable or set of variables, i.e.,
identifying independent relationships.
3. To find structural relationships and provide explanations for
seemingly complex multivariate relationships, such as is
done in path analysis.
Types of Multiple Regression Techniques
There are three major types of multiple regression
technique:
Standard (classical) multiple regression,
Hierarchical regression, and
Statistical (stepwise) regression.
They differ in terms of how the overlapping
variability due to correlated independent variables is
handled, and who determines the order of entry of
independent variables into the equation (Tabachnick
& Fidell, 1989).
Standard Multiple Regression
For this regression model, all the study’s
independent variables are entered into the
regression equation at once. Each independent
variable is then assessed in terms of the unique
amount of variance it accounts for. The
disadvantage of the standard regression model
is that it is possible for an independent variable
to be strongly related to the dependent
variable, and yet be considered an unimportant
predictor, if its unique contribution in
explaining the dependent variable is small.
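The chapter's examples use the SPSS menus; purely as a side illustration, the sketch below fits a standard (simultaneous-entry) model in Python with statsmodels. The data file name and its columns are hypothetical stand-ins for the study variables used later in this chapter.

import pandas as pd
import statsmodels.api as sm

# Hypothetical data file; RESPON, PROVOKE, SELFDEF, INSANITY stand in for
# the chapter's study variables.
df = pd.read_csv("domestic_violence.csv")

X = sm.add_constant(df[["PROVOKE", "SELFDEF", "INSANITY"]])  # all entered at once
y = df["RESPON"]

model = sm.OLS(y, X).fit()
# Each predictor's t test reflects only its unique contribution, which is why
# a predictor that correlates with the others can look unimportant here.
print(model.summary())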
Hierarchical Multiple Regression
This regression model is the most flexible, as it allows the researcher to
determine the order of entry of the independent variables into the regression
equation. Each independent variable is assessed at its own point of entry in
terms of the additional explanatory power it contributes to the equation. The
order of entry is normally dictated by logical or theoretical considerations.
For example, based on theoretical reasons, a researcher may decide that two
specific independent variables (from a set of independent variables) will be
the strongest predictors of the dependent variable. Thus, these two
independent variables will be accorded priority of entry, and their total
explanatory power (in terms of the total amount of variance explained)
evaluated. Then the less important independent variables are entered and
evaluated in terms of what they add to the explanation above and beyond that
afforded by the first two independent variables. It is also possible to take the
opposite tack in which less important independent variables are entered into
the equation first, to cull away “nuisance” variance. Then the important set of
independent variables is entered and evaluated in terms of what it adds to the
explanation of the dependent variable.
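A minimal sketch of hierarchical entry in Python (statsmodels), assuming the same hypothetical data file as above: block 1 enters the theoretically important predictors, block 2 adds the remaining one, and the change in R-square indexes the added explanatory power.

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("domestic_violence.csv")          # hypothetical file
y = df["RESPON"]

# Block 1: predictors accorded priority of entry
m1 = sm.OLS(y, sm.add_constant(df[["SELFDEF", "PROVOKE"]])).fit()
# Block 2: add the remaining predictor
m2 = sm.OLS(y, sm.add_constant(df[["SELFDEF", "PROVOKE", "INSANITY"]])).fit()

r2_change = m2.rsquared - m1.rsquared              # unique contribution of block 2
f_change = (r2_change / 1) / ((1 - m2.rsquared) / m2.df_resid)  # 1 predictor added
print(m1.rsquared, m2.rsquared, r2_change, f_change)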
Statistical (Stepwise) Regression
For this statistical regression model, the order of entry of predictor
variables is based solely on statistical criteria. Variables that
correlate most strongly with the dependent variable will be afforded
priority of entry, with no reference to theoretical considerations. The
disadvantage of this type of regression is that the statistical criteria
used for determining priority of entry may be specific to the sample
at hand. For another sample, the computed statistical criteria may be
different, resulting in a different order of entry for the same
variables. The statistical regression model is used primarily in
exploratory work, in which the researcher is unsure about the relative
predictive power of the study’s independent variables. Statistical
regression can be accomplished through one of three methods:
Forward selection,
Backward deletion, and
Stepwise regression.
Forward selection
In Forward selection, the variables are
evaluated against a set of statistical criteria,
and if they meet these criteria, are afforded
priority of entry based on their relative
correlations with the dependent variable. The
variable that correlates most strongly with the
dependent variable gets entered into the
equation first, and once in the equation, it
remains in the equation.
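As a sketch only (SPSS's FORWARD method uses an F-to-enter criterion; the p-value threshold below is a common approximation), forward selection can be expressed as a loop:

import pandas as pd
import statsmodels.api as sm

def forward_select(df, dv, candidates, p_enter=0.05):
    """Enter, one at a time, the candidate with the smallest p-value."""
    selected = []
    while True:
        remaining = [c for c in candidates if c not in selected]
        if not remaining:
            break
        # p-value of each candidate when added to the current equation
        pvals = {c: sm.OLS(df[dv], sm.add_constant(df[selected + [c]]))
                     .fit().pvalues[c]
                 for c in remaining}
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break                      # no remaining variable meets the criterion
        selected.append(best)          # once entered, a variable stays in
    return selected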
Backward deletion
In backward deletion, the equation starts
out with all the independent variables
entered. Each variable is then evaluated one
at a time, in terms of its contribution to the
regression equation. Those variables that
do not contribute significantly are deleted.
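The mirror-image sketch, under the same assumptions: start with every predictor entered, then repeatedly drop the variable with the weakest contribution.

import statsmodels.api as sm

def backward_delete(df, dv, candidates, p_remove=0.10):
    """Delete, one at a time, the variable with the largest p-value."""
    selected = list(candidates)
    while selected:
        fit = sm.OLS(df[dv], sm.add_constant(df[selected])).fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= p_remove:
            break                      # every variable still contributes
        selected.remove(worst)
    return selected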
Stepwise Regression
In stepwise regression, variables are
evaluated for entry into the equation under
both forward selection and backward deletion
criteria. That is, variables are entered one at a
time if they meet the statistical criteria, but
they may also be deleted at any step where
they no longer contribute significantly to the
regression model.
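Combining the two (again a sketch with hypothetical thresholds): each pass tries one forward entry, then checks whether any entered variable should be removed. Keeping p_remove above p_enter guards against a variable being entered and removed in the same pass.

import statsmodels.api as sm

def stepwise(df, dv, candidates, p_enter=0.05, p_remove=0.10):
    selected, changed = [], True
    while changed:
        changed = False
        remaining = [c for c in candidates if c not in selected]
        if remaining:                  # forward step: enter the best candidate
            pvals = {c: sm.OLS(df[dv], sm.add_constant(df[selected + [c]]))
                         .fit().pvalues[c]
                     for c in remaining}
            best = min(pvals, key=pvals.get)
            if pvals[best] < p_enter:
                selected.append(best)
                changed = True
        if selected:                   # backward step: drop a non-contributor
            pv = (sm.OLS(df[dv], sm.add_constant(df[selected]))
                    .fit().pvalues.drop("const"))
            worst = pv.idxmax()
            if pv[worst] > p_remove:
                selected.remove(worst)
                changed = True
    return selected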
Checklist of Requirements:
“Sample size”
The size of the sample has a direct impact on the statistical
power of the significance testing in multiple regression.
Power in multiple regression refers to the probability of detecting,
as statistically significant, a specific level of R-square or a
specific regression coefficient, at a given significance level and
sample size (Hair et al., 1995). Thus, for
a desired level of power and with a specified number of
independent variables, a certain sample size will be required
to detect a significant R-square at a specified significance
level (see Cohen and Cohen, 1983, for sample size
calculations). As a rule of thumb, there should be at least
20 times more cases than independent variables. That is,
if a study incorporates five independent variables, there
should be at least 100 cases.
Checklist of Requirements:
“Measurement of the variables”
The measurement of the variables can be either continuous
(metric) or dichotomous (nonmetric). When the dependent
variable is dichotomous (coded 0-1), binary logistic regression or
discriminant analysis is appropriate. When the independent
variables are discrete, with more than two categories, they must
be converted into a set of dichotomous variables by dummy
variable coding. A dummy variable is a dichotomous variable
that represents one category of a nonmetric independent variable.
Any nonmetric variable with K categories can be represented as
K-1 dummy variables, i.e., one dummy variable for each degree
of freedom. For example, an ethnicity variable may originally be
coded as a discrete variable with 1 for Australian, 2 for Chinese,
3 for European, and 4 for others. The variable can be converted
into a set of three dummy variables (Australian vs. non-
Australian, Chinese vs. non-Chinese, and European vs. non-
European), one for each degree of freedom. This new set of
dummy variables can be entered into the regression equation.
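A sketch of K – 1 dummy coding in Python (pandas), using the ethnicity example above; the toy data are invented for illustration.

import pandas as pd

df = pd.DataFrame({"ethnicity": [1, 2, 3, 4, 2, 1]})   # toy data
labels = {1: "Australian", 2: "Chinese", 3: "European", 4: "Other"}

dummies = pd.get_dummies(df["ethnicity"].map(labels), prefix="eth")
dummies = dummies.drop(columns=["eth_Other"])  # K - 1 dummies: "others" is baseline
print(dummies)  # eth_Australian, eth_Chinese, eth_European (0/1 columns)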
Assumptions
Linearity: As regression analysis is based on the concept of
correlation, the linearity of the relationship between dependent
and independent variables is crucial. Linearity can easily be
examined by residual plots. For nonlinear relationships,
corrective action to accommodate the curvilinear effects of one
or more independent variables can be taken to increase both the
predictive accuracy of the model and the validity of the
estimated coefficients.
Homoscedasticity: The assumption that the dependent variable
exhibits equal variance across the range of the predictor
variables. Violation of this assumption can be
detected by either residual plots or simple statistical tests.
SPSS provides the Levene Test for Homogeneity of Variance, which
measures the equality of variances for a single pair of variables.
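Outside SPSS, the same residual check can be sketched in a few lines of Python; the simulated data are purely illustrative. A random horizontal band of points suggests linearity and equal spread; curvature or a funnel shape suggests violations.

import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))                       # simulated predictors
y = 1.0 + x @ np.array([0.5, -0.3]) + rng.normal(size=200)
model = sm.OLS(y, sm.add_constant(x)).fit()

plt.scatter(model.fittedvalues, model.resid, s=10)  # residuals vs. fitted values
plt.axhline(0, linestyle="--")
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.show()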
Assumptions
Independence of error terms (autocorrelation): In
regression, it is assumed that the predicted value is not related to
any other prediction; i.e., each predicted value is independent.
Violation of this assumption can be detected by plotting the
residuals against sequence of cases (sequence chart). If the
residuals are independent, the pattern should appear random.
Violations will be indicated by a consistent pattern in the
residuals. SPSS provides the Durbin-Watson statistic as a test
for serial correlation of adjacent error terms; a significant result
indicates nonindependence of errors.
Normality of the error distribution: It is assumed that errors
of prediction (differences between the obtained and predicted
dependent variable scores) are normally distributed. Violation of
this assumption can be detected by constructing a histogram of
residuals, with a visual check to see whether the distribution
approximates the normal distribution.
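Both checks have one-line counterparts in Python's statsmodels (a sketch on simulated data): the Durbin-Watson statistic is near 2 when errors are independent, and a residual histogram provides the visual normality check.

import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 3))                        # simulated data
y = x @ np.array([0.4, -0.2, 0.3]) + rng.normal(size=200)
model = sm.OLS(y, sm.add_constant(x)).fit()

print("Durbin-Watson:", durbin_watson(model.resid))  # ~2 suggests independence
plt.hist(model.resid, bins=20)                       # eyeball check of normality
plt.xlabel("Residual")
plt.ylabel("Frequency")
plt.show()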
Multicollinearity
Multicollinearity refers to the situation in which the
independent/predictor variables are highly correlated. When
independent variables are multicollinear, there is “overlap” or
sharing of predictive power. This may lead to the paradoxical
effect, whereby the regression model fits the data well, but none
of the predictor variables has a significant impact in predicting
the dependent variable. This is because when the predictor
variables are highly correlated, they share essentially the same
information. Thus, together, they may explain a great deal of the
dependent variable, but may not individually contribute
significantly to the model. Thus, the impact of multicollinearity is
to reduce any individual independent variable’s predictive power
by the extent to which it is associated with the other independent
variables. That is, none of the predictor variables may contribute
uniquely and significantly to the prediction model after the others
are included.
Checking for multicollinearity
In SPSS, it is possible to request the display of “Tolerance” and “VIF”
values for each predictor as a check for multicollinearity. A tolerance value
can be described in this way. From a set of three predictor variables, use X1
as a dependent variable, and X2 and X3 as predictors. Compute the R-
square (the proportion of variance that X2 and X3 explain in X1), and then
take 1 – (R-square). Thus, the tolerance value is an indication of the
percentage of variance in the predictor that cannot be accounted for by the
other predictors. Hence, very small values indicate “overlap” or sharing of
predictive power (i.e., the predictor is redundant). Values that are less than
0.10 may merit further investigation. The VIF, which stands for Variance
Inflation Factor, is computed as “1/tolerance”, and it is suggested that
predictor variables whose VIF values are greater than 10 may merit further
investigation. Most multiple regression programs have default values for
tolerance that will not admit multicollinear variables. Another way to
handle the problem of multicollinearity is to either retain only one
variable to represent the multicollinear variables, or combine the
highly correlated variables into a single composite variable.
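The definition above translates directly into code. A minimal sketch: regress each predictor on the others, take tolerance = 1 – R-square, and VIF = 1/tolerance.

import pandas as pd
import statsmodels.api as sm

def tolerance_and_vif(X: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for col in X.columns:
        others = sm.add_constant(X.drop(columns=[col]))
        r2 = sm.OLS(X[col], others).fit().rsquared   # X_col explained by the rest
        rows.append({"predictor": col,
                     "tolerance": 1 - r2,
                     "VIF": 1 / (1 - r2)})
    return pd.DataFrame(rows)

# Rule of thumb from the text: tolerance < 0.10 or VIF > 10 merits investigation.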
Example 1 — Prediction Equation and Identification of
Independent Relationships (Forward Entry of Predictor Variables)
Assume that a researcher is interested in predicting the level of responsibility
attributed to the battered woman for her fatal action, from the scores obtained
from the three defense strategies of self-defense, provocation, and insanity.
Specifically, the researcher is interested in predicting the level of
responsibility attributed by a subject who strongly believes that the battered
woman’s action was motivated by self-defense and provocation (a score of 8
on both scales) and not by an impaired mental state (a score of 1 on the
insanity scale). Attribution of responsibility is measured on an 8-point scale,
with 1 = not at all responsible to 8 = entirely responsible. In addition to
predicting the level of responsibility attributed to the battered woman, the
researcher also wanted to identify the independent relationships between the
three defense strategies and the dependent variable of responsibility
attribution (coded RESPON). Multiple regression will be used to achieve
these two objectives. Prior to multiple regression, the three defense strategies
will be computed from the variables identified through factor analysis.
These three defense strategies are coded:
 PROVOKE: Provocation defense
 SELFDEF: Self-defense defense
 INSANITY: Insanity defense
SPSS INPUTS
1. To predict the level of responsibility attributed to the battered woman’s
action from the three defense strategies of PROVOKE, SELFDEF, and
INSANITY, click Analyze, then Regression, and then Linear… from
the menu bar. The following Linear Regression window will open. Notice
that the list of variables now contains the newly created variables of
PROVOKE, SELFDEF, and INSANITY.
2. Click (highlight) the variable of RESPON and then transfer this variable to
the Dependent field. Next, click (highlight) the three newly created
variables of PROVOKE, SELFDEF, and INSANITY, and then transfer
these variables to the Independent(s) field. In the Method field, select
FORWARD from the drop-down list as the method of entry for the three
independent (predictor) variables into the prediction equation.
3. Click Statistics to open the Linear Regression Statistics window. Check the
fields to obtain the statistics required. For this example, check the fields for
Estimates, Confidence intervals, Model fit, R squared change, and
Collinearity diagnostics. Click Continue when finished, and then click
OK.
[SPSS screenshots: the Linear Regression and Linear Regression: Statistics dialog boxes]
Results and Interpretation: Prediction Equation
The prediction equation is:
Y’ = A + B1X1 + B2X2 + ... + BnXn
where Y’ = the predicted dependent variable, A = constant, B =
unstandardized regression coefficient, and X= value of the
predictor variable. The relevant information for calculating the
predicted responsibility attribution is presented in the Coefficients
table. An examination of this table shows that all three predictor
variables were entered into the prediction equation (Model 3),
indicating that the defense strategies of self defense, insanity, and
provocation are significant predictors of the level of responsibility
attributed to the battered woman for her fatal action. To predict
the level of responsibility attributed from these three defense
strategies, use the values presented in the Unstandardized
Coefficients column for Model 3. Using the Constant and B
(unstandardized coefficient) values, the prediction equation
would be: Predicted responsibility attribution = 5.91 + (– 0.41 ×
SELFDEF) + (0.16 × INSANITY) + (– 0.13 × PROVOKE)
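For the scenario posed at the start of this example (SELFDEF = 8, PROVOKE = 8, INSANITY = 1), the equation gives: 5.91 + (– 0.41 × 8) + (0.16 × 1) + (– 0.13 × 8) = 5.91 – 3.28 + 0.16 – 1.04 = 1.75, a predicted attribution near the “not at all responsible” end of the 8-point scale.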
Evaluating the Strength of the Prediction
Equation
A measure of the strength of the computed prediction equation is R-square,
sometimes called the coefficient of determination. In the regression
model, R-square is the square of the correlation coefficient between Y, the
observed value of the dependent variable, and Y’, the predicted value of Y
from the fitted regression line. Thus, if for each subject, the researcher
computes the predicted responsibility attribution, and then calculates the
square of the correlation coefficient between predicted responsibility
attribution and observed responsibility attribution values, R-square is
obtained. If all the observations fall on the regression line, R-square is 1
(perfect linear relationship). An R-square of 0 indicates no linear relationship
between the predictor and dependent variables. To test the hypothesis of no
linear relationship between the predictor and dependent variables, i.e., R-
square = 0, the Analysis of Variance (ANOVA) is used. In this example,
the results from this test are presented in the ANOVA table. The F value
serves to test how well the regression model (Model 3) fits the data. If the
probability associated with the F statistic is small, the hypothesis that R-
square = 0 is rejected. For this example, the computed F statistic is 31.95,
with an observed significance level of less than 0.001. Thus, the hypothesis
that there is no linear relationship between the predictor and dependent
variables is rejected.
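For reference, the F statistic for this test can be written in terms of R-square as F = (R-square/k) / ((1 – R-square)/(n – k – 1)), where k is the number of predictors and n the sample size.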
Identifying Multicollinearity
When the predictor variables are correlated among themselves, the
unique contribution of each predictor variable is difficult to assess.
This is because of the overlapped or shared variance between the
predictor variables, i.e., they are multicollinear. For this example,
both the “tolerance” values (greater than 0.10) and the “VIF” values
(less than 10) are all quite acceptable (see Coefficients table). Thus,
multicollinearity does not seem to be a problem for this example.
Another way of assessing if there is too much multicollinearity in
the model is to look at the Collinearity Diagnostics table. The
condition index summarizes the findings, and a common rule of
thumb is that a condition index of over 10 (some authors suggest 15)
indicates a possible multicollinearity problem and a condition index
of over 30 suggests a serious multicollinearity problem. For this
example, multicollinearity is not a problem.
Identifying Independent Relationships
Once it has been established that multicollinearity is not a problem, multiple
regression can be used to assess the relative contribution (independent
relationship) of each predictor variable by controlling the effects of other predictor
variables in the prediction equation. The procedure can also be used to assess the
size and direction of the obtained independent relationships. In identifying
independent relationships, it is inappropriate to interpret the unstandardized
regression coefficients (B values in the Coefficients table) as indicators of the
relative importance of the predictor variables. This is because the B values are
based on the actual units of measurement, which may differ from variable to
variable, i.e., one variable may be measured on a five-point scale, whereas another
variable may be measured on an eight-point scale. When variables differ
substantially in units of measurement, the sheer magnitude of their coefficients
does not reveal anything about their relative importance. Only if all predictor
variables are measured in the same units are their coefficients directly comparable.
One way to make regression coefficients somewhat more comparable is to
calculate Beta weights, which are the coefficients of the predictor variables when
all variables are expressed in standardized form (z-score).
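For reference, each Beta weight can be recovered from its unstandardized counterpart as Beta = B × (SD of the predictor ÷ SD of the dependent variable), which is what expressing all variables as z-scores accomplishes.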
Beta Coefficients
In this example, the Beta weights (standardized regression coefficients)
for all three defense strategies are presented in the Coefficients table. The
size of the Beta weights indicates the strength of their independent
relationships. From the table, it can be seen that the strategy of self-defense
has the strongest relationship with responsibility attribution, whereas the
other two defense strategies — provocation and insanity — are weaker. The
direction of the coefficients also sheds light on the nature of the relationships.
Thus, for the defense strategies of self-defense and provocation, the negative
coefficients indicate that the more the subjects interpreted the battered
woman’s motive for her fatal action as being due to self-defense and
provocation, the less they held her responsible for her action (self-defense:
Beta = – 0.39, t = – 8.26, p < .001; provocation: Beta = – 0.12, t = – 2.46, p
< .05). Conversely, the positive coefficient associated with the insanity
variable shows that the more the subjects interpreted the battered woman’s
action as being due to an impaired mental state, the more they held the woman
responsible for her fatal action (Beta = 0.13, t = 2.84, p < .01).
Example 2 — Hierarchical Regression
Another way of assessing the relative importance of predictor variables is to consider
the increase in R-square when a variable is entered into an equation that already
contains the other predictor variables. The increase in R-square resulting from the entry
of a subsequent predictor variable indicates the amount of unique information in the
dependent variable that is accounted for by that variable, above and beyond what has
already been accounted for by the other predictor variables in the equation. Suppose that
the researcher is interested in comparing the relative importance of two sets of variables
in predicting responsibility attribution to the battered woman in the previous example.
Specifically, the researcher wants to assess the relative importance of a set of variables
comprising the three defense strategies (self-defense, provocation, and insanity) and a
set of variables comprising the subjects’ demographic characteristics (sex, age,
educational level, and income) in predicting responsibility attribution. This task can be
accomplished by the use of the hierarchical regression procedure. For this model, the
researcher determines the order of entry for the two sets of predictor variables. Assume
that the researcher believes that the subjects’ demographics would be less strongly
related to the dependent variable than the set of defense strategies. On the basis of this
assumption, the researcher accords priority of entry into the prediction equation to the
set of demographic variables, followed by the set of defense strategies. This order of
entry will assess (1) the importance of the demographic variables in predicting the
dependent variable of responsibility attribution, and (2) the amount of unique
information in the dependent variable that is accounted for by the three defense
strategies.
SPSS INPUTS
1. From the menu bar, click Analyze, then Regression, and then Linear…. The following
Linear Regression window will open.
2. Click (highlight) the variable of RESPON and then transfer this variable to the
Dependent field. Because the set of demographic variables (SEX, AGE,
EDUCATIONAL LEVEL, and INCOME) will be entered first into the prediction
equation (Block1), click (highlight) these variables and then transfer these variables
to the Independent(s) field. In the Method field, select ENTER from the drop-down
list as the method of entry for this set of demographic variables into the prediction
equation. Next, click Next to open Block 2 in the Independent(s) field for entry of the
second set of independent variables. Click (highlight) the variables of PROVOKE,
SELFDEF, and INSANITY, to transfer these variables to the Independent(s) field. In
the Method field, select ENTER from the drop-down list as the method of entry for
this set of independent (predictor) variables into the prediction equation.
3. Click Statistics to open the Linear Regression: Statistics window. Check the fields to
obtain the statistics required. For this example, check the fields for Estimates,
Confidence intervals, Model fit, R squared change, and Collinearity diagnostics.
Click Continue when finished, and then click OK.
[SPSS screenshots: the Linear Regression dialog box showing Block 1 and Block 2, and the Statistics dialog box]
Results and Interpretation
In the Model Summary table, Model 1 represents entry of the first set of
demographic variables, and Model 2 represents entry of the second set of self-
defense strategy variables. The results show that Model 1 (demographics)
accounted for 5.8% of the variance (R Square) in the subjects’ responsibility
attribution. Entry of the three defense strategy variables (Model 2) resulted in
an R Square Change of 0.162. This means that entry of the three defense
strategy variables increased the explained variance in the subjects’
responsibility attribution by 16.2% to a total of 22%. This increase is
significant by the F Change test, F(3,369) = 25.48, p < .001 (a test for the
increase in explanatory power). These results suggest that the defense strategy
variables represent a significantly more powerful set of predictors than the set
of demographic variables. In the ANOVA table, the results show that entry
of the set of demographic variables alone (Model 1) yielded a significant
prediction equation, F(4,372) = 5.73, p < .001. Addition of the three defense
strategy variables (Model 2) resulted in an overall significant prediction
equation, F(7,369) = 14.84, p < .001.
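As a check, these figures are consistent with the F-change formula, F = (R-square change/df1) / ((1 – R-square)/df2): (0.162/3) / ((1 – 0.22)/369) ≈ 25.5, matching the reported value up to rounding of the R-square terms.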
Results and interpretation
Looking at Model 2 in the Coefficients table, it can be seen that
multicollinearity is not a problem — all tolerance values are above 0.10;
all VIF values are below 10; and the condition indices for the seven
predictor variables are below 15. In examining the Beta weights
(standardized regression coefficients), it can also be seen that all three
defense strategy variables are significant predictors of responsibility
attribution (p < .05), whereas subjects’ sex is the only demographic
variable that was found to be significant. Thus, the more the subjects
perceived the battered woman’s fatal action was due to provocation (β =
–0.11, t = – 2.30, p < 0.05), and to self-defense (β = – 0.36, t = – 7.59, p
< .001), the less responsibility they attributed to the woman for her
action. Conversely, the more the subjects perceived the battered woman’s
fatal action was due to insanity, the greater the responsibility they
attributed to the woman for her action (β = 0.11, t = 2.29, p < .05). The
finding that subjects’ sex was a significant predictor (β = – 0.13, t = –
2.81, p < .01), indicated that males attributed greater responsibility to the
woman for her fatal action than females did (code: 1 = male, 2 = female).
Example 3 — Path Analysis
With Path Analysis, multiple regression is often used in
conjunction with a causal theory, with the aim of describing
the entire structure of linkages between independent and
dependent variables posited from that theory. For example,
based on theoretical considerations of the domestic violence
example, a researcher has constructed the path model presented
in the following Figure to represent the hypothesized structural
relationships between the three defense strategies of
provocation, self-defense, and insanity, and the attribution of
responsibility. The model specifies an “ordering” among the
variables that reflects a hypothesized structure of cause-effect
linkages. Multiple regression techniques can be used to
determine the magnitude of direct and indirect influences that
each variable has on other variables that follow it in the
presumed causal order (as indicated by the directional arrows).
Example: Cont.
[Path model figure: arrows run from Provocation to Insanity, Self-defense, and Responsibility Attribution; from Insanity to Self-defense and Responsibility Attribution; and from Self-defense to Responsibility Attribution.]
Example: Continue
Each arrow in
the model represents a presumed causal linkage or path of causal
influence. Through regression techniques, the strength of each
separate path can be estimated. This analysis actually involves
three regression equations because (1) responsibility attribution is
a dependent variable for the three defense variables, (2) the self-
defense variable is a dependent variable for the defense variables
of provocation and insanity, and (3) the insanity defense variable
is a dependent variable for the provocation defense variable.
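Before the SPSS steps, the three equations can be sketched compactly in Python (statsmodels); z-scoring all variables first makes the fitted coefficients the Beta path weights. The data file is a hypothetical stand-in.

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("domestic_violence.csv")     # hypothetical file
z = (df - df.mean()) / df.std()               # standardize: coefficients = Betas

def betas(dv, ivs):
    return sm.OLS(z[dv], sm.add_constant(z[ivs])).fit().params.drop("const")

print(betas("RESPON", ["PROVOKE", "SELFDEF", "INSANITY"]))  # equation 1
print(betas("SELFDEF", ["PROVOKE", "INSANITY"]))            # equation 2
print(betas("INSANITY", ["PROVOKE"]))                       # equation 3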
SPSS INPUTS
1. The first regression equation involves predicting the level of responsibility
(RESPON) attributed to the battered woman’s action from all three
defense strategies (PROVOKE, SELFDEF, and INSANITY). Click
Analyze, then Regression, and then Linear… from the menu bar.
The following Linear Regression window will open.
2. Click (highlight) the variable of RESPON, and then transfer this
variable to the Dependent field. Next, click (highlight) the three
variables of PROVOKE, SELFDEF, and INSANITY, and then
transfer these variables to the Independent(s) field. In the Method
field, select FORWARD from the drop-down list as the method of
entry for the three independent (predictor) variables into the prediction
equation.
3. Click Statistics to open the Linear Regression Statistics window. Check
the fields to obtain the statistics required. For this example, check the
fields for Estimates, Confidence intervals, Model fit, R squared
change, and Collinearity diagnostics. Click Continue when finished,
and then click OK.
[SPSS screenshots: the Linear Regression dialog boxes for the first prediction equation]
SPSS INPUTS
4. The second prediction equation involves predicting the defense strategy of SELFDEF
from the defense strategies of PROVOKE and INSANITY. To do this, repeat step 1
to step 3. In step 1, transfer SELFDEF to the Dependent field, and the variables of
PROVOKE and INSANITY to the Independent(s) field. Click Ok to complete the
analysis.
SPSS INPUTS
5. The third prediction equation involves predicting the defense strategy of
INSANITY from the defense strategy of PROVOKE. To do this, repeat step 1 to
step 3. In step 1, transfer INSANITY to the Dependent field, and the variable of
PROVOKE to the Independent(s) field. Click Ok to complete the analysis.
Results and Interpretation
The path model depicted in the previous Figure hypothesizes that
subjects’ interpretation of the battered woman’s motives as provocation
will have both direct and indirect influences on their responsibility
attribution; the indirect influence being mediated by their endorsement of
the insanity and self-defense strategies. The direction of the arrows
depicts the hypothesized direct and indirect paths. To estimate the
magnitude of these paths, a series of regression analyses were carried out.
1. The path coefficients between responsibility attribution and the three
defense strategies were obtained by regressing the former on the latter.
The results from the Coefficients table generated from the first
regression analysis show that all three defense strategies entered the
prediction equation (Model 3) (i.e., all three defense strategies are
significant predictors). The Beta values presented in the Standardized
Coefficients column represent the standardized regression
coefficients between responsibility attribution and the three defense
strategies (SELFDEF: Beta = –0.39; INSANITY: Beta = 0.13; PROVOKE:
Beta = –0.12).
Results and Interpretation
2. The path coefficients between the self-defense strategy
and the other two defense strategies of provocation and
insanity were obtained by regressing the former on the
latter. The results from the Coefficients table generated
from the second regression analysis show that both
provocation and insanity defenses are significant
predictors of the defense of self-defense (PROVOKE: Beta
= 0.18; INSANITY: Beta = – 0.13).
3. The path coefficient between the defense strategies of
insanity and provocation is presented in the
Coefficients table generated from the third
regression analysis, and is significant (PROVOKE: Beta
= 0.17).
Results and interpretation
The following Figure presents the path model together with the estimated
regression coefficients (Beta values) associated with the hypothesized paths. It
can be concluded that the defense strategies of provocation and insanity have
direct influences on the subjects’ responsibility attribution. The direction of the
regression coefficients indicates that (1) the more the subjects endorsed the
provocation defense, the less responsibility they attributed to the battered
woman for her fatal action, and (2) the more they endorsed the insanity defense,
the more they held the battered woman responsible. The results also show that at
least part of these influences is indirect, being mediated by the subjects’
endorsement of the self-defense strategy. Thus, the more the subjects endorsed
the provocation defense, the more they believed that the woman acted in self-
defense, which in turn is associated with a lower level of responsibility
attribution. Endorsing the provocation defense also led to an increased belief
that the battered woman’s action was due to an impaired mental state, which in
turn decreased the willingness to accept the plea of self-defense. This, in turn, is
associated with a higher level of responsibility attributed to her.
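In path-analytic terms, each indirect effect is the product of the coefficients along its route. Using the Beta values above: Provocation → Self-defense → Responsibility = 0.18 × (– 0.39) ≈ – 0.07; Provocation → Insanity → Responsibility = 0.17 × 0.13 ≈ 0.02; and Provocation → Insanity → Self-defense → Responsibility = 0.17 × (– 0.13) × (– 0.39) ≈ 0.01.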
Example: Cont.
[Path model figure with estimated Beta coefficients: Provocation → Responsibility Attribution (– 0.12); Provocation → Self-defense (0.18); Provocation → Insanity (0.17); Self-defense → Responsibility Attribution (– 0.39); Insanity → Self-defense (– 0.13); Insanity → Responsibility Attribution (0.13).]