Bethlehem University Faculty of Business Project


Bethlehem University

Faculty of Business
Project
Econ 234
By
Saif Jaber
&

Ahmad Abdallah
Fall 2013

Contents:
1. Testing the difference between two means of two dependent normal populations
2. Testing the difference between two means of two independent normal populations
3. Testing the independence of two variables (Chi-square)
4. One-way analysis of variance (One-way ANOVA)
5. Bonferroni, Tukey post hoc and Scheffé tests
6. One-way repeated measures ANOVA
7. Two-way analysis of variance (Two-way ANOVA)
8. ANCOVA (analysis of covariance): one-way analysis of covariance for independent samples
9. Two-way repeated measures ANOVA (SPSS only)
10. Multi-way repeated measures ANOVA (SPSS only)
11. Cronbach's alpha

1. Testing the difference between two means of two dependent normal populations

Example:
To discover the effect of a training program on the employees of the chairs production department, we took 12 employees randomly; their outputs of chairs during the month before and after the training are given in the following table:

Employee No.                 1    2    3    4    5    6    7    8    9    10   11   12
Output before training (Y)   110  116  125  103  109  102  113  118  105  112  109  111
Output after training (X)    115  118  129  116  114  105  119  120  115  116  118  113

Test the hypothesis that the training increases the output of employees, using 0.04 as the level of significance.
Solution:
a. Claim: μd = μx − μy > 0.
b. Hypotheses: H0: μx ≤ μy , H1: μx > μy.

Procedure: Analyze → Compare Means → Paired-Samples T Test → paired variables (insert after and before) → Options → confidence interval 95% → Continue → OK.
Paired Samples Statistics
                 Mean       N    Std. Deviation   Std. Error Mean
Pair 1   after   116.5000   12   5.51856          1.59307
         before  111.0833   12   6.50117          1.87672

Paired Samples Correlations
                          N    Correlation   Sig.
Pair 1   Before & After   12   .840          .001

Paired Samples Test
                         Paired Differences                                                                  t       df   Sig. (2-tailed)
                         Mean      Std. Deviation   Std. Error Mean   95% Confidence Interval of the Difference
                                                                      Lower       Upper
Pair 1   After - Before  5.41667   3.52803          1.01845           3.17507     7.65827                    5.319   11   .000

c. Test statistic: 5.319.
d. Report: The mean outputs before and after the training are 111.0833 and 116.5000 respectively, with standard deviations 6.50117 and 5.51856. The Pearson correlation is 0.840. The mean difference (after − before) is 5.41667 with standard deviation 3.52803. The statistic value is 5.319. The P-value is 0.000 < 0.04, and 0 is not in the confidence interval (3.17507, 7.65827), so we reject H0; that is to say, the training increased the output of the employees.
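The same paired test can be reproduced outside SPSS. Below is a minimal sketch in Python using scipy (an assumption of this note, since the project itself works in SPSS), with the before and after outputs typed in from the table above.

# Paired-samples t-test for the training example (sketch; requires scipy).
from scipy import stats

before = [110, 116, 125, 103, 109, 102, 113, 118, 105, 112, 109, 111]
after  = [115, 118, 129, 116, 114, 105, 119, 120, 115, 116, 118, 113]

# ttest_rel performs the dependent (paired) t-test; SPSS reports the
# two-tailed p-value, so halve it for the one-sided claim "after > before".
t_stat, p_two_sided = stats.ttest_rel(after, before)
print(f"t = {t_stat:.3f}, one-sided p = {p_two_sided / 2:.4f}")  # t is about 5.319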

2. Testing the difference between two means of two independent normal populations

Example:
At a level of significance α = 0.04, test the claim that the average income for the year 2012 in the Al-Jenedi company is the same as in the Hammoda company. A sample of 20 employees' incomes in the Al-Jenedi company: 2100, 2000, 2500, 2000, 2100, 2000, 2500, 1700, 2500, 2700, 3000, 2000, 2200, 1800, 2000, 2700, 2500, 2000, 3000, 2200 NIS. And a sample of 22 employees' incomes from the Hammoda company: 2100, 2000, 2500, 2000, 2100, 2000, 2500, 1700, 2700, 2500, 3000, 2000, 2100, 3000, 2500, 2700, 3000, 2000, 2500, 2500, 2100, 3000 NIS.
Solution:
a. Claim: μd = 0.
b. Hypotheses: H0: μd = 0 , H1: μd ≠ 0.

Procedure: Analyze → Compare Means → Independent-Samples T Test → test variable (dependent variable: income) → grouping variable (independent factor: company) → Define Groups → use specified values → group 1 (code of group one) → group 2 (code of group two) → Continue → Options → confidence interval (96%) → Continue → OK.

Group Statistics
         company   N    Mean        Std. Deviation   Std. Error Mean
income   1.00      20   2275.0000   376.79395        84.25369
         2.00      22   2386.3636   397.96561        84.84655

Independent Samples Test
                                        Levene's Test for        t-test for Equality of Means
                                        Equality of Variances
                                        F       Sig.             t       df       Sig.         Mean          Std. Error    96% Confidence Interval of the Difference
                                                                                  (2-tailed)   Difference    Difference    Lower          Upper
income   Equal variances assumed        .147    .704             -.929   40       .359         -111.36364    119.89180     -365.88312     143.15585
         Equal variances not assumed                             -.931   39.926   .357         -111.36364    119.57266     -365.22143     142.49416

c. Test statistic: -0.929.
d. Report: The means of Al-Jenedi and Hammoda are 2275.0000 and 2386.3636 respectively, with standard deviations 376.79395 and 397.96561. The mean difference is -111.36364. The statistic value is -0.929. The P-value is 0.359 > 0.04, so we fail to reject H0; that is to say, the average income for the year 2012 in the Al-Jenedi company is the same as in the Hammoda company.
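The same comparison can be checked in Python; the following is a minimal sketch with scipy (an assumption of this note), using the two income samples above. Levene's test is included because SPSS prints it before the t-test.

# Independent-samples t-test for the two income samples (sketch; requires scipy).
from scipy import stats

jenedi = [2100, 2000, 2500, 2000, 2100, 2000, 2500, 1700, 2500, 2700,
          3000, 2000, 2200, 1800, 2000, 2700, 2500, 2000, 3000, 2200]
hammoda = [2100, 2000, 2500, 2000, 2100, 2000, 2500, 1700, 2700, 2500,
           3000, 2000, 2100, 3000, 2500, 2700, 3000, 2000, 2500, 2500,
           2100, 3000]

# Levene's test of equal variances; center="mean" mirrors the SPSS version.
lev_stat, lev_p = stats.levene(jenedi, hammoda, center="mean")

# Equal variances assumed, matching the first row of the SPSS table.
t_stat, p_value = stats.ttest_ind(jenedi, hammoda, equal_var=True)
print(f"Levene p = {lev_p:.3f}, t = {t_stat:.3f}, p = {p_value:.3f}")  # t is about -0.929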

3. Testing the independence of two variables (Chi-square)

Example:
2600 customers bought a BMW 2012 from different branches and of different types, as shown in the following table:

Branch      Type of car                            Total
            X-5     I-535   I-318   I-320
Ramallah    57      63      34      17             171
Amman       120     86      97      81             384
Dubai       345     408     274     312            1339
Cairo       100     104     202     300            706
Total       622     661     607     710            2600

Study the claim that the type of car (Y) and the branch (X) are independent, under 0.05 as the level of significance.
Solution:
a. Claim: The two variables, car type and branch, are independent.
b. Hypotheses:
H0: The two variables car type and branch are independent.
H1: The two variables car type and branch are dependent.

Case Processing Summary
                    Cases
                    Valid              Missing           Total
                    N       Percent    N      Percent    N       Percent
Branch * CarType    2600    100.0%     0      0.0%       2600    100.0%

Procedure: Data → Weight Cases → weight cases by frequency variable (observed value) → Continue → OK. Analyze → Descriptive Statistics → Crosstabs → rows (Branch) → columns (CarType) → Cells → observed, expected → Continue → Statistics → Chi-square → Continue → OK.

Branch * CarType Crosstabulation
                                   CarType                              Total
                                   X-5      I-535    I-318    I-320
Branch   Ramallah  Count           57       63       34       17       171
                   Expected Count  40.9     43.5     39.9     46.7     171.0
         Amman     Count           120      86       97       81       384
                   Expected Count  91.9     97.6     89.6     104.9    384.0
         Dubai     Count           345      408      274      312      1339
                   Expected Count  320.3    340.4    312.6    365.7    1339.0
         Cairo     Count           100      104      202      300      706
                   Expected Count  168.9    179.5    164.8    192.8    706.0
Total              Count           622      661      607      710      2600
                   Expected Count  622.0    661.0    607.0    710.0    2600.0

c. Test statistic: 206.708.
d. Report: The statistic value is 206.708 with 9 degrees of freedom. The P-value is 0.000 < 0.05, so we reject H0 (the claim). That is to say, the two variables car type and branch are dependent.

Chi-Square Tests
                                Value      df   Asymp. Sig. (2-sided)
Pearson Chi-Square              206.708a   9    .000
Likelihood Ratio                211.929    9    .000
Linear-by-Linear Association    134.794    1    .000
N of Valid Cases                2600

a. 0 cells (0.0%) have expected count less than 5. The minimum expected count is 39.92.
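The same chi-square test can be reproduced in Python; the following is a minimal sketch with scipy (an assumption of this note), using the observed counts from the table above.

# Chi-square test of independence for the branch x car-type table (sketch; requires scipy).
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [ 57,  63,  34,  17],   # Ramallah
    [120,  86,  97,  81],   # Amman
    [345, 408, 274, 312],   # Dubai
    [100, 104, 202, 300],   # Cairo
])

# chi2_contingency also returns the table of expected counts, as SPSS does.
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p_value:.4f}")  # chi2 is about 206.708, df = 9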

4. One-way analysis of variance (One-way ANOVA)

Example:
Five different departments are used to produce printers, and the numbers of defects are recorded for randomly selected weeks. Use α = 0.04 as the significance level to test the claim that the departments produce the same mean number of defects:

Department   Week1   Week2   Week3   Week4   Week5   Week6   Week7   Week8   Week9
First        12      16      13      10      11
Second       14      14      11      16      12      10      16
Third        13      16      10      15      13      11      11      11      10
Fourth       11      12      15      12      15      14      13      10
Fifth        15      10      11      14      14      16

Solution:
a. Claim: μ1 = μ2 = μ3 = μ4 = μ5.
b. Hypotheses: H0: μ1 = μ2 = μ3 = μ4 = μ5.
   H1: Not all means are equal.

Procedure: Analyze → Compare Means → One-Way ANOVA → dependent list (defects) → factor (department) → OK.
ANOVA
DefectN
                 Sum of Squares   df   Mean Square   F      Sig.
Between Groups   7.154            4    1.788         .375   .824
Within Groups    143.017          30   4.767
Total            150.171          34

c. Test statistic: 0.375.
d. Report: This table provides the statistic value 0.375 with degrees of freedom (4, 30), and the P-value is 0.824.
Since the P-value 0.824 > 0.04, we fail to reject the null hypothesis (the claim). That is to say, the means of the groups are equal.
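The same one-way ANOVA can be checked in Python with scipy (an assumption of this note), using the defect counts as read from the table above.

# One-way ANOVA on the defect counts (sketch; requires scipy).
from scipy.stats import f_oneway

first  = [12, 16, 13, 10, 11]
second = [14, 14, 11, 16, 12, 10, 16]
third  = [13, 16, 10, 15, 13, 11, 11, 11, 10]
fourth = [11, 12, 15, 12, 15, 14, 13, 10]
fifth  = [15, 10, 11, 14, 14, 16]

# f_oneway takes one sequence per group; the groups may have unequal sizes.
f_stat, p_value = f_oneway(first, second, third, fourth, fifth)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")  # F is about 0.375, p is about 0.824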
5. Bonferroni, Tukey post hoc and Scheffé tests

Example:
At the accounting department of the Al-Quds Electricity Company, Saif predicts that employees will produce most effectively with a constant air-conditioning temperature. He randomly divides twenty-seven employees into three groups during an evaluation week in which they prepare financial statements. Those in group 1 worked with a constant temperature, those in group 2 worked with a high temperature, and the last group worked with a low temperature. After the evaluation, their scores were as follows:

Constant   59   51   56   48   57   55   44   51   53
High       52   43   54   46   52   53   38   50   51
Low        56   57   52   45   54   49   40   53   38
a. Claim: Not all means are equal.
b. Hypotheses:
H0: μ1 = μ2 = μ3.
H1: Not all means are equal.

Procedure: Analyze → Compare Means → One-Way ANOVA → dependent list (score) → factor (temperature) → Post Hoc → Bonferroni, Tukey, Scheffé → significance level → Continue → OK.
ANOVA
Score
                 Sum of Squares   df   Mean Square   F       Sig.
Between Groups   79.630           2    39.815        1.213   .315
Within Groups    787.556          24   32.815
Total            867.185          26

c. Report: This table provides SSb = 79.630, SSw = 787.556, SSt = 867.185, MSb = 39.815, MSw = 32.815. The statistic value is 1.213, degrees of freedom (2, 24), and the P-value is 0.315.
Since the P-value 0.315 > 0.05, we fail to reject H0. That is to say, the means of the groups are equal.
Multiple Comparisons
Dependent Variable: Score
             (I) Temperature   (J) Temperature   Mean Difference (I-J)   Std. Error   Sig.    95% Confidence Interval
                                                                                               Lower Bound   Upper Bound
Tukey HSD    Constant          High              3.88889                 2.70040      .337    -2.8548       10.6326
                               Low               3.33333                 2.70040      .445    -3.4104       10.0770
             High              Constant          -3.88889                2.70040      .337    -10.6326      2.8548
                               Low               -.55556                 2.70040      .977    -7.2992       6.1881
             Low               Constant          -3.33333                2.70040      .445    -10.0770      3.4104
                               High              .55556                  2.70040      .977    -6.1881       7.2992
Scheffe      Constant          High              3.88889                 2.70040      .370    -3.1558       10.9336
                               Low               3.33333                 2.70040      .478    -3.7114       10.3781
             High              Constant          -3.88889                2.70040      .370    -10.9336      3.1558
                               Low               -.55556                 2.70040      .979    -7.6003       6.4892
             Low               Constant          -3.33333                2.70040      .478    -10.3781      3.7114
                               High              .55556                  2.70040      .979    -6.4892       7.6003
Bonferroni   Constant          High              3.88889                 2.70040      .488    -3.0610       10.8388
                               Low               3.33333                 2.70040      .687    -3.6165       10.2832
             High              Constant          -3.88889                2.70040      .488    -10.8388      3.0610
                               Low               -.55556                 2.70040      1.000   -7.5054       6.3943
             Low               Constant          -3.33333                2.70040      .687    -10.2832      3.6165
                               High              .55556                  2.70040      1.000   -6.3943       7.5054
The Tukey post hoc, Scheffé and Bonferroni tests indicate that working with a constant temperature and working with a low temperature, working with a constant temperature and working with a high temperature, and working with a high temperature and working with a low temperature are the same (P-values are 0.445, 0.478, 0.687 / 0.337, 0.370, 0.488 / 0.977, 0.979, 1.000 respectively).
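For comparison, the Tukey pairwise procedure can be run in Python with statsmodels (an assumption of this note), using the scores from the table above; statsmodels does not print the Scheffé or Bonferroni variants in the same table.

# Tukey HSD comparisons for the three temperature groups
# (sketch; requires numpy and statsmodels).
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = np.array([59, 51, 56, 48, 57, 55, 44, 51, 53,   # constant
                   52, 43, 54, 46, 52, 53, 38, 50, 51,   # high
                   56, 57, 52, 45, 54, 49, 40, 53, 38])  # low
groups = ["constant"] * 9 + ["high"] * 9 + ["low"] * 9

# pairwise_tukeyhsd reports the mean differences, adjusted p-values and
# confidence intervals for every pair of groups, like the SPSS table.
result = pairwise_tukeyhsd(scores, groups, alpha=0.05)
print(result)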
6. One-way repeated measures ANOVA

Example:
We performed annual maintenance on seven machines and checked the number of products produced every three months:

Machines      After 3   After 6   After 9   Machine mean
              14        13        12        13
              19        18        17        18
              12        11        10        11
Level means   10

Solution:
a. Claim: At least two means are significantly different.
b. Hypotheses: H0: μ1 = μ2 = μ3.
   H1: At least two means are significantly different.

Procedure: Analyze → General Linear Model → Repeated Measures → number of levels (3) → Add → Define → within-subjects variables {after 3, after 6, after 9} → Options → display means for {factor1} → compare main effects {Bonferroni} → descriptive statistics → significance level → Continue → OK.

Mauchly's Test of Sphericity (a)
Measure: MEASURE_1
Within Subjects Effect   Mauchly's W   Approx. Chi-Square   df   Sig.    Epsilon (b)
                                                                         Greenhouse-Geisser   Huynh-Feldt   Lower-bound
factor1                  .000                               2    .000    .500                 .500          .500

Tests the null hypothesis that the error covariance matrix of the orthonormalized transformed dependent variables is proportional to an identity matrix.
a. Design: Intercept
   Within Subjects Design: factor1
b. May be used to adjust the degrees of freedom for the averaged tests of significance. Corrected tests are displayed in the Tests of Within-Subjects Effects table.

Mauchly's sphericity test is significant (P = 0.000, which is smaller than 0.05), so we use the third line [Huynh-Feldt] of the next table.
Tests of Within-Subjects Effects
Measure: MEASURE_1
Source                                Type III Sum of Squares   df      Mean Square   F        Sig.
factor1          Sphericity Assumed   10.381                    2       5.190         27.250   .000
                 Greenhouse-Geisser   10.381                    1.000   10.381        27.250   .002
                 Huynh-Feldt          10.381                    1.000   10.381        27.250   .002
                 Lower-bound          10.381                    1.000   10.381        27.250   .002
Error(factor1)   Sphericity Assumed   2.286                     12      .190
                 Greenhouse-Geisser   2.286                     6.000   .381
                 Huynh-Feldt          2.286                     6.000   .381
                 Lower-bound          2.286                     6.000   .381

In this case we take the Huynh-Feldt line, and there is a highly significant effect of the level variable, since the P-value is 0.002 < 0.05. The statistic value is F = 27.250.

Pairwise Comparisons
Measure: MEASURE_1
(I) factor1   (J) factor1   Mean Difference (I-J)   Std. Error   Sig.(b)   95% Confidence Interval for Difference(b)
                                                                           Lower Bound   Upper Bound
1             2             1.000                   .000         .000      1.000         1.000
              3             1.714*                  .286         .003      .775          2.654
2             1             -1.000                  .000         .000      -1.000        -1.000
              3             .714                    .286         .140      -.225         1.654
3             1             -1.714                  .286         .003      -2.654        -.775
              2             -.714                   .286         .140      -1.654        .225

Based on estimated marginal means
*. The mean difference is significant at the .05 level.
b. Adjustment for multiple comparisons: Bonferroni.

There is a significant difference between 3 months and 6 months (P = 0.000) and between 3 months and 9 months (P = 0.003), but no significant difference between 6 months and 9 months (P = 0.140).
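A repeated measures ANOVA of this form can also be fitted in Python with statsmodels (an assumption of this note). Only the three machine rows recoverable from the table above are reproduced here for illustration, so the F value will differ from the seven-machine result in the text.

# One-way repeated measures ANOVA (sketch; requires pandas and statsmodels).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rows = []
data = {1: (14, 13, 12), 2: (19, 18, 17), 3: (12, 11, 10)}  # machine -> (after 3, 6, 9 months)
for machine, scores in data.items():
    for period, value in zip(["after3", "after6", "after9"], scores):
        rows.append({"machine": machine, "period": period, "products": value})
df = pd.DataFrame(rows)

# AnovaRM fits the within-subjects model: one observation per machine per period.
result = AnovaRM(df, depvar="products", subject="machine", within=["period"]).fit()
print(result)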

7. Two-way analysis of variance (Two-Way ANOVA)

Example:
Suppose you want to determine whether the age of buyers (children, young, old) and the target market (Asia, Africa, America, and Europe) affect the test result. To do this, we took seven buyers randomly for each age group and target market, and the results are given in the following table:

            Asia                     Africa                   America                    Europe
Children    5, 2, 3, 7, 10, 11, 4    3, 2, 5, 7, 8, 6, 1      12, 10, 9, 8, 11, 11, 9    11, 9, 7, 8, 11, 12, 9
Young       9, 7, 5, 10, 5, 8, 9     7, 5, 6, 9, 10, 3, 6     14, 12, 15, 13, 10, 8, 9   14, 13, 15, 9, 8, 10, 11
Old         7, 5, 4, 8, 8, 5, 9      5, 3, 4, 8, 12, 7, 8     13, 9, 14, 11, 10, 9, 10   12, 8, 15, 10, 11, 8, 11

Use α = 0.04 as the level of significance.

Solution:
a. Claim: the means of the test according to the age are equal.
b. Hypotheses: H0A: the test does not depend on the age.
   H0B: the test does not depend on the target market.
   H0AB: there is no interaction between the age and the target market.
   H1A: the test depends on the age.
   H1B: the test depends on the target market.
   H1AB: there is an interaction between the age and the target market.

Procedure: Analyze → General Linear Model → Univariate → dependent variable {test} → fixed factors {age & target market} → Plots → horizontal axis {age} → separate lines {target market} → Add → Continue → Options → display means for {age, target market & age*target market} → descriptive statistics → homogeneity tests → Continue → OK.
Between-Subjects Factors
                         Value Label   N
age             1.00     Children      28
                2.00     Young         28
                3.00     Old           28
Target Market   1.00     Asia          21
                2.00     Africa        21
                3.00     America       21
                4.00     Europe        21

Descriptive Statistics
Dependent Variable: test
age        Target Market   Mean      Std. Deviation   N
Children   Asia            6.0000    3.46410          7
           Africa          4.5714    2.63674          7
           America         10.0000   1.41421          7
           Europe          9.5714    1.81265          7
           Total           7.5357    3.30524          28
Young      Asia            7.5714    1.98806          7
           Africa          6.5714    2.37045          7
           America         11.5714   2.63674          7
           Europe          11.4286   2.63674          7
           Total           9.2857    3.23015          28
Old        Asia            6.5714    1.90238          7
           Africa          6.7143    3.03942          7
           America         10.8571   1.95180          7
           Europe          10.7143   2.42997          7
           Total           8.7143    3.07748          28
Total      Asia            6.7143    2.51282          21
           Africa          5.9524    2.74729          21
           America         10.8095   2.06444          21
           Europe          10.5714   2.33605          21
           Total           8.5119    3.25056          84

Levene's Test of Equality of Error Variances (a)
Dependent Variable: test
F       df1   df2   Sig.
1.085   11    72    .386

Tests the null hypothesis that the error variance of the dependent variable is equal across groups.
a. Design: Intercept + age + targetM + age * targetM

We have homogeneity of variance for the dependent variable across groups, since the p-value = 0.386 > α = 0.04. Statistic value: F = 1.085.
Tests of Between-Subjects Effects
Dependent Variable: test
Source            Type III Sum of Squares   df   Mean Square   F          Sig.
Corrected Model   455.274a                  11   41.389        7.066      .000
Intercept         6086.012                  1    6086.012      1039.075   .000
age               44.595                    2    22.298        3.807      .027
targetM           405.369                   3    135.123       23.070     .000
age * targetM     5.310                     6    .885          .151       .988
Error             421.714                   72   5.857
Total             6963.000                  84
Corrected Total   876.988                   83

a. R Squared = .519 (Adjusted R Squared = .446)
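The same factorial model can be fitted in Python with statsmodels (an assumption of this note), using the observations from the table above. Because the design is balanced (7 buyers per cell), the Type II table printed here agrees with the Type III sums of squares that SPSS reports.

# Two-way ANOVA (age x target market) with statsmodels (sketch; requires pandas and statsmodels).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

cells = {
    ("children", "asia"):    [5, 2, 3, 7, 10, 11, 4],
    ("children", "africa"):  [3, 2, 5, 7, 8, 6, 1],
    ("children", "america"): [12, 10, 9, 8, 11, 11, 9],
    ("children", "europe"):  [11, 9, 7, 8, 11, 12, 9],
    ("young", "asia"):       [9, 7, 5, 10, 5, 8, 9],
    ("young", "africa"):     [7, 5, 6, 9, 10, 3, 6],
    ("young", "america"):    [14, 12, 15, 13, 10, 8, 9],
    ("young", "europe"):     [14, 13, 15, 9, 8, 10, 11],
    ("old", "asia"):         [7, 5, 4, 8, 8, 5, 9],
    ("old", "africa"):       [5, 3, 4, 8, 12, 7, 8],
    ("old", "america"):      [13, 9, 14, 11, 10, 9, 10],
    ("old", "europe"):       [12, 8, 15, 10, 11, 8, 11],
}
rows = [{"age": a, "market": m, "test": v}
        for (a, m), values in cells.items() for v in values]
df = pd.DataFrame(rows)

# Fit the factorial model with both main effects and the interaction.
model = ols("test ~ C(age) * C(market)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))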

8. ANCOVA (analysis of covariance)

One-way analysis of covariance for independent samples.
Example:
One accounting program, Al-Shamel, is given to three groups from three different accounting departments (Arab Bank, Palestine Bank, Islamic Bank). Each group consists of 14 accountants, who then received a four-week training course on that program. The following table shows how much each group learned from the different methods:

First (theoretical)        Second (Practical)         Third (Theo & Pract)
No.   Xi1   Yi1            No.   Xi2   Yi2            No.   Xi3   Yi3

10

15

16

25

12

30

10

18

10

33

12

20

14

35

12

16

11

16

15

40

10

14

15

22

11

11

20

17

11

20

11

12

11

20

10

22

10

13

17

10

15

22

10

16


11

11

14

11

12

20

11

12

33

12

10

15

12

10

17

12

15

37

13

13

13

18

13

29

14

11

14

14

21

14

30

a. Claim: The means of the three methods are equal.
b. Hypotheses: H0: μ1 = μ2 = μ3.
   H1: At least two means are significantly different.

Procedure: Analyze → General Linear Model → Univariate → dependent variable (y) → fixed factor {code} → covariate {independent variable (x)} → Contrasts {simple instead of none} → first or last → Change → Continue → Options → display means for {code} → compare main effects → homogeneity tests → descriptive statistics → significance level → Continue → OK.
Levene's Test of Equality of Error Variances (a)
Dependent Variable: LearndMeasure
F        df1   df2   Sig.
10.031   2     39    .000

Tests the null hypothesis that the error variance of the dependent variable is equal across groups.
a. Design: Intercept + PreMeasure + Method

c. Report: Since the p-value is 0.000, which is less than 0.04, the variances of the dependent variable are not equal across groups.

Test Results
Dependent Variable: LearndMeasure
Source     Sum of Squares   df   Mean Square   F        Sig.
Contrast   1542.209         2    771.104       91.472   .000
Error      320.337          38   8.430

Tests of Between-Subjects Effects
Dependent Variable: LearndMeasure
Source            Type III Sum of Squares   df   Mean Square   F        Sig.
Corrected Model   2422.734a                 3    807.578       95.799   .000
Intercept         272.844                   1    272.844       32.366   .000
PreMeasure        650.877                   1    650.877       77.210   .000
Method            1542.209                  2    771.104       91.472   .000
Error             320.337                   38   8.430
Total             18249.000                 42
Corrected Total   2743.071                  41

a. R Squared = .883 (Adjusted R Squared = .874)

d. Statistic value: F = 91.472 and p-value = 0.000 < 0.04. Upon this result we reject the null hypothesis that the dependent variable has equal means across the groups.
Contrast Results (K Matrix)
Method Simple Contrast (a)                                             Dependent Variable: LearndMeasure
Level 2 vs. Level 1   Contrast Estimate                                3.678
                      Hypothesized Value                               0
                      Difference (Estimate - Hypothesized)             3.678
                      Std. Error                                       1.103
                      Sig.                                             .002
                      95% Confidence Interval for Difference           Lower Bound   1.445
                                                                       Upper Bound   5.911
Level 3 vs. Level 1   Contrast Estimate                                14.360
                      Hypothesized Value                               0
                      Difference (Estimate - Hypothesized)             14.360
                      Std. Error                                       1.105
                      Sig.                                             .000
                      95% Confidence Interval for Difference           Lower Bound   12.123
                                                                       Upper Bound   16.597

a. Reference category = 1

This table shows that groups 2 and 1, and groups 3 and 1, are significantly different (0.002 < 0.04 and 0.000 < 0.04, respectively).
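An ANCOVA of this kind can also be fitted in Python with statsmodels (an assumption of this note). The file name and the column names "pre", "score" and "method" are illustrative placeholders; substitute the X (pre-measure), Y (learned measure) and group code columns from the data above.

# ANCOVA sketch with statsmodels (requires pandas and statsmodels).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("alshamel_training.csv")  # hypothetical file with columns pre, score, method

# The covariate "pre" enters the model alongside the categorical factor,
# which is what SPSS does when PreMeasure is declared a covariate.
model = ols("score ~ pre + C(method)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))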
9. Two-way repeated measures ANOVA (SPSS only)

Example:
We check the performance of 10 employees working in three different companies (X, Y, Z) in the morning and in the afternoon, so each employee's performance is tested six times. The scores of the tests were as follows:

          Morning              Afternoon
Subject   X    Y    Z          X    Y    Z
1         9    12   15         7    9    12
2         8    11   14         9    10   12
3         8    10   14         10   12   13
4         9    12   13         8    10   12
5         10   12   15         9    11   13
6         7    10   14         10   13   15
7         9    14   15         7    10   13
8         10   13   16         8    11   14
9         8    12   15         10   14   15
10        9    13   16         9    10   13

Solution:

Claim / hypotheses: H0: no effect.
H1: there is an effect.
SPSS solution:
Analyze → General Linear Model → Repeated Measures → within-subject factor name {day instead of factor1} → number of levels {2} → Add → within-subject factor name {company instead of factor2} → number of levels {3} → Add → Define → within-subjects variables (day, company) {morningx, morningy, morningz, afternoonx, afternoony, afternoonz} → Options → display means {OVERALL} → descriptive statistics → Continue → OK.

Within-Subjects Factors
Measure: MEASURE_1
day   com   Dependent Variable
1     1     Morningx
      2     Morningy
      3     Morningz
2     1     Afternoonx
      2     Afternoony
      3     Afternoonz

Descriptive Statistics
             Mean    Std. Deviation   N
Morningx     8.70    .949             10
Morningy     11.90   1.287            10
Morningz     14.70   .949             10
Afternoonx   8.70    1.160            10
Afternoony   11.00   1.563            10
Afternoonz   13.20   1.135            10
Mauchly's Test of Sphericity (b)
Measure: MEASURE_1
Within Subjects Effect   Mauchly's W   Approx. Chi-Square   df   Sig.   Epsilon (a)
                                                                        Greenhouse-Geisser   Huynh-Feldt   Lower-bound
day                      1.000         .000                 0    .      1.000                1.000         1.000
com                      .807          1.711                2    .425   .839                 1.000         .500
day * com                .868          1.129                2    .569   .884                 1.000         .500

Tests the null hypothesis that the error covariance matrix of the orthonormalized transformed dependent variables is proportional to an identity matrix.
a. May be used to adjust the degrees of freedom for the averaged tests of significance. Corrected tests are displayed in the Tests of Within-Subjects Effects table.
b. Design: Intercept
   Within Subjects Design: day+com+day*com

Report: Mauchly's test shows that for each of the three effects (day, company, and interaction) the p-value is > 0.05, so there is no significant effect.
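The same two-way repeated measures model can be fitted in Python with statsmodels (an assumption of this note), using the scores from the table above; note that AnovaRM does not print the sphericity corrections that SPSS shows.

# Two-way repeated measures ANOVA (day x company) with statsmodels
# (sketch; requires pandas and statsmodels).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

scores = {
    ("morning", "X"):   [9, 8, 8, 9, 10, 7, 9, 10, 8, 9],
    ("morning", "Y"):   [12, 11, 10, 12, 12, 10, 14, 13, 12, 13],
    ("morning", "Z"):   [15, 14, 14, 13, 15, 14, 15, 16, 15, 16],
    ("afternoon", "X"): [7, 9, 10, 8, 9, 10, 7, 8, 10, 9],
    ("afternoon", "Y"): [9, 10, 12, 10, 11, 13, 10, 11, 14, 10],
    ("afternoon", "Z"): [12, 12, 13, 12, 13, 15, 13, 14, 15, 13],
}
rows = [{"subject": s + 1, "day": d, "company": c, "score": vals[s]}
        for (d, c), vals in scores.items() for s in range(10)]
df = pd.DataFrame(rows)

# Both "day" and "company" are within-subject factors, so they are passed
# together in the within= list.
result = AnovaRM(df, depvar="score", subject="subject",
                 within=["day", "company"]).fit()
print(result)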

10. Multi-way repeated measures ANOVA (SPSS only)

Example:
We want to determine the effect of four printers (So3, So5, MH, HK) and the ink used (cartridge, toner) on 24 papers, where the quality of each paper is checked five times, from the lowest quality (1) to the highest (5). The data are given as follows:
Printer

Paper
Number

cartridge

toner

p. No

t1

t2

t3

t4

t5

t1

t2

t3

t4

t5

So3

p1

p4

So5

p2
p3
p7

2
1
2

1
0
1

1
2
2

2
1
2

1
1
2

p5
p6
p10

0
1
3

1
1
2

1
2
2

2
2
4

2
3
1

p8
p9

2
1

1
1

2
0

1
1

3
3

p11
p12

1
2

1
2

2
3

2
2

2
3

MH

p13

p16

HK

p14
p15
p19

2
2
5

1
1
4

4
2
4

2
4
3

3
1
4

p17
p18
p22

2
2
3

2
4
4

4
1
4

2
3
1

0
2
1

p20
p21

3
3

5
5

3
3

1
0

2
1

p23
p24

0
1

5
2

1
4

2
0

3
1

Claim / hypotheses:
H0: no significant effect.
H1: there is a significant effect.
SPSS solution:
Analyze → General Linear Model → Repeated Measures → within-subject factor name {test} → number of levels {5} → Add → Define → within-subject variables {t1, t2, t3, t4, t5} → between-subjects factors {printer and ink} → OK.

Mauchly's Test of Sphericity (b)
Measure: MEASURE_1
Within Subjects Effect   Mauchly's W   Approx. Chi-Square   df   Sig.   Epsilon (a)
                                                                        Greenhouse-Geisser   Huynh-Feldt   Lower-bound
test                     .694          5.267                9    .812   .849                 1.000         .250

Tests the null hypothesis that the error covariance matrix of the orthonormalized transformed dependent variables is proportional to an identity matrix.
a. May be used to adjust the degrees of freedom for the averaged tests of significance. Corrected tests are displayed in the Tests of Within-Subjects Effects table.
b. Design: Intercept+ink+printer+ink * printer
   Within Subjects Design: test

Tests of Within-Subjects Effects
Measure: MEASURE_1
Source                                       Type III Sum of Squares   df       Mean Square   F       Sig.
test                   Sphericity Assumed    9.450                     4        2.363         3.489   .012
                       Greenhouse-Geisser    9.450                     3.395    2.784         3.489   .018
                       Huynh-Feldt           9.450                     4.000    2.363         3.489   .012
                       Lower-bound           9.450                     1.000    9.450         3.489   .080
test * ink             Sphericity Assumed    1.917                     4        .479          .708    .590
                       Greenhouse-Geisser    1.917                     3.395    .565          .708    .568
                       Huynh-Feldt           1.917                     4.000    .479          .708    .590
                       Lower-bound           1.917                     1.000    1.917         .708    .413
test * printer         Sphericity Assumed    6.283                     12       .524          .773    .675
                       Greenhouse-Geisser    6.283                     10.185   .617          .773    .656
                       Huynh-Feldt           6.283                     12.000   .524          .773    .675
                       Lower-bound           6.283                     3.000    2.094         .773    .526
test * ink * printer   Sphericity Assumed    31.417                    12       2.618         3.867   .000
                       Greenhouse-Geisser    31.417                    10.185   3.085         3.867   .001
                       Huynh-Feldt           31.417                    12.000   2.618         3.867   .000
                       Lower-bound           31.417                    3.000    10.472        3.867   .030
Error(test)            Sphericity Assumed    43.333                    64       .677
                       Greenhouse-Geisser    43.333                    54.319   .798
                       Huynh-Feldt           43.333                    64.000   .677
                       Lower-bound           43.333                    16.000   2.708

Report:
There is no significant test × ink interaction effect (p-value 0.590 > 0.05).
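Before a design like this can be analyzed outside SPSS, the wide data file (one row per paper, with the five trial columns plus the printer and ink factors) has to be reshaped to long format. Below is a minimal pandas sketch (an assumption of this note); the file name and column names are illustrative placeholders.

# Reshape a wide repeated-measures file to long format (sketch; requires pandas).
import pandas as pd

wide = pd.read_csv("printer_quality.csv")  # hypothetical: paper, printer, ink, t1..t5

# melt() stacks the five trial columns into a single "quality" column with
# a "trial" label, giving one row per paper per trial.
long = wide.melt(id_vars=["paper", "printer", "ink"],
                 value_vars=["t1", "t2", "t3", "t4", "t5"],
                 var_name="trial", value_name="quality")
print(long.head())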

11. Cronbach's alpha

Example:
Al-Watanya Company wants to evaluate the performance of its first-level employees for the year 2012, so it prepared a ten-question questionnaire on a five-point scale from high (5) to low (1), as in the following table, and tested it on 16 supervisors:

Question No.   Questions                                             Degree of performance (high → low)
1              Degree of customer satisfaction                       5 4 3 2 1
2              Degree of commitment to the job                       5 4 3 2 1
3              How quickly the jobs are done                         5 4 3 2 1
4              Degree of respect between employees                   5 4 3 2 1
5              Degree of sharing information between employees       5 4 3 2 1
6              Are the customers satisfied                           5 4 3 2 1
7              Degree of how important time is for employees         5 4 3 2 1
8              Degree of job efficiency                              5 4 3 2 1
9              Degree of love between employees                      5 4 3 2 1
10             The customers are satisfied with our offers           5 4 3 2 1

The responses were recorded as variables Q1–Q10, one row for each of the 16 supervisors (NO 1–16).

Procedure: Analyze → Scale → Reliability Analysis → items {Q1, Q2, …, Q10} → Statistics → item, scale, scale if item deleted, and inter-item correlations → Continue → OK.

Reliability Statistics
Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.829               .835                                           10

Inter-Item Correlation Matrix

Q1
Q2

Q1
1.000
.119

Q3
Q4

Q2

Q3

Q4

.119
1.000

.016
.135

.016

.135

.276

.000

Q5

.069

Q6

.061

Q7
Q8
Q9

Q5

Q6

Q7

Q8

Q9

.276
.000

.069
.000

.061
.308

.064
.180

.460
.258

.437
.160

1.000

.244

.545

.484

.509

.313

.367

.244

1.000

.149

.715

.232

.600

.372

.000

.545

.149

1.000

.415

.726

.149

.462

.308

.484

.715

.415

1.000

.461

.556

.345

.064

.180

.509

.232

.726

.461

1.000

.325

.546

.460

.258

.313

.600

.149

.556

.325

1.000

.620

.437

.160

.367

.372

.462

.345

.546

.620

1.000

.164
-.126
.221
.683
The covariance matrix is calculated and used in the analysis.

.364

.633

.339

.618

.545

Q10

Report: Cronbach's alpha is 0.829, which indicates a good level of internal consistency for our scale with this specific sample.
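Cronbach's alpha can also be computed directly from the item responses. Below is a minimal Python sketch (an assumption of this note); the file name is an illustrative placeholder, and the data are assumed to have one column per item (Q1–Q10) and one row per respondent.

# Cronbach's alpha from first principles (sketch; requires pandas).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = pd.read_csv("evaluation_responses.csv")          # hypothetical file, columns Q1..Q10
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")  # the text reports .829 for its data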
