Psychology Statistics Notes

This document provides an overview of statistical concepts including why statistics are studied, different types of statistics, samples, variables, and scales of measurement. It also discusses topics like measures of center, hypothesis testing, and statistical procedures such as t-tests, ANOVA, chi-square, correlation, and simple linear regression.



Table of Contents
1.0 Basic concepts
    1.1 Why study statistics?
    1.2 Type of stats depends on
    1.3 Purpose of stats
    1.4 Types of samples
    1.5 Types of variables
    1.6 Stats & scale of measurement
    1.7 Symbols
    1.8 SPSS windows
    1.9 Data preparation
    1.10 Measures of center in a numerical data set
    1.11 Scale of measurement
    1.12 Data types
    1.13 Mean deviation (MD)
    1.14 Hypothesis
2.0 T-test
    2.1 One-sample t-test
    2.2 Paired-sample t-test
    2.3 Independent-sample t-test
3.0 ANOVA
4.0 Chi-square
5.0 Spearman rho correlation
6.0 Pearson product-moment correlation
7.0 Simple linear regression
1.0 Basic concepts

1.1 Why study statistics?
- A tool for making decisions to answer research objectives and hypotheses.
- Statistics is the process of collecting, analyzing, presenting, and interpreting data to make informed decisions.

1.2 Type of stats depends on
1) Purpose
   - Descriptive: to describe data (MCT, MD)
   - Inferential: to infer findings from a sample to the population; involves hypothesis testing (t-test, ANOVA, chi-square)
2) Assumption of normality
   - Parametric: normal distribution (t-test, ANOVA, simple linear regression)
   - Non-parametric: not normally distributed (chi-square, Spearman's rho)
3) Number of variables
   - Univariate: 1 variable
   - Bivariate: 2 variables
   - Multivariate: > 2 variables

1.3 Purpose of stats
1) Describe a phenomenon: frequency/percent, MCT, MD
2) Comparison between groups: t-test, ANOVA, Mann-Whitney, Kruskal-Wallis
3) Relationship between variables: chi-square, Spearman rho, Pearson, regression

1.4 Types of samples
- Probability: simple random, stratified, systematic, cluster
- Non-probability: convenience, purposive

1.5 Types of variables
- Quantitative / continuous / metric: measured numerically; discrete variables (integers) or continuous variables (decimals); I/R scales
- Qualitative / categorical / non-metric: alphanumeric; N/O scales

1.6 Stats & scale of measurement

Analysis    | DV  | IV
t-test      | I/R | N/O
ANOVA       | I/R | N/O
Chi-square  | N/O | N/O
SLR         | I/R | I/R
Correlation | I/R | I/R
Spearman    | O   | O

1.7 Symbols

Measure                 | Parameter (population) | Statistic (sample)
Number of cases         | N                      | n
Mean                    | µ                      | ȳ
Variance                | σ²                     | s²
Standard deviation      | σ                      | s
Correlation coefficient | ρ                      | r

1.8 SPSS windows
1) SPSS Data Editor
   - Define variables (Variable View)
   - Enter the data set (Data View)
   - File type: .sav
2) SPSS Statistics Viewer
   - Opens automatically when any procedure is run
   - Displays data analysis and results
   - File type: .spv
3) SPSS Syntax Editor
   - Commands; recommended (.sps)

1.9 Data preparation
1) Define variables (SPSS Data Editor)
2) Enter data (Data View)
3) Run frequencies (check for wrong data)
4) Data editing (clean up the data)
5) Reliability test (test internal consistency of the instrument with Cronbach's alpha)
6) Data transformation
   - Compute (create a new variable)
   - Recode (categorize data & change the value system)
7) Exploratory data analysis
   - Normality: sig value of Kolmogorov-Smirnov or Shapiro-Wilk ≥ α, or skewness between −2 and +2

1.10 Measures of center in a numerical data set
1) Mode: the most frequent value in a data set. Not influenced by extreme values.
2) Median: the middle value in a data set. Not influenced by extreme values.
3) Mean: the average value of a data set. Much affected by the presence of extreme values.

1.11 Scale of measurement

Scale | Mode | Median | Mean
N     | /    |        |
O     | /    | /      |
I     | /    | /      | /
R     | /    | /      | /

1.12 Data types
1) Raw data
2) Frequency distribution
3) Grouped frequency distribution

1.13 Mean deviation (MD)
- The average of the absolute differences between the data points and their mean; shows how dispersed the data are.
- Variance: the average of the squared deviations from the mean.
- SD: the square root of the variance; a measure of deviation from the mean.
- In a normal distribution, nearly all values fall within 3 SD above or below the mean.

1.14 Hypothesis

Types of error

Type I                    | Type II
Reject H0 when it is true | Fail to reject H0 when it is false

Rejecting H0
- If H0 is rejected at α = .01, it will surely be rejected at α = .05.
- If H0 is rejected in a two-tailed test, it will surely be rejected in a one-tailed test.
- If H0 is rejected at N = 20, it will surely be rejected at N = 50 (same effect, larger sample).

Normal distribution
1) Right-skewed distribution
   - Most of the data falls to the left of the mean
   - Mode < median < mean
2) Left-skewed distribution
   - Most of the data falls to the right of the mean
   - Mean < median < mode

z-scores: the tail area beyond a z value is the smaller portion; the remaining area is the larger portion. For a positive z, the area above it is the smaller portion and the area below it is the larger; for a negative z, the reverse.
2.0 T-test

2.1 One-sample t-test

Purpose / requirement
- To compare the difference between a sample mean and a test value.
- Scores of the sample meet the assumption of normality (parametric): sig value of Kolmogorov-Smirnov or Shapiro-Wilk ≥ α, or skewness between −2 and +2.

Variables
- Sample mean: I/R
- Test value: I/R

Hypotheses
- Ho: µ = µ0
- Ha: µ ≠ µ0 (or µ > µ0, or µ < µ0)

Calculation
- Summary quantities: n (sample size), Σy = y + y + …, Σy² = y² + y² + …
- Mean: ȳ = Σy / n
- SD: sy = √[(Σy² − (Σy)²/n) / (n − 1)]
- tcal: t = (ȳ − µ0) / (sy / √n)
- tcritical: t(α/2, df = n − 1) for two-tailed; t(α, df = n − 1) for one-tailed

Decision
- Manual: reject Ho if tcal ≥ tcritical
- SPSS: reject Ho if sig-t ≤ α (for one-tailed, divide sig-t by two)
- CI: reject Ho if 0 lies outside the CI, where CI = (ȳ − µ0) ± t(α/2, df)(sȳ); only relevant for two-tailed tests

CI decision table

LL | 0 | UL | Decision
+  | 0 | +  | Outside CI
−  | 0 | −  | Outside CI
−  | 0 | +  | In between
+  | 0 | −  | Impossible

Effect size
- d = t / √n
- Benchmarks: < 0.2: trivial; 0.2: small; 0.5: medium; 0.8: large

Conclusion
- Reject Ho: the sample mean is significantly different from the test value at the α level of sig.
- Fail to reject Ho: the sample mean is not significantly different from the test value at the α level of sig.
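The one-sample t calculation above can be checked by hand in pure Python. The sample scores and the test value (µ0 = 10) below are hypothetical, chosen only for illustration:

```python
# One-sample t-test, following the manual formulas in 2.1.
from math import sqrt

y = [12, 9, 11, 14, 10, 13]   # hypothetical sample scores (I/R)
mu0 = 10                      # hypothetical test value

n = len(y)
ybar = sum(y) / n                                            # ȳ = Σy / n
sy = sqrt((sum(v**2 for v in y) - sum(y)**2 / n) / (n - 1))  # sy
t = (ybar - mu0) / (sy / sqrt(n))                            # tcal
d = t / sqrt(n)                                              # effect size

print(round(t, 3), round(d, 3))  # ≈ 1.964, ≈ 0.802
```

With df = n − 1 = 5, this tcal would then be compared against tcritical from the t-table; d ≈ 0.8 would read as a large effect on the benchmarks above.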
2.2 Paired-sample t-test

Purpose / requirement
- To compare the difference between pre- and post-test scores.
- Scores of the pre & post tests meet the assumption of normality (parametric): sig value of Kolmogorov-Smirnov or Shapiro-Wilk ≥ α, or skewness between −2 and +2.

Variables
- Pre-scores: I/R
- Post-scores: I/R

Hypotheses
- Ho: µd = µ0
- Ha: µd ≠ µ0 (or µd > µ0, or µd < µ0)

Calculation
- Work table: Pre | Post | d | d²
- tcal: t = (d̄ − µd) / (sd / √n)
- tcritical: same as for the one-sample t-test

Decision
- Manual: reject Ho if tcal ≥ tcritical
- SPSS: reject Ho if sig-t ≤ α
- CI: reject Ho if 0 lies outside the CI, where CI = d̄ ± t(α/2, df)(sd̄); only relevant for two-tailed tests

Effect size
- Statistical: same as for the one-sample t-test (d = t / √n)
- Practical (eta-square): η² = t² / (t² + n − 1)
- Eta-square benchmarks: < 0.10: trivial; 0.10: small; 0.25: medium; 0.40: large

Conclusion
- Reject Ho: there is a significant mean difference between the pre & post tests at the α level of sig.
- Fail to reject Ho: there is no significant mean difference between the pre & post tests at the α level of sig.

2.3 Independent-sample t-test

Purpose / requirement
- To compare the difference between two independent group means.
- Scores of the DV must be normally distributed for each group of the IV (parametric): sig value of Kolmogorov-Smirnov or Shapiro-Wilk ≥ α, or skewness between −2 and +2.

Variables
- DV: I/R
- IV: N/O

Hypotheses
- Ho: µ1 = µ2
- Ha: µ1 ≠ µ2 (or µ1 > µ2, or µ1 < µ2)

Calculation
- Fcal (to choose the t formula): F = s²large / s²small
- Fcritical: F(nL − 1, nS − 1)
- tcal:
4
unequal variance Eta-square (practical)
ý1−ý2
t= 2 2 𝑡2
√ 𝑠1 + 𝑠2 2
𝑛1 𝑛2
n = 𝑡 2 =𝑛1=𝑛2−2

equal variance

2 (𝑛1−1)𝑠12 +(𝑛2−1)𝑠22 Conclusion


sp = 𝑛1=𝑛2−2 Reject Ho: if there is sig difference btw two
independent group means at α level of sig
t-critical
𝛼 Fail to reject Ho: there is no sig difference btw two
t ,df (n1+n2-2) (two-tailed) independent group means at α level of sig
2
𝛼
t ,df (n1+n2-2) (one-tailed)

Decision

Manual

Fcal > Fcri : equal variance formula


Fcal ≤ Fcri : unequal variance formula spss

SPSS

Sig f > α: equal variance formula


Sig f ≤ α: unequal variance formula

Manual

Reject Ho : tcal ≥ tcri

SPSS

Reject Ho : sig-t ≤ α

CI

+
đ − tα,df, (sx̄1- x̄s2)
+ 𝑠12 𝑠22
đ − tα,df, √ 𝑛1 + 𝑛2

Effect size (statistics)

𝑛1+𝑛2
d=√ 𝑛1𝑛2

5
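Both t formulas above can be sketched in pure Python. The pre/post scores and the two group score lists below are hypothetical, and the independent-sample example assumes the equal-variance (pooled) formula applies:

```python
# Paired and independent (equal-variance) t calculations, following 2.2 and 2.3.
from math import sqrt

# Paired: t = (d̄ − µd) / (sd / √n), with µd = 0 under Ho
pre  = [10, 12, 9, 11, 13]
post = [13, 14, 10, 14, 15]
d = [b - a for a, b in zip(pre, post)]
n = len(d)
dbar = sum(d) / n
sd = sqrt((sum(x**2 for x in d) - sum(d)**2 / n) / (n - 1))
t_paired = dbar / (sd / sqrt(n))
eta_sq = t_paired**2 / (t_paired**2 + n - 1)   # practical effect size

# Independent (equal variances): pooled variance sp²
g1 = [14, 16, 13, 17, 15]
g2 = [11, 12, 10, 13, 12]
n1, n2 = len(g1), len(g2)
m1, m2 = sum(g1) / n1, sum(g2) / n2
v1 = sum((x - m1)**2 for x in g1) / (n1 - 1)
v2 = sum((x - m2)**2 for x in g2) / (n2 - 1)
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t_ind = (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))

print(round(t_paired, 2), round(t_ind, 2))  # ≈ 5.88, ≈ 3.9
```

Each tcal would then be compared against the t-table with df = n − 1 (paired) or df = n1 + n2 − 2 (independent).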
3.0 ANOVA

Purpose / requirement
- To compare differences among more than two group means (normally 3 in the exam).
- Assumptions: parametric; sig value of Levene's statistic > α, i.e. the data meet the assumption of homogeneity of variance.

Variables
- DV: I/R
- IV: N/O

Hypotheses
- Ho: µ1 = µ2 = µ3
- Ha: at least one group mean differs

Calculation involves
- ΣY and its total
- Ȳ for each group (used if post hoc applies)
- n & N
- Sums of squares (SST, SSB, SSW)
- df (dfT, dfB, dfW)
- Mean squares (MSB, MSW)

Summary ANOVA table

Source  | SS  | df    | MS  | F
Between | SSB | k − 1 | MSB | F
Within  | SSW | N − k | MSW |
Total   | SST | N − 1 |     |

- Fcal: F = MSB / MSW
- Fcritical: F(dfB, dfW, α)

Decision
- Manual: reject Ho if Fcal ≥ Fcritical
- SPSS: reject Ho if sig-F ≤ α

Post hoc (Tukey)
- HSD = q(k, dfW, α) √(MSW / n)
- Pairwise mean differences:

I | J | MD
1 | 2 | Ȳ1 − Ȳ2
1 | 3 | Ȳ1 − Ȳ3
2 | 3 | Ȳ2 − Ȳ3

Effect size (eta square)
- η² = SSB / SST
- Benchmarks: < 0.10: trivial; 0.10: small; 0.25: moderate; 0.40: large

Conclusion
- Reject Ho: there is a significant difference in the DV among the groups at the α level of sig.
- Fail to reject Ho: there is no significant difference in the DV among the groups at the α level of sig.
- Post hoc: there is a significant difference between group ? & group ?, but no significant difference between group ? & group ?, at the α level of sig.
- Eta square: about …% of the variance in the DV is explained by the groups; the effect size of the difference is considered small/medium/large.
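The SSB/SSW/MSB/MSW/F chain above can be sketched in pure Python. The three groups below are hypothetical and deliberately tiny so every sum of squares can be checked by hand:

```python
# One-way ANOVA by hand, following the summary table in 3.0.
groups = [[4, 5, 6], [7, 8, 9], [10, 11, 12]]   # hypothetical DV scores, 3 IV groups

N = sum(len(g) for g in groups)                  # total cases
k = len(groups)                                  # number of groups
grand = sum(sum(g) for g in groups) / N          # grand mean

ssb = sum(len(g) * (sum(g) / len(g) - grand)**2 for g in groups)          # between
ssw = sum(sum((x - sum(g) / len(g))**2 for x in g) for g in groups)       # within
sst = ssb + ssw                                                           # total

msb = ssb / (k - 1)          # MSB, df = k − 1
msw = ssw / (N - k)          # MSW, df = N − k
F = msb / msw                # Fcal = MSB / MSW
eta_sq = ssb / sst           # % variance in DV explained by groups (×100)

print(F, round(eta_sq, 3))   # -> 27.0, 0.9
```

Fcal would then be compared against F(dfB = 2, dfW = 6, α) from the F-table; here η² = 0.9, a large effect on the benchmarks above.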
4.0 CHI-SQUARE

Purpose / requirement
1. Goodness of fit
2. Test of independence (final exam)
3. Test of homogeneity

For the test of independence:
- 2 variables (IV & DV)
- To test the relationship between two categorical variables and determine the strength of the relationship.
- Involves contingency tables (columns (Y), rows (X)):

X  | Y: Low | Y: Mod | Y: High | RT
A  | O      | O      | O       |
B  | O      | O      | O       |
CT |        |        |         | GT

- Expected count: E = (RT)(CT) / GT
- Work table: O | E | (O − E) | (O − E)² | (O − E)²/E

Variables
- DV: N/O
- IV: N/O

Hypotheses
- Ho: the DV is independent of the IV
- Ha: the DV is dependent on the IV

Calculation
- X²cal: X² = Σ (O − E)² / E (always positive)
- X²critical: refer to the X² distribution table, df = (R − 1)(C − 1)

Decision
- Manual: reject Ho if X²cal ≥ X²critical
- SPSS: reject Ho if sig-X² ≤ α

Strength of the relationship: use Guilford's rule of thumb
- < 0.2: negligible relationship
- 0.2 – 0.4: low
- 0.4 – 0.7: moderate
- 0.7 – 0.9: high
- > 0.9: very high

Symmetric measures (measures of association)
1. Phi coefficient (2×2 contingency table)
2. Contingency coefficient (larger than 2×2)
3. Cramér's V coefficient (larger than 2×2)

Conclusion
- Reject Ho: the DV is significantly dependent on the IV at the α level of sig.
- Fail to reject Ho: the DV is not significantly dependent on the IV at the α level of sig.
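The E = (RT)(CT)/GT and X² = Σ(O − E)²/E steps above can be sketched in pure Python. The 2×2 contingency table below is hypothetical:

```python
# Chi-square test of independence, following the work table in 4.0.
observed = [[20, 30],   # rows (X), columns (Y) — hypothetical counts
            [30, 20]]

row_totals = [sum(r) for r in observed]          # RT
col_totals = [sum(c) for c in zip(*observed)]    # CT
grand = sum(row_totals)                          # GT

chi2 = 0.0
for i, row in enumerate(observed):
    for j, O in enumerate(row):
        E = row_totals[i] * col_totals[j] / grand   # E = (RT)(CT) / GT
        chi2 += (O - E)**2 / E                      # Σ (O − E)² / E

df = (len(observed) - 1) * (len(observed[0]) - 1)   # (R − 1)(C − 1)
print(chi2, df)  # -> 4.0, 1
```

X²cal = 4.0 would then be compared against the X²critical value at df = 1; for a 2×2 table the phi coefficient would measure the strength of the association.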
5.0 SPEARMAN RHO CORRELATION

Purpose / requirement
1. To determine the relationship between two rank-ordered variables.
2. Non-parametric (no normality assumption).
3. Used when at least one of the IV/DV fails the assumption of normality (sig value of Kolmogorov-Smirnov or Shapiro-Wilk < α, or skewness outside −2 to +2).
4. rs ranges from −1 to +1.
5. Arrange X (IV) & Y (DV) in descending or ascending order.
6. rx & ry: for tied values, assign the average rank (sum of the tied rank positions ÷ number of tied values), e.g.:

Rank | X  | rx
1    | 24 | 1 ÷ 1 = 1
2    | 21 | (2 + 3) ÷ 2 = 2.5
3    | 21 | (2 + 3) ÷ 2 = 2.5

- Work table: Rank | X | rx | Y | ry | d | d², where d = rx − ry

Variables
- DV: O and IV: O, or
- DV: I/R and IV: I/R

Hypotheses
- Ho: ρs = 0
- Ha: ρs ≠ 0 (two-tailed, final exam)

Calculation

Descriptive part
- rs = 1 − 6Σd² / (n(n² − 1))
- Obtain rs (either negative or positive) and refer to Guilford's rule of thumb.
- Describe the nature of the relationship: positive/negative & negligible/low/moderate/high/very high relationship between the IV & DV.

Inferential part
- rs cal: rs = 1 − 6Σd² / (n(n² − 1))
- rs critical: refer to the Spearman rho table at α and n (two-tailed, final exam).

Decision
- Manual: reject Ho if rs cal ≥ rs critical
- SPSS: reject Ho if sig-rs ≤ α

Conclusion
- Reject Ho: there is a significant relationship between the IV & DV at the α level of sig.
- Fail to reject Ho: there is no significant relationship between the IV & DV at the α level of sig.
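The rs = 1 − 6Σd²/(n(n² − 1)) formula above can be sketched in pure Python. The X/Y scores below are hypothetical and contain no ties, since the simple formula assumes untied ranks:

```python
# Spearman's rho by hand, following 5.0 (untied ranks only).
x = [10, 20, 30, 40, 50]   # hypothetical IV scores
y = [5, 6, 9, 7, 8]        # hypothetical DV scores

def ranks(values):
    # Rank 1 = smallest value; assumes no ties, as in the simple formula.
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

rx, ry = ranks(x), ranks(y)
d2 = sum((a - b)**2 for a, b in zip(rx, ry))   # Σd², d = rx − ry
n = len(x)
rs = 1 - 6 * d2 / (n * (n**2 - 1))

print(rs)  # -> 0.7
```

rs = 0.7 would read as a positive, high relationship on Guilford's rule of thumb; rs cal would then be compared against the Spearman rho table at α and n.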
6.0 PEARSON PRODUCT-MOMENT CORRELATION

Purpose / requirement
1. To determine the relationship between two variables (IV & DV).
2. Parametric (assumption of normality).
3. Both the IV & DV meet the assumption of normality: sig value of Kolmogorov-Smirnov or Shapiro-Wilk ≥ α, or skewness between −2 and +2.
4. r ranges from −1 to +1.

Summary quantities: ΣX, ΣX², ΣY, ΣY², ΣXY, n

Variables
- DV: I/R
- IV: I/R
- If the data are nominal, they must be transformed into dummy variables.

Hypotheses (two-tailed, final exam)
- Ho: ρ = 0
- Ha: ρ ≠ 0 (or ρ > 0, or ρ < 0)

Calculation

Descriptive part
- r = [ΣXY − (ΣX)(ΣY)/n] / √{[ΣX² − (ΣX)²/n][ΣY² − (ΣY)²/n]}
- Obtain r (either negative or positive) and refer to Guilford's rule of thumb.
- Describe the nature of the relationship: positive/negative & negligible/low/moderate/high/very high relationship between the IV & DV.

Inferential part
- tcal: t = (r − ρ) / √[(1 − r²) / (n − 2)]
- tcritical: refer to the t-table with α divided by two, i.e. t(α/2, df = n − 2)

Decision
- Manual: reject Ho if tcal ≥ tcritical
- SPSS: reject Ho if sig-t ≤ α

Conclusion
- Reject Ho: there is a significant relationship between the IV & DV at the α level of sig.
- Fail to reject Ho: there is no significant relationship between the IV & DV at the α level of sig.
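The descriptive and inferential formulas above can be sketched in pure Python from the summary quantities. The paired X/Y scores below are hypothetical:

```python
# Pearson's r from ΣX, ΣX², ΣY, ΣY², ΣXY, n, following 6.0.
from math import sqrt

X = [1, 2, 3, 4, 5]   # hypothetical IV scores
Y = [2, 4, 5, 4, 5]   # hypothetical DV scores
n = len(X)

sx, sy = sum(X), sum(Y)
sxx = sum(x * x for x in X)
syy = sum(y * y for y in Y)
sxy = sum(x * y for x, y in zip(X, Y))

r = (sxy - sx * sy / n) / sqrt((sxx - sx**2 / n) * (syy - sy**2 / n))

# Inferential part: t = (r − ρ) / √((1 − r²)/(n − 2)), with ρ = 0 under Ho
t = r / sqrt((1 - r**2) / (n - 2))

print(round(r, 3), round(t, 3))  # ≈ 0.775, ≈ 2.121
```

r ≈ 0.775 would read as a positive, high relationship on Guilford's rule of thumb; tcal would then be compared against t(α/2, df = n − 2).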
7.0 SIMPLE LINEAR REGRESSION

Purpose / requirement
1. To determine the relationship between two variables (IV & DV).
2. To make predictions of the DV based on the IV.
3. An extension of the Pearson correlation.
4. Must meet the parametric assumption (normally distributed data).

Regression equation: Ŷ = b0 + b1X
- b0: constant
- b1: slope
- Interpretation: for every 1-unit increase in X, Y will increase or decrease by b1 units.

Variables
- DV: I/R
- IV: I/R
- If the data are nominal, transform them into dummy variables.

Hypotheses (inferential)

Regression model
- Ho: Y = B0 + ei
- Ha: Y = B0 + B1X + ei

Slope (t) — to test the contribution of X towards Y (final exam, two-tailed)
- Ho: B1 = 0
- Ha: B1 ≠ 0

Calculation

Regression model
- Sums of squares (SST, SSR, SSE)
- df (dfT, dfR, dfE)
- MS (MSR, MSE)
- Fcal: F = MSR / MSE
- Fcritical: F(dfR, dfE, α)

Slope (t)
- tcal: t = (b1 − B1) / √(MSE / SSX)
- tcritical: t(α/2, df = n − 2)

Decision

Regression model
- Manual: reject Ho if Fcal ≥ Fcritical
- SPSS: reject Ho if sig-F ≤ α
- F-ratio range: 0 to ∞

Relationship between F & t: t² = F, so √F = t

Slope
- Manual: reject Ho if tcal ≥ tcritical
- SPSS: reject Ho if sig-t ≤ α
- t range: −∞ to +∞

Measures of fit
1. Coefficient of determination: R² = SSR / SST (range 0 to 1)
2. Multiple correlation coefficient: R = √R² (range 0 to 1)
Conclusion

Regression model
- Reject Ho: the regression model fits the data at the α level of sig.
- Fail to reject Ho: the regression model does not fit the data at the α level of sig.

Slope
- Reject Ho: X contributes significantly towards Y.
- Fail to reject Ho: X does not contribute significantly towards Y.

R²
- Multiply R² by 100 to obtain the percentage.
- Interpretation: …% of the variance in Y is explained by X; the higher the value, the better the fit.
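The whole chain above (slope and constant, sums of squares, F = MSR/MSE, R² = SSR/SST) can be sketched in pure Python. The X/Y data below are hypothetical (the same toy pair used for Pearson's r, so t² = F can be checked):

```python
# Simple linear regression by hand: Ŷ = b0 + b1X, R² = SSR/SST, F = MSR/MSE.
X = [1, 2, 3, 4, 5]   # hypothetical IV
Y = [2, 4, 5, 4, 5]   # hypothetical DV
n = len(X)

mx, my = sum(X) / n, sum(Y) / n
ssx = sum((x - mx)**2 for x in X)
sxy = sum((x - mx) * (y - my) for x, y in zip(X, Y))

b1 = sxy / ssx          # slope: change in Y per 1-unit increase in X
b0 = my - b1 * mx       # constant

sst = sum((y - my)**2 for y in Y)                               # total
sse = sum((y - (b0 + b1 * x))**2 for x, y in zip(X, Y))         # error
ssr = sst - sse                                                 # regression
r_sq = ssr / sst        # % variance in Y explained by X (×100)

F = (ssr / 1) / (sse / (n - 2))   # MSR / MSE, dfR = 1, dfE = n − 2

print(round(b1, 2), round(b0, 2), round(r_sq, 2), round(F, 2))  # ≈ 0.6, 2.2, 0.6, 4.5
```

Note F ≈ 4.5 is the square of the Pearson-example t ≈ 2.121, illustrating the t² = F relationship, and R² = 0.6 reads as 60% of the variance in Y explained by X.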
