Interobserver Reliability

The document discusses various statistical concepts including descriptive statistics, measures of central tendency, nominal, ordinal, interval and ratio data, measures of variability, standard deviation, variance, interquartile range, range, p-values, t-tests, scatter plots, interobserver reliability, intraclass correlation, Pearson correlation, ANOVA, the Bonferroni correction, the Tukey test and the Kruskal-Wallis test.

Uploaded by Isaac Chua

RAD4160 Statistics

1. What are descriptive statistics?


- To summarise data and describe how they are centred or spread out
- Measures of central tendency and variability
2. What is a measure of central tendency? What are the 3?
- Describes where data tend to pile up
- Mean, median, mode
3. When do you use mean, median and mode? What are they?
- Mean: average. For interval/ratio data when the data are symmetrical (bell-shaped)
- Median: value separating the top and bottom 50%. For ordinal data, OR interval/ratio data when skewed
- Mode: most frequent value. For nominal data.
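A quick sketch of the three measures using Python's built-in `statistics` module (the score list is made up for illustration):

```python
from statistics import mean, median, mode

# Hypothetical exam scores (interval data, roughly symmetrical)
scores = [70, 72, 74, 74, 76, 78, 80]

print(mean(scores))    # arithmetic average: 524 / 7 ≈ 74.86
print(median(scores))  # middle value of the sorted list: 74
print(mode(scores))    # most frequent value: 74
```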
4. What are nominal, ordinal, interval and ratio data?
- N: descriptive names: e.g., brown hair, white hair
- O: also descriptive names, but categories follow a hierarchy
- I: equally spaced intervals (Fahrenheit)
- R: same as I, but with true 0. (kg)
5. What are the measures of variability?
- SD, var, IQR, range
6. What is SD and when is it used?
- Typical distance of data points from the mean (the square root of the variance)
- For symmetrical, bell-shaped distributions
7. What is var?
- Avg squared distance from mean
8. What is the IQR?
- Distance between the 25th and 75th percentiles
- Used when the median is used
9. What is the range?
- Distance between the highest and lowest values
- Quick to compute, but sensitive to extreme values
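The four measures of variability can be sketched with the stdlib `statistics` module plus a little arithmetic (the data are made up; `quantiles` uses its default percentile method):

```python
from statistics import pstdev, pvariance, quantiles

# Hypothetical measurements (made up for illustration); mean = 5
data = [2, 4, 4, 4, 5, 5, 7, 9]

sd = pstdev(data)                  # population SD: typical distance from the mean
var = pvariance(data)              # population variance: SD squared
q1, _, q3 = quantiles(data, n=4)   # 25th and 75th percentiles
iqr = q3 - q1                      # interquartile range
data_range = max(data) - min(data) # range: highest minus lowest

print(sd, var, iqr, data_range)    # SD = 2.0, variance = 4, range = 7
```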
10. What does the p value mean?
- Probability of obtaining a test statistic at least as extreme as the one observed if there is no
difference between the groups (i.e., if the null hypothesis is true)
11. How to interpret p
- <0.05: unlikely to be due to sampling error alone; statistically significant
- >0.05: could be due to sampling error; not statistically significant
12. How to tell how different 2 samples are?
- Effect size, e.g., Cohen's d
- Proportion of variance accounted for
13. What is the difference between paired and independent t tests?
- Paired: for related (repeated or matched) data; independent: for unrelated groups
14. Degrees of freedom formula for paired t test
- n-1
15. 1- or 2-tailed tests?
- 1-tailed: when we predict the direction of the relationship (is A higher than B?). Higher power,
but a higher chance of a type 1 error (saying there is a difference when there isn't)
- 2-tailed: when the difference could go either way.
16. T test reporting format
- t(df) = t statistic, p < (or >) the alpha level, e.g., t(19) = 2.10, p < 0.05
17. Independent t test df
- n1 + n2 - 2 (total sample size minus 2)
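Both t tests, with their degrees of freedom, can be sketched in SciPy (the measurements are simulated for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Paired: the same 20 subjects measured before and after, so df = n - 1 = 19
before = rng.normal(50, 5, 20)
after = before + rng.normal(2, 1, 20)
t_p, p_p = stats.ttest_rel(before, after)

# Independent: two unrelated groups, so df = n1 + n2 - 2 = 15 + 18 - 2 = 31
group_a = rng.normal(50, 5, 15)
group_b = rng.normal(55, 5, 18)
t_i, p_i = stats.ttest_ind(group_a, group_b)

print(f"paired: t(19) = {t_p:.2f}, p = {p_p:.3f}")
print(f"independent: t(31) = {t_i:.2f}, p = {p_i:.3f}")
```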
18. What info does a scatter plot give us?
- Form and shape (e.g., linear or curved)
- Direction (negative or positive)
- Degree (strength of the relationship)

Interobserver reliability

- Degree of agreement between 2 or more independent observers


- Kappa: 0.40 - 0.59 = moderate agreement
0.60 - 0.79 = substantial agreement
0.80 - 0.99 = outstanding agreement

Intraclass correlation coefficient

- Within a group of data, how similar are the data points

Pearson correlation coefficient

- Measure of linear correlation between 2 variables. Ranges from -1 to 1; 0 = no linear correlation.
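The two extremes of Pearson's r can be illustrated with NumPy's `corrcoef` (the x and y values are made up):

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])
y_pos = 2 * x          # perfect positive linear relationship
y_neg = 10 - 2 * x     # perfect negative linear relationship

r_pos = np.corrcoef(x, y_pos)[0, 1]  # 1.0
r_neg = np.corrcoef(x, y_neg)[0, 1]  # -1.0
print(r_pos, r_neg)
```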

ANOVA

- Analysis of variance
- Tells us how much of the variance is between groups, or within groups
- F ratio = between-group variance / within-group variance
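A one-way ANOVA sketch using SciPy's `f_oneway` (the three groups of scores are made up for illustration):

```python
from scipy import stats

# Hypothetical scores from three independent groups
g1 = [85, 86, 88, 75, 78, 94, 98, 79, 71, 80]
g2 = [91, 92, 93, 85, 87, 84, 82, 88, 95, 96]
g3 = [79, 78, 88, 94, 92, 85, 83, 85, 82, 81]

# F = between-group variance / within-group variance
F, p = stats.f_oneway(g1, g2, g3)
print(f"F = {F:.2f}, p = {p:.3f}")
```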

Bonferroni correction

- When testing differences between multiple variables, the chance of a type 1 error grows with the number of comparisons
- Correction: divide the alpha level by the number of comparisons
- Type 1 error: rejecting the null hypothesis when it is actually true (FP)
- Type 2 error: not rejecting the null hypothesis when it is false (FN)
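The inflation of the type 1 error rate, and the Bonferroni fix, in a few lines of arithmetic (the number of comparisons is a made-up example):

```python
alpha = 0.05
m = 6  # e.g., all pairwise comparisons among 4 groups: 4 * 3 / 2

# Without correction, the family-wise chance of at least one type 1 error grows:
familywise = 1 - (1 - alpha) ** m
print(round(familywise, 3))       # ≈ 0.265, far above 0.05

# Bonferroni: test each individual comparison at alpha / m instead
alpha_corrected = alpha / m
print(round(alpha_corrected, 4))  # 0.0083
```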

Tukey test

- Similar in purpose to the Bonferroni correction: a post-hoc test that compares all pairs of group means after ANOVA while controlling the overall type 1 error rate
T test

- Find difference in means between 2 groups


- Null hypothesis: means are equal, NO DIFF

Kruskal-Wallis test

- Non-parametric alternative to one-way ANOVA: determines differences between 2 or more independent
groups when the dependent variable is ordinal, or continuous but not normally distributed
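A Kruskal-Wallis sketch with SciPy's rank-based `kruskal` (the ordinal scores below are made up for illustration):

```python
from scipy import stats

# Hypothetical ordinal pain scores (1-5) from three independent groups
g1 = [3, 1, 2, 4, 3, 2]
g2 = [4, 4, 5, 3, 4, 5]
g3 = [1, 2, 1, 2, 2, 1]

# Rank-based test: no normality assumption on the dependent variable
H, p = stats.kruskal(g1, g2, g3)
print(f"H = {H:.2f}, p = {p:.3f}")
```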
