Assignment Module 4 Biostatistics

Uploaded by

Ashish Singh

ASSIGNMENT

Course Name: Epidemiology and Biostatistics


Module 4: Biostatistics

1. Short Notes: Effect Size

Effect size is a quantitative measure of the strength or magnitude of a relationship or difference between groups in a study. It goes beyond stating whether there is a statistically significant difference and gives insight into how substantial that difference is.

 Types of Effect Size:
o Cohen's d: Measures the difference between two means in standard deviation
units and is commonly used in comparing two groups.
o Pearson’s r: Used for correlation, representing the strength and direction of a
linear relationship between two variables.
o Odds Ratio and Relative Risk: Common in medical research, these measures
assess the likelihood of an event occurring in one group compared to another.
 Interpretation:
o A larger effect size indicates a stronger relationship or greater difference, while a
smaller effect size suggests a weaker association.
o Effect size helps to understand the practical significance of findings, aiding
researchers and practitioners in making informed decisions based on results.

Effect size is essential in interpreting results and is often used in sample size calculations to
determine the number of participants needed.
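As a sketch of the idea, Cohen's d can be computed directly from two samples as the difference in means divided by the pooled standard deviation. The blood-pressure readings below are hypothetical, purely for illustration:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: the difference between two means in pooled-SD units."""
    n_a, n_b = len(group_a), len(group_b)
    # Pooled variance weights each group's sample variance by its degrees of freedom
    pooled_var = ((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical systolic blood pressure readings for two groups
treated = [120, 118, 125, 122, 119, 121]
control = [130, 128, 133, 129, 131, 127]
d = cohens_d(treated, control)  # negative: treated mean is lower than control
```

By the usual rule of thumb (|d| of 0.2 small, 0.5 medium, 0.8 large), a value like this one would count as a large effect.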

2. Types of Hypothesis Tests Used in Practice

Hypothesis tests are statistical tests used to determine if there is enough evidence to reject a null
hypothesis. There are various types, each suitable for different types of data and research
questions:

 One-Sample t-Test: Used when comparing the mean of a single sample to a known
population mean.
 Two-Sample t-Test (Independent t-Test): Compares the means of two independent
groups to see if there is a significant difference.
 Paired t-Test: Compares the means of two related groups, often used in before-and-after
studies.
 Chi-Square Test: Tests associations between categorical variables, such as observing if
there’s a relationship between gender and smoking habits.
 ANOVA (Analysis of Variance): Used when comparing means among three or more
groups to determine if at least one group mean is significantly different.
 Mann-Whitney U Test: A non-parametric test for comparing two independent groups
when data does not follow a normal distribution.
 Wilcoxon Signed-Rank Test: A non-parametric test for comparing two related groups in
cases where data is not normally distributed.

Choosing the right hypothesis test depends on the research question, data type, and sample
characteristics.
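To make the two-sample t-test concrete, the sketch below computes the pooled-variance t statistic by hand and compares it to a critical value. The drug/placebo scores are invented, and the critical value 2.228 (two-sided, α = 0.05, df = 10) is taken from standard t tables:

```python
from statistics import mean, stdev

def two_sample_t(group_a, group_b):
    """Independent two-sample t statistic (equal-variance, pooled form).
    Returns the t statistic and its degrees of freedom."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)
    se = (pooled_var * (1 / n_a + 1 / n_b)) ** 0.5  # standard error of the difference
    return (mean(group_a) - mean(group_b)) / se, n_a + n_b - 2

# Hypothetical recovery scores for two independent groups
drug = [5.1, 4.8, 5.5, 5.0, 4.9, 5.3]
placebo = [4.2, 4.5, 4.1, 4.4, 4.0, 4.3]
t_stat, df = two_sample_t(drug, placebo)

T_CRIT = 2.228  # two-sided 0.05 critical value for df = 10 (t table)
significant = abs(t_stat) > T_CRIT
```

In practice a library routine would return an exact p-value, but the decision rule is the same: reject the null hypothesis when the statistic exceeds the critical value.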

3. Describe Significance Level

Significance level, denoted as alpha (α), is the threshold at which we decide whether to reject the
null hypothesis. It represents the probability of making a Type I error—that is, rejecting the null
hypothesis when it is actually true.

 Common Significance Levels:
o 0.05: The most commonly used threshold, indicating a 5% chance of incorrectly
rejecting the null hypothesis.
o 0.01 or 0.001: Used in more conservative studies where stronger evidence is
needed, such as clinical trials with higher stakes.
 Interpretation:
o A significance level of 0.05 implies that if the p-value is less than 0.05, the result
is statistically significant, meaning the observed effect is unlikely to be due to
chance.
o Lower significance levels reduce the likelihood of a Type I error but may require
a larger sample size to maintain statistical power.

The significance level is chosen based on the research context, balancing the risk of false
positives with the need for reliable results.
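The decision rule itself is simple enough to state in a few lines. This sketch shows how the same p-value can lead to different conclusions at the 0.05 and 0.01 levels:

```python
def decide(p_value, alpha=0.05):
    """Reject the null hypothesis when the p-value falls below alpha."""
    return "reject H0" if p_value < alpha else "fail to reject H0"

# A p-value of 0.03 is significant at the 0.05 level...
at_05 = decide(0.03)              # "reject H0"
# ...but not at the more conservative 0.01 level
at_01 = decide(0.03, alpha=0.01)  # "fail to reject H0"
```

This illustrates why lowering α demands stronger evidence: the same observed result may no longer clear the threshold.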

4. What is Variability?
Variability refers to how spread out or dispersed the data points are in a dataset. It shows the
extent to which individual data points differ from each other and the mean of the data.

 Measures of Variability:
o Range: The difference between the highest and lowest values, showing the span
of data.
o Variance: The average squared deviation from the mean, providing a measure of
how much the data points differ.
o Standard Deviation: The square root of the variance, indicating the typical
distance of data points from the mean.
 Importance:
o High variability indicates that data points are spread out over a wide range, while
low variability suggests that data points are closer to the mean.
o In research, understanding variability helps assess the consistency of the data and
influences sample size and hypothesis testing.

Variability is crucial for statistical analysis, as it impacts the reliability of conclusions drawn
from the data.
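The three measures of variability above can be computed with Python's standard library, as in this small sketch (the data values are arbitrary examples):

```python
from statistics import mean, stdev, variance

data = [4, 8, 6, 5, 3, 9]

rng = max(data) - min(data)  # range: span from lowest to highest value
var = variance(data)         # sample variance: average squared deviation from the mean
sd = stdev(data)             # standard deviation: square root of the variance
```

Note that the standard deviation is in the same units as the data, which is why it is usually the easier of the two to interpret.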

5. Explain Sample Size

Sample size is the number of participants or observations included in a study. Determining an adequate sample size is crucial for achieving statistically meaningful results and maintaining the study's power.

 Importance:
o An appropriate sample size increases the reliability of findings, ensuring that the
study can detect a true effect if one exists.
o Insufficient sample size can lead to underpowered studies, which might miss
significant findings (Type II error).
 Factors Influencing Sample Size:
o Effect Size: Smaller expected effects require larger sample sizes.
o Significance Level (Alpha): Lower significance levels increase the sample size
needed.
o Power (1 - Beta): Typically set at 80% or 90%, power indicates the probability of
correctly rejecting a false null hypothesis.
o Variability: Greater variability in the data requires a larger sample size to
accurately estimate effects.
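These factors come together in the standard normal-approximation formula for comparing two means, n per group = 2((z_α + z_β)/d)², where d is the standardized effect size (Cohen's d). The sketch below uses the conventional z-values for two-sided α = 0.05 (1.96) and 80% power (≈0.8416); exact t-based calculations give slightly larger numbers:

```python
import math

def n_per_group(effect_size, z_alpha=1.96, z_beta=0.8416):
    """Approximate sample size per group for a two-sample comparison of
    means (defaults: two-sided alpha = 0.05, power = 0.80).
    effect_size is the standardized difference (Cohen's d)."""
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Smaller expected effects require larger samples:
n_medium = n_per_group(0.5)  # medium effect
n_small = n_per_group(0.2)   # small effect (much larger n needed)
```

Halving the expected effect size roughly quadruples the required sample, which is why a realistic estimate of the effect is the single most influential input to a sample size calculation.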
