How Do We Decide If The Medication Was Successful in Lowering The Patient's Concentration of Blood Glucose?


Let's consider the following problem: To determine if a medication is effective in lowering blood glucose concentration, we collect two sets of blood samples from a patient. We collect one set of samples immediately before we administer the medication, and collect the second set of samples several hours later. After analyzing the samples, we report their respective means and variances. How do we decide if the medication was successful in lowering the patient's concentration of blood glucose?
One way to answer this question is to construct a normal distribution curve
for each sample, and to compare the two curves to each other. Three
possible outcomes are shown in Figure 4.5.1. In Figure 4.5.1 a, there is a
complete separation of the two normal distribution curves, which suggests
the two samples are significantly different from each other. In Figure 4.5.1
b, the normal distribution curves for the two samples almost completely
overlap, which suggests that the difference between the samples is
insignificant. Figure 4.5.1 c, however, presents us with a dilemma. Although
the means for the two samples seem different, the overlap of their normal
distribution curves suggests that a significant number of possible outcomes
could belong to either distribution. In this case, the best we can do is to
make a statement about the probability that the samples are significantly
different from each other.
The process by which we determine the probability that there is a
significant difference between two samples is called significance testing or
hypothesis testing.
Constructing a Significance Test
The purpose of a significance test is to determine whether the difference
between two or more results is sufficiently large that it cannot be explained
by indeterminate errors. The first step in constructing a significance test is
to state the problem as a yes or no question, such as "Is this medication effective at lowering a patient's blood glucose levels?"
A null hypothesis and an alternative hypothesis define the two possible
answers to our yes or no question. The null hypothesis, H0, is that
indeterminate errors are sufficient to explain any differences between our
results. The alternative hypothesis, HA, is that the differences in our
results are too great to be explained by random error and that they must be
determinate in nature. We test the null hypothesis, which we either retain
or reject. If we reject the null hypothesis, then we must accept the
alternative hypothesis and conclude that the difference is significant.
Failing to reject a null hypothesis is not the same as accepting it. We retain
a null hypothesis because we have insufficient evidence to prove it
incorrect. It is impossible to prove that a null hypothesis is true.
The four steps for a statistical analysis of data using a significance test:
1. Pose a question, and state the null hypothesis, H0, and the alternative
hypothesis, HA.
2. Choose a confidence level for the statistical analysis.
3. Calculate an appropriate test statistic and compare it to a critical value.
4. Either retain the null hypothesis, or reject it and accept the alternative
hypothesis.
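
To make these four steps concrete, here is a minimal sketch in Python (using NumPy and SciPy) applied to the blood glucose problem. The measurement values are hypothetical and serve only to illustrate the procedure.

# A minimal sketch of the four-step significance test for the blood glucose example.
# The measurement values below are made up for illustration.
import numpy as np
from scipy import stats

# Step 1: H0 -- the medication has no effect (the mean difference is zero);
#         HA -- blood glucose is lower after the medication.
before = np.array([8.2, 7.9, 8.5, 8.1, 8.4, 8.0])   # mmol/L, hypothetical
after  = np.array([7.1, 7.4, 7.0, 7.6, 7.2, 7.3])   # mmol/L, hypothetical

# Step 2: choose a confidence level, e.g. 95% (alpha = 0.05).
alpha = 0.05

# Step 3: calculate the test statistic and compare it to a critical value.
diff = before - after
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))
t_crit = stats.t.ppf(1 - alpha, df=len(diff) - 1)    # one-tailed critical value

# Step 4: either retain H0 or reject it and accept HA.
if t_stat > t_crit:
    print(f"t = {t_stat:.2f} > {t_crit:.2f}: reject H0; the decrease is significant.")
else:
    print(f"t = {t_stat:.2f} <= {t_crit:.2f}: retain H0.")

If the calculated t exceeds the critical value, we reject H0 and conclude that the decrease in blood glucose is significant at the chosen confidence level.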
A t-test is a statistical test that is used to compare the means of two groups.
It is often used in hypothesis testing to determine whether a process or treatment actually has an effect on the population of interest, or whether
two groups are different from one another.
For example: You want to know whether the mean petal length of iris
flowers differs according to their species. You find two different species of
irises growing in a garden and measure 25 petals of each species. You can
test the difference between these two groups using a t-test.
 The null hypothesis (H0) is that the true difference between these group
means is zero.
 The alternate hypothesis (Ha) is that the true difference is different from
zero.
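
As a sketch of how this comparison might be run in Python, the snippet below simulates 25 petal lengths per species (the numbers are made up, not real measurements for any particular iris species) and applies an independent two-sample t-test with scipy.stats.ttest_ind.

# A sketch of the iris petal-length comparison; the measurements are simulated
# stand-ins, not real data for any particular species.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
species_a = rng.normal(loc=1.5, scale=0.2, size=25)   # 25 petal lengths (cm)
species_b = rng.normal(loc=4.3, scale=0.5, size=25)   # 25 petal lengths (cm)

# H0: the true difference between the group means is zero.
t_stat, p_value = stats.ttest_ind(species_a, species_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. below 0.05) would lead us to reject H0.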
When to use a t-test?
A t-test can only be used when comparing the means of two groups
(pairwise comparison). If you want to compare more than two groups, or if
you want to do multiple pairwise comparisons, use an ANOVA test or a post-
hoc test.
The t-test is a parametric test. It assumes that your data:

1. are independent
2. are (approximately) normally distributed
3. have a similar amount of variance within each group being compared
(homogeneity of variance)
If your data do not fit these assumptions, you can try a non-parametric
alternative to the t-test, such as the Wilcoxon signed-rank test, which does
not assume normally distributed data.
What type of t-test should be used?
When choosing a t-test, you will need to consider two things: whether the
groups being compared come from a single population or two different
populations, and whether you want to test the difference in a specific
direction.
One-sample, two-sample, or paired t-test?
 If the groups come from a single population (e.g. measuring before and
after an experimental treatment), perform a paired t-test.
 If the groups come from two different populations (e.g. two different
species, or people from separate cities), perform a two-sample t-
test or independent t-test.
 If there is one group being compared against a standard value (e.g.
comparing the acidity of a liquid to a neutral pH of 7), perform a one-
sample t-test.
One-tailed or two-tailed t-test?
 If you only care whether the two populations are different from one another,
perform a two-tailed t-test.
 If you want to know whether one population mean is greater than or less
than the other, perform a one-tailed t-test.
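
The sketch below shows how each variant might be called in SciPy; the arrays are hypothetical placeholders, and the one-tailed option relies on the alternative argument available in recent SciPy releases.

# A sketch of how the three t-test variants might be called in SciPy;
# the arrays are hypothetical placeholders.
import numpy as np
from scipy import stats

before = np.array([8.2, 7.9, 8.5, 8.1, 8.4, 8.0])
after  = np.array([7.1, 7.4, 7.0, 7.6, 7.2, 7.3])
group_a = np.array([1.4, 1.6, 1.5, 1.7, 1.3])
group_b = np.array([4.1, 4.5, 4.2, 4.6, 4.4])
acidity = np.array([6.4, 6.6, 6.5, 6.7, 6.3])

# Paired t-test: the same subjects measured before and after a treatment.
print(stats.ttest_rel(before, after))

# Two-sample (independent) t-test: two different populations.
print(stats.ttest_ind(group_a, group_b))

# One-sample t-test: one group compared against a standard value (pH 7).
print(stats.ttest_1samp(acidity, popmean=7.0))

# One-tailed versions are available in recent SciPy releases via the
# `alternative` argument, e.g. alternative="greater" to test for a decrease
# from before to after.
print(stats.ttest_rel(before, after, alternative="greater"))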

Watch the video about the t-test.

F Test
An F test is a test that uses the F-distribution to test a hypothesis. In most
cases, it refers to the F-test for comparing two variances, but the
F-distribution is also used in a variety of other tests, including regression
analysis, the Chow test, and the Scheffé test.
The F-test compares two variances by dividing one by the other. If the
variances are equal, the ratio of the variances will be equal to 1.
General Steps for F-Test:

1. State the null hypothesis and the alternate hypothesis.
2. Calculate the F value. For two variances, F is their ratio, with the larger variance placed in the numerator; in an ANOVA setting, F = (variance of the group means) / (mean of the within-group variances).
3. Find the critical F value for the chosen confidence level, using the degrees of freedom of the numerator and the denominator.
4. Also consider the p-value.
5. Support or reject the null hypothesis.

The Analysis of Variance (ANOVA)

One-Way ANOVA

The purpose of a one-way ANOVA test is to determine the existence of a statistically significant
difference among several group means. The test actually uses variances to help determine if the
means are equal or not. In order to perform a one-way ANOVA test, there are five
basic assumptions to be fulfilled:

1. Each population from which a sample is taken is assumed to be normal.
2. All samples are randomly selected and independent.
3. The populations are assumed to have equal standard deviations (or variances).
4. The factor is a categorical variable.
5. The response is a numerical variable.
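
One possible way to check the first three assumptions before running the ANOVA is sketched below, using the Shapiro-Wilk test for normality and Levene's test for equal variances; the three groups are simulated placeholders.

# One possible way to check assumptions 1-3 before running the ANOVA;
# the three groups below are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = [rng.normal(loc=mu, scale=1.0, size=20) for mu in (5.0, 5.5, 6.0)]

# Approximate normality of each sample (Shapiro-Wilk test).
for i, g in enumerate(groups, start=1):
    stat, p = stats.shapiro(g)
    print(f"group {i}: Shapiro-Wilk p = {p:.3f}")

# Equality of variances across the groups (Levene's test).
stat, p = stats.levene(*groups)
print(f"Levene p = {p:.3f}")
# Large p-values here are consistent with the normality and equal-variance assumptions.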

The Null and Alternative Hypotheses

The null hypothesis is simply that all the group population means are the same. The alternative
hypothesis is that at least one pair of means is different. For example, if there are k groups:

H0: μ1 = μ2 = μ3 = ... = μk
Ha: At least two of the group means are not equal. That is, μi ≠ μj for some i ≠ j.

The graphs, a set of box plots representing the distribution of values with the group means
indicated by a horizontal line through the box, help in the understanding of the hypothesis test. In
the first graph (red box plots), H0: μ1 = μ2 = μ3 and the three populations have the same
distribution if the null hypothesis is true. The variance of the combined data is approximately the
same as the variance of each of the populations.

If the null hypothesis is false, then the variance of the combined data is larger which is caused by
the different means as shown in the second graph (green box plots).
(a) H0 is true. All means are the same; the differences are due to random variation.
(b) H0 is not true. All means are not the same; the differences are too large to be
due to random variation.
Analysis of Variance
also referred to as ANOVA, is a method of testing whether or not the means of three or more
populations are equal. The method is applicable if:

 all populations of interest are normally distributed.


 the populations have equal standard deviations.
 samples (not necessarily of the same size) are randomly and independently selected from
each population.
 there is one independent variable and one dependent variable.
The test statistic for analysis of variance is the F-ratio.
Variance
mean of the squared deviations from the mean; the square of the standard deviation. For a set
of data, a deviation can be represented as x – x̄, where x is a value of the data and x̄ is the
sample mean. The sample variance is equal to the sum of the squares of the deviations
divided by the sample size minus one.
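
A quick numerical check of this definition, with arbitrary numbers, shows that dividing the summed squared deviations by n – 1 matches NumPy's sample variance when ddof=1 is used.

# A small check of the definition: the sample variance divides the summed
# squared deviations by (n - 1). The numbers are arbitrary.
import numpy as np

data = np.array([4.0, 7.0, 6.0, 5.0, 8.0])
deviations = data - data.mean()
manual_variance = (deviations ** 2).sum() / (len(data) - 1)

print(manual_variance)                 # 2.5
print(np.var(data, ddof=1))            # same value; ddof=1 gives the n - 1 divisor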

The F Distribution and the F-Ratio

The distribution used for the hypothesis test is a new one. It is called the F distribution, invented
by George Snedecor but named in honor of Sir Ronald Fisher, an English statistician.
The F statistic is a ratio (a fraction). There are two sets of degrees of freedom; one for the
numerator and one for the denominator.

For example, if F follows an F distribution and the number of degrees of freedom for the
numerator is four, and the number of degrees of freedom for the denominator is ten,
then F ~ F4,10.

To calculate the F ratio, two estimates of the variance are made.

1. Variance between samples: An estimate of σ2 that is the variance of the sample means multiplied
by n (when the sample sizes are the same.). If the samples are different sizes, the variance
between samples is weighted to account for the different sample sizes. The variance is also
called variation due to treatment or explained variation.
2. Variance within samples: An estimate of σ2 that is the average of the sample variances (also
known as a pooled variance). When the sample sizes are different, the variance within samples is
weighted. The variance is also called the variation due to error or unexplained
variation.
 SSbetween = the sum of squares that represents the variation among the different samples
 SSwithin = the sum of squares that represents the variation within samples that is due to chance.

To find a “sum of squares” means to add together squared quantities that, in some cases, may be
weighted. We used a sum of squares in the same way to calculate the sample variance and the
sample standard deviation.

MS means “mean square.” MSbetween is the variance between groups, and MSwithin is the variance
within groups.

Calculation of Sum of Squares and Mean Square


 k = the number of different groups
 nj = the size of the jth group
 sj = the sum of the values in the jth group
 n = total number of all the values combined (total sample size: ∑nj)
 x = one value: ∑x = ∑sj
 Sum of squares of all values from every group combined: ∑x²
 Total sum of squares: SStotal = ∑x² – (∑x)²/n
 Explained variation: sum of squares representing variation among the different samples:
SSbetween = ∑[(sj)²/nj] – (∑sj)²/n
 Unexplained variation: sum of squares representing variation within samples due to chance:
SSwithin = SStotal – SSbetween
 df's for the different groups (df's for the numerator): dfbetween = k – 1
 Equation for errors within samples (df's for the denominator): dfwithin = n – k
 Mean square (variance estimate) explained by the different groups: MSbetween = SSbetween/dfbetween
 Mean square (variance estimate) that is due to chance (unexplained): MSwithin = SSwithin/dfwithin

MSbetween and MSwithin can be written as follows:

MSbetween = SSbetween/(k – 1)
MSwithin = SSwithin/(n – k)
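
The short sketch below applies these formulas to three small hypothetical groups and checks the hand calculation against scipy.stats.f_oneway.

# A sketch that applies the sum-of-squares formulas above to three hypothetical
# groups and checks the result against scipy.stats.f_oneway.
import numpy as np
from scipy import stats

groups = [np.array([6.0, 7.0, 6.5, 7.5]),
          np.array([8.0, 8.5, 7.5, 9.0]),
          np.array([5.0, 5.5, 6.0, 4.5])]

k = len(groups)
n = sum(len(g) for g in groups)                       # total sample size
all_x = np.concatenate(groups)

ss_total = (all_x ** 2).sum() - all_x.sum() ** 2 / n
ss_between = sum(g.sum() ** 2 / len(g) for g in groups) - all_x.sum() ** 2 / n
ss_within = ss_total - ss_between

ms_between = ss_between / (k - 1)
ms_within = ss_within / (n - k)
f_ratio = ms_between / ms_within

print(f"F (by hand) = {f_ratio:.3f}")
print(f"F (scipy)   = {stats.f_oneway(*groups).statistic:.3f}")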

The one-way ANOVA test depends on the fact that MSbetween can be influenced by population
differences among means of the several groups. Since MSwithin compares values of each group to
its own group mean, the fact that group means might be different does not affect MSwithin.

The null hypothesis says that all groups are samples from populations having the same normal
distribution. The alternate hypothesis says that at least two of the sample groups come from
populations with different normal distributions. If the null hypothesis is
true, MSbetween and MSwithin should both estimate the same value.
Note

The null hypothesis says that all the group population means are equal. The
hypothesis of equal means implies that the populations have the same normal
distribution, because it is assumed that the populations are normal and that they
have equal variances.
F-Ratio or F Statistic

If MSbetween and MSwithin estimate the same value (following the belief that H0 is true), then the F-
ratio should be approximately equal to one. Mostly, just sampling errors would contribute to
variations away from one. As it turns out, MSbetween consists of the population variance plus a
variance produced from the differences between the samples. MSwithin is an estimate of the
population variance. Since variances are always positive, if the null hypothesis is
false, MSbetween will generally be larger than MSwithin. Then the F-ratio will be larger than one.
However, if the population effect is small, it is not unlikely that MSwithin will be larger in a given
sample.

The foregoing calculations were done with groups of different sizes. If the groups are the same
size, the calculations simplify somewhat and the F-ratio can be written as:

F-Ratio Formula when the groups are the same size

F = (n · s²x̄) / s²pooled

where …
 n = the sample size
 dfnumerator = k – 1
 dfdenominator = n – k
 s²pooled = the mean of the sample variances (pooled variance)
 s²x̄ = the variance of the sample means
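
As a sketch of this shortcut, the snippet below computes F = n · s²x̄ / s²pooled for three equal-sized hypothetical groups and confirms that it matches scipy.stats.f_oneway.

# A sketch of the equal-group-size shortcut: F = n * (variance of the group
# means) / (mean of the group variances). The data are hypothetical.
import numpy as np
from scipy import stats

groups = np.array([[6.0, 7.0, 6.5, 7.5],
                   [8.0, 8.5, 7.5, 9.0],
                   [5.0, 5.5, 6.0, 4.5]])
k, n = groups.shape                                   # k groups, n values per group

var_of_means = groups.mean(axis=1).var(ddof=1)        # variance of the k group means
pooled_var = groups.var(axis=1, ddof=1).mean()        # mean of the k sample variances
f_ratio = n * var_of_means / pooled_var

print(f"F (shortcut) = {f_ratio:.3f}")
print(f"F (scipy)    = {stats.f_oneway(*groups).statistic:.3f}")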
