Chapter 6 - Statistical Data


STATISTICAL ANALYSIS:

Evaluating the Data

1
• Experimentalists use statistical calculations
to sharpen their judgments concerning the
quality of experimental measurements.

• The most common applications of statistical
tests to the treatment of analytical results
include:

2
1. Defining a numerical interval around the
mean of a set of replicate analytical results
within which the population mean can be
expected to lie with a certain probability.
This interval is called the confidence
interval.

2. Determining the number of replicate
measurements required to ensure, at a
given probability, that an experimental
mean falls within a certain confidence
interval.
3
3
3. Estimating the probability that
(a) an experimental mean and a true value
or
(b) two experimental means
are different, that is, whether the difference is
real or simply the result of random error.
This test is particularly important for
discovering systematic errors in a method
and for determining whether two samples
come from the same source.
4
4. Deciding whether what appears to be an
outlier in a set of replicate measurements is,
with a certain probability, the result of a
gross error and should thus be rejected, or
whether it is a legitimate result that must be
retained in calculating the mean of the set.

5. Using the least-squares method for
constructing calibration curves.

5
Types of Laboratory Errors
• Systematic Error
• Random Error

Systematic Error
Represented by a constant bias between your
results and the true answer.

Example: making a solution with an
improperly calibrated volumetric flask.
6
Systematic Error….
Produced by a consistent problem in a
scientist's technique, an instrument, or a
procedure.

It is possible to eliminate them through good
laboratory practice.

7
Random Error
The result of random variation in experimental
data.

Present in all measurements, such as in
instrumental readings or experimental
conditions that cannot be controlled.

For instance, fluctuations due to static
electricity or air currents can cause random
errors when you are using a balance.
8
Random Error….

No measurement is totally free from
random errors.

However, it is possible to reduce the size of
these errors through the proper design of
experiments and the correct choice of
method for comparing data.

9
• Individual results from a set of
measurements are seldom the same, so a
central or “best” value is used for the set.
• 1st – the central value of a set should be
more reliable than any of the individual
results.
• 2nd – variation in the data should provide
a measure of the uncertainty associated
with the central result.
– The mean or the median may serve as the central
value for a set of replicated measurements.
10
• Mean – the average value of two or more
measurements.

• Median – the middle value in a set of
data that has been arranged in order of
size.
• Eg – Mean and median for six
replicate determinations of iron in
aqueous samples of a standard solution
containing 20.00 ppm of iron (III):
11
19.4, 19.5, 19.6, 19.8, 20.1 and 20.3 ppm

Mean = (19.4 + 19.5 + 19.6 + 19.8 + 20.1 + 20.3)/6 = 19.8 ppm

Median = (19.6 + 19.8)/2 = 19.7 ppm

12
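The mean/median calculation above can be sketched with Python's statistics module (an illustrative sketch, not part of the original slides; the data are the six iron results from this example):

```python
import statistics

# Six replicate results for iron, ppm Fe(III), from the example above
results = [19.4, 19.5, 19.6, 19.8, 20.1, 20.3]

mean = statistics.mean(results)      # average of all six values
median = statistics.median(results)  # middle value of the ordered set
                                     # (average of the two middle values here)

print(round(mean, 1), round(median, 1))  # 19.8 19.7
```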
• What is Precision?

– The closeness of the results to others that
have been obtained in exactly the same
way.

– Three terms are widely used to describe the
precision of a set of replicate data:
• Standard deviation,
• Variance, and
• Coefficient of variation.

13
– All these terms are a function of the
deviation from the mean, di, or just the
deviation, which is defined as

di = |xi − x̄|

14
• The relationship between the deviation from
the mean and the three precision terms is
given by the following equations.

• Standard deviation, s (or σ for the
population standard deviation):

15
when N is small:

s = sqrt( Σ(xi − x̄)² / (N − 1) )

when N → ∞:

σ = sqrt( Σ(xi − μ)² / N )

16
spooled – s for more than one set of data:

spooled = sqrt( (sum of squares of deviations for all sets) / (Ntotal − number of sets) )

sx̄ – standard deviation of the mean:

sx̄ = s / √N

17
• Variance
= s²

• Coefficient of Variation (CV) or
percent relative standard deviation (%RSD):

CV = (s / x̄) × 100%
18
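The three precision terms can be sketched in Python for the earlier iron data (a sketch for illustration only; `statistics.stdev` uses N − 1 degrees of freedom, matching the small-N formula on the previous slide):

```python
import statistics

# Replicate iron results, ppm Fe, from the earlier example
data = [19.4, 19.5, 19.6, 19.8, 20.1, 20.3]

s = statistics.stdev(data)             # standard deviation, s (N - 1 in denominator)
variance = s ** 2                      # variance = s^2
cv = 100 * s / statistics.mean(data)   # coefficient of variation, %RSD

print(round(s, 2), round(variance, 3), round(cv, 1))  # 0.35 0.126 1.8
```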
How about Accuracy?

Accuracy indicates the closeness of the
measurement to its true or accepted
value and is expressed by the error.

• What is the basic difference between
accuracy and precision?
19
• We may determine precision just by
replicating and repeating a measurement.

• On the other hand, we can never
determine accuracy exactly, because the
true value of a measured quantity can
never be known exactly. We must use
an accepted value instead.

• Accuracy is expressed in terms of either
absolute or relative error.
20
Absolute Error (E) in the measurement of a
quantity xi is given by the equation

E = xi − xt ; where
xt = the true or accepted value
of the quantity
Eg: Data from the determination of iron
concentration (in slide 12):
- The absolute error of the result immediately to the
left of the true value of 20.00 ppm is −0.2 ppm
Fe; the result at 20.1 ppm has an error of +0.1
ppm Fe.
21
Relative Error (Er)
The percent relative error is given by
the expression

Er = (xi − xt) / xt × 100%

Er is a more useful quantity than E.

Er may be expressed in percent, ppt, or
ppm, depending on the magnitude of the
result.
22
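The two error definitions above can be sketched as small Python helpers (an illustrative sketch; the function names are my own, and 20.00 ppm is the accepted value from the iron example):

```python
def absolute_error(xi, xt):
    """E = xi - xt, in the same units as the measurement."""
    return xi - xt

def relative_error_pct(xi, xt):
    """Er = (xi - xt)/xt x 100%."""
    return 100 * (xi - xt) / xt

xt = 20.00  # accepted value, ppm Fe
print(round(absolute_error(19.8, xt), 1))      # -0.2 ppm Fe
print(round(relative_error_pct(19.8, xt), 1))  # -1.0 %
```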
CONFIDENCE LIMIT (CL)
and INTERVAL
The true value of the mean, μ, for a population of
data can never be determined exactly, because such a
determination requires that an infinite number of
measurements be made.

Statistical theory, however, allows us to set limits
around an experimentally determined mean, within
which the population mean μ lies with a given
degree of probability.
23
• These limits are called confidence limits
(CL), and the interval they define is known
as the confidence interval (CI).

CL – define a numerical interval around x̄
that contains μ with a certain
probability.
Where
μ – the true value of the mean
x̄ – the experimentally determined mean
24
Random errors follow a Gaussian or normal distribution.
We are 95% certain that the true value falls within 2σ (infinite population),
IF there is no systematic error.

©Gary Christian,
Analytical Chemistry,
6th Ed. (Wiley) Fig. 3.2 Normal error curve.
The quantity z represents the deviation of
a result from the mean for a population
of data, measured relative to the standard
deviation:

z = (x − μ)/σ
26
CL for SINGLE and FEW
MEASUREMENTS
For a single measurement:
CL = x ± zσ
For the mean of N measurements:
CL = x̄ ± zσ/√N
27
Example 2.1
The mercury in samples of seven fish
taken from Chesapeake Bay was
determined by a method based on the
absorption of radiation by gaseous
elemental mercury (Table 2.1).

28
Table 2.1 Results for Example 2.1

Specimen | No. of samples measured | Hg content, ppm | Mean, ppm Hg | Sum of squares of deviations from mean
1 | 3 | 1.80, 1.58, 1.64 | 1.673 | 0.0258
2 | 4 | 0.96, 0.98, 1.02, 1.10 | 1.015 | 0.0115
3 | 2 | 3.13, 3.35 | 3.240 | 0.0242
4 | 6 | 2.06, 1.93, 2.12, 2.16, 1.89, 1.95 | 2.018 | 0.0611
5 | 4 | 0.57, 0.58, 0.64, 0.49 | 0.570 | 0.0114
6 | 5 | 2.35, 2.44, 2.70, 2.48, 2.44 | 2.482 | 0.0685
7 | 4 | 1.11, 1.15, 1.22, 1.04 | 1.130 | 0.0170
Total = 28 | Sum of squares = 0.2196
29
Example 2.1
Calculate
a) A pooled estimate of the standard deviation for
the method, based on the first three columns of
data.

b) The 80% and 95% confidence limits for
i. The first entry (1.80 ppm Hg)
ii. The mean value (1.67 ppm Hg) for specimen 1
in the sample. Assume that in each part s = 0.10
is a good estimate of σ (s → σ).
30
Solution
a) spooled = sqrt( 0.2196 / (28 − 7) ) = 0.10 ppm Hg

b) i. For the single entry, 1.80 ppm Hg (z values
from the table of CL for various values of z,
Table 2.2):
80% CL = 1.80 ± (1.29)(0.10) = 1.80 ± 0.13 ppm Hg
95% CL = 1.80 ± (1.96)(0.10) = 1.80 ± 0.20 ppm Hg

From these calculations, we conclude that it is
80% probable that μ, the population mean (and,
in the absence of determinate error, the true
value), lies in the interval between 1.67 and 1.93
ppm Hg. Furthermore, there is a 95% chance
that it lies in the interval between
1.60 and 2.00 ppm Hg. 31
Table 2.2 Confidence Levels for Various Values of z

Confidence Level, % | z
50 | 0.67
68 | 1.00
80 | 1.29
90 | 1.64
95 | 1.96
95.4 | 2.00
99 | 2.58
99.7 | 3.00
99.9 | 3.29

32
b) ii. For the mean of the three measurements,
1.67 ppm Hg (z values from Table 2.2):
80% CL = 1.67 ± (1.29)(0.10)/√3 = 1.67 ± 0.07 ppm Hg
95% CL = 1.67 ± (1.96)(0.10)/√3 = 1.67 ± 0.11 ppm Hg
33
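Example 2.1 can be sketched in Python (an illustrative sketch; the totals come from Table 2.1, and the z values 1.29 and 1.96 are the 80% and 95% entries of Table 2.2):

```python
import math

# Pooled standard deviation from Table 2.1 totals
sum_squares = 0.2196  # total sum of squared deviations, all seven specimens
n_total = 28          # total number of measurements
n_sets = 7            # number of specimens

s_pooled = math.sqrt(sum_squares / (n_total - n_sets))

def cl(x, z, s, n=1):
    """Confidence limits x +/- z*s/sqrt(n); n = 1 for a single result."""
    half = z * s / math.sqrt(n)
    return x - half, x + half

# b(i): single result 1.80 ppm Hg, with s = 0.10 taken as sigma
lo80, hi80 = cl(1.80, 1.29, 0.10)       # 80% CL
lo95, hi95 = cl(1.80, 1.96, 0.10)       # 95% CL
# b(ii): mean of three results, 1.67 ppm Hg
lo80m, hi80m = cl(1.67, 1.29, 0.10, n=3)

print(round(s_pooled, 2))              # ~0.10 ppm Hg
print(round(lo80, 2), round(hi80, 2))  # ~1.67 1.93
```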
Example 2.2
How many replicate measurements of specimen 1 in Example
2.1 are needed to decrease the 95% confidence interval to ±
0.07 ppm Hg?

Solution
The 95% CI is x̄ ± zσ/√N, so we require
(1.96)(0.10)/√N ≤ 0.07
√N ≥ (1.96)(0.10)/0.07 = 2.80
N ≥ (2.80)² = 7.84 ≈ 8

34
We conclude that eight measurements would provide a
slightly better than 95% chance of the population mean
lying within ± 0.07 ppm of the experimental mean.

Table 2.3 Relationship between N and CI

Number of measurements averaged, N | Relative size of confidence interval
1 | 1.00
2 | 0.71
3 | 0.58
4 | 0.50
5 | 0.45
6 | 0.41
10 | 0.32
35
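The N calculation of Example 2.2 can be sketched as a one-line function (an illustrative sketch; the function name is my own):

```python
import math

def n_required(z, s, target):
    """Smallest N for which the half-width z*s/sqrt(N) is <= target."""
    return math.ceil((z * s / target) ** 2)

# Example 2.2: z = 1.96 (95%), s = 0.10 ppm Hg, target half-width 0.07 ppm Hg
print(n_required(1.96, 0.10, 0.07))  # 8
```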
Finding the CI when σ is
unknown
Often, we are faced with limitations in time or in the
amount of available sample that prevent us from
accurately estimating σ.
In such cases, a single set of replicate measurements
must provide not only a mean but also an estimate
of precision.
As indicated earlier, s calculated from a small set of
data may be quite uncertain. Thus, CIs are necessarily
broader when a good estimate of σ is not available.
36
Thus we use an important statistical parameter, t, which is
defined in the same way as z except that s is substituted
for σ:

t = (x̄ − μ) / (s/√N)

The CL for the mean of N replicate measurements can be
calculated from t by an equation similar to the earlier CL
equation (t is substituted for z):

CL = x̄ ± ts/√N
37
Select a confidence level (95% is good) for the number of samples analyzed
(= degrees of freedom + 1).
Confidence limit = x̄ ± ts/√N.
It depends on the precision, s, and the confidence level you select.

Table 2.4

38
©Gary Christian, Analytical Chemistry, 6th Ed. (Wiley)
Example 2.3

You have obtained the following data for the alcohol
content of a sample of blood: % C2H5OH: 0.084,
0.089 and 0.079. Calculate the 95% confidence
limits for the mean, assuming (a) that you know
nothing about the precision of the method and (b) that, on
the basis of previous experience, you know that σ =
0.005% C2H5OH and that s is a good estimate of σ.
39
(a) Here, x̄ = 0.252/3 = 0.084, and
s = sqrt( Σ(xi − x̄)²/(N − 1) ) = 0.0050% C2H5OH.
Table 2.4 indicates that t = 4.30 for two degrees of
freedom and 95% confidence. Thus,
95% CL = x̄ ± ts/√N = 0.084 ± (4.30)(0.0050)/√3
= 0.084 ± 0.012% C2H5OH
40
(b) Because s = 0.005% is a good estimate of
σ,
95% CL = x̄ ± zσ/√N = 0.084 ± (1.96)(0.0050)/√3
= 0.084 ± 0.006% C2H5OH

Note that sure knowledge of σ decreases the
confidence interval by a significant amount.
41
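Part (a) of Example 2.3 can be sketched in Python (an illustrative sketch; t = 4.30 is taken from Table 2.4 for 2 degrees of freedom and 95% confidence, rather than computed):

```python
import math
import statistics

# Blood-alcohol replicates, % C2H5OH, from Example 2.3
data = [0.084, 0.089, 0.079]
mean = statistics.mean(data)   # x-bar
s = statistics.stdev(data)     # s with N - 1 degrees of freedom

t = 4.30                       # Table 2.4: 95%, N - 1 = 2 degrees of freedom
half_width = t * s / math.sqrt(len(data))

print(round(mean, 3), round(s, 3), round(half_width, 3))  # 0.084 0.005 0.012
```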
STATISTICAL AIDS TO
HYPOTHESIS TESTING
• Much of scientific and engineering
endeavor is based on hypothesis testing.
• To explain an observation – a hypothetical
model is advanced and tested
experimentally to determine its validity.
– If the results from the experiment do not support
the model – we reject it and seek a new
hypothesis.

42
– If agreement is found – the hypothetical model
serves as the basis for further experiments.
• Experimental results seldom agree exactly
with those predicted from a theoretical
model.
• Consequently, scientists and engineers
frequently must judge whether a numerical
difference is a manifestation of the random
errors inevitable in all measurements.
• Certain statistical tests are useful in
sharpening these judgments.
43
Tests of this kind make use of a null
hypothesis.

Null hypothesis – postulates that two
observed quantities are the same.

The probability of the observed difference
appearing as a result of random error is then
computed from a probability distribution.

44
These probability levels are often called
significance levels (α).

The confidence level as a percentage is
related to α and is given by
(1 − α)100%

The kinds of testing that chemists use most
often include the comparison of
45
1) The mean of an experimental data set (x̄) with
what is believed to be the true value (μ);

2) The means (x̄1 and x̄2) or the standard
deviations (s1 and s2) from two sets of
data;

3) The mean (x̄) to a predicted or theoretical
value.

46
Comparing an Experimental
Mean with the True Value
• A common way of testing for bias in an
analytical method is to use the method to
analyze a sample whose composition is
accurately known.
• Bias in an analytical method is illustrated in
Figure 3.2.
• Method A has no bias, so the population mean
μA is the true value (xt).
• Method B has a systematic error, or bias, that is
given by 47
bias = μB – xt = μB – μA

Note that bias affects all the data in the set in
the same way and that it can be either positive
or negative.

In testing for bias by analyzing a sample
whose analyte concentration is known
exactly, it is likely that the experimental mean x̄
will differ from the accepted value xt, as
shown in the figure.
48
Figure 3.2 Illustration of bias: two frequency
curves, A (centered on μA = xt) and B (centered
on μB), plotted as relative frequency, dN/N,
against the analytical result; the offset between
the curves is the bias = μB – xt = μB – μA.
49
• The judgment must then be made whether
this difference is the consequence of
random error or, alternatively, of a systematic
error.
• In treating this type of problem statistically,
the difference x̄ – xt is compared with the
difference that could be caused by random
error.
• If the observed difference (x̄ – xt) is less than that
computed for a chosen probability – the null
hypothesis that x̄ and xt are the same
cannot be rejected (i.e., the difference is not
significant).
50
• If x̄ – xt is significantly larger than either the
expected or the critical value, we may assume
that the difference is real and the systematic
error is significant.
• The critical value for rejecting the null
hypothesis is calculated by rewriting the equation
for CL in slide 32.

• If a good estimate of σ is available, the equation
can be modified by replacing t with z and s
with σ. 51
Example 2.4
A new procedure for the rapid determination
of sulfur (S) in kerosenes was tested on a
sample known from its method of preparation
to contain 0.123% S (xt). The results were % S
= 0.112, 0.118, 0.115 and 0.119. Do the data
indicate that there is bias in the method?

52
Solution
x̄ = (0.112 + 0.118 + 0.115 + 0.119)/4 = 0.116% S
x̄ – xt = 0.116 – 0.123 = –0.007% S
s = sqrt( Σ(xi – x̄)²/(N – 1) ) = sqrt(3.0 × 10⁻⁵/3) = 0.0032% S
53
• From Table 2.4 (slide 31), we find that at
the 95% confidence level, t has a value of
3.18 for three degrees of freedom.
• Thus we can calculate a test value of t from
our data and compare it to the values given
in Table 2.4 at the desired confidence
level.
• The t test value is calculated from

t = (x̄ – xt)√N / s = (–0.007)(√4)/0.0032 = –4.375

54
Since 4.375 > 3.18 (tcrit), we can conclude
that a difference this large is significant and
reject the null hypothesis at the confidence
level chosen (95%).

At the 99% confidence level, tcrit = 5.84 (Table
2.4). Since 4.375 < 5.84, we would accept the
null hypothesis at the 99% confidence level
and conclude that there is no difference
between the results. 55
• Note that the probability (significance) level
(0.05 and 0.01) is the probability of making an
error by rejecting the null hypothesis.
56
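Example 2.4 can be sketched in Python (an illustrative sketch; note that computing s to full precision gives t ≈ 4.43 rather than the slide's 4.375, which comes from first rounding s to 0.0032; the conclusions are identical either way):

```python
import math
import statistics

# Sulfur results, % S, and the accepted value from Example 2.4
data = [0.112, 0.118, 0.115, 0.119]
xt = 0.123

mean = statistics.mean(data)
s = statistics.stdev(data)
# t test value: |x-bar - xt| * sqrt(N) / s
t_exp = abs(mean - xt) * math.sqrt(len(data)) / s

# Compare with t_crit = 3.18 (95%) and 5.84 (99%) for 3 degrees of freedom
print(round(t_exp, 1), t_exp > 3.18, t_exp < 5.84)
```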
Comparing Two Experimental
Means
• Used to judge whether a difference in the
means of two sets of identical analyses is
real and constitutes evidence that the
samples are different, or whether the
discrepancy is simply a consequence of
random errors in the two sets.

57
Eg
set data of Material 1 Set Data of Material 2
x1 x1
x2 x2
x3 x3
x4 x4
. .
. .
. .
. .

N1 N2

58
• Thus, the variance of the difference (d =
x̄1 – x̄2) between the means is given by

sd² = sm1² + sm2²

• By substituting the values of sd, sm1, and sm2
into this equation, we have

sd² = s1²/N1 + s2²/N2

59
If we then assume that the pooled standard
deviation spooled is a good estimate of both
s1 and s2, then

sd² = spooled²/N1 + spooled²/N2

and

sd = spooled · sqrt( (N1 + N2)/(N1·N2) )

60
Substituting this equation into the CL equation in slide 45 (and
also x̄2 for xt), we find that the test value of t is given by

t = (x̄1 – x̄2) / ( spooled · sqrt( (N1 + N2)/(N1·N2) ) )

where

spooled = sqrt( (Σ(xi – x̄1)² + Σ(xj – x̄2)²) / (N1 + N2 – 2) )
61
We then compare our test value of t with the critical
value obtained from Table 2.5 for the
particular confidence level desired.
The number of degrees of freedom for finding the
critical value of t in Table 2.5 is N1 + N2 – 2.
If the absolute value (from calculation) of the test
statistic is smaller than the critical value, the null
hypothesis is accepted and no significant
difference between the means has been
demonstrated, and vice versa.
If a good estimate of σ is available, the equation can be
modified by inserting z for t and σ for s.

62
F test: F = s1²/s2².
You compare the variances of two different methods to see if there is a
significant difference in the methods, at the 95% confidence level.

Table 2.5

©Gary Christian, Analytical Chemistry, 6th Ed. (Wiley)


63
Exercise
Two barrels of wine were analyzed for their
alcohol content to determine whether they
were from different sources. On the basis of
six analyses, the average content of the first
barrel was established to be 12.61% ethanol.
Four analyses of the second barrel gave a
mean of 12.53% alcohol. The ten analyses
yielded a pooled value of s = 0.070%. Do the
data indicate a difference between the wines?
64
texp = (12.61 – 12.53) / ( 0.070 × sqrt(1/6 + 1/4) ) = 1.77

tcrit = 2.306 (95%, 6 + 4 – 2 = 8 degrees of freedom); so texp < tcrit

The null hypothesis is accepted: there is no
significant difference between the two
barrels. The barrels are from the same source. 65
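The wine exercise can be sketched in Python (an illustrative sketch; tcrit = 2.306 is the tabulated 95% value for 8 degrees of freedom rather than a computed quantity):

```python
import math

# Two-means t test for the wine exercise
x1, n1 = 12.61, 6   # mean and N for barrel 1, % ethanol
x2, n2 = 12.53, 4   # mean and N for barrel 2, % ethanol
s_pooled = 0.070    # pooled standard deviation from all ten analyses

t_exp = abs(x1 - x2) / (s_pooled * math.sqrt(1 / n1 + 1 / n2))
t_crit = 2.306      # 95%, N1 + N2 - 2 = 8 degrees of freedom

print(round(t_exp, 2), t_exp < t_crit)  # 1.77 True -> no significant difference
```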
Detecting Gross Errors
• A data point that differs excessively from
the mean in a data set is termed an outlier.
• When a data set contains an outlier, a decision
must be made whether to retain or reject it.
• It is an unfortunate fact that no universal
rule can be invoked to settle the question of
retention or rejection.
66
Using the Q test

The Q test is a simple and widely used
statistical test.
In this test, the absolute value of the
difference between the questionable result xq
and its nearest neighbor xn is divided by the
spread w of the entire set to give the quantity
Qexp:

Qexp = |xq – xn| / w
67
• This ratio is then compared with rejection
values Qcrit found in Table 2.6.

• If Qexp is greater than Qcrit, the questionable


result can be rejected with the indicated
degree of confidence (Table 2.6)

68
QCalc = outlier difference/range.
If QCalc > QTable, then reject the outlier as due to a gross error.

©Gary Christian, Analytical Chemistry,
6th Ed. (Wiley)
69
Example 2.5
The analysis of a calcite sample yielded CaO
percentages of 55.95, 56.00, 56.04, 56.08, and
56.23. The last value appears anomalous;
should it be retained or rejected?
70
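The Q test for Example 2.5 can be sketched in Python (an illustrative sketch; the critical value 0.71 is an assumed Table 2.6 entry for five observations at 95% confidence, since the table itself is not reproduced here):

```python
def q_exp(values):
    """Qexp for the largest value treated as the suspected outlier."""
    v = sorted(values)
    spread = v[-1] - v[0]          # w, range of the entire set
    return (v[-1] - v[-2]) / spread  # |xq - xn| / w

# CaO percentages from Example 2.5; 56.23 is the questionable result
data = [55.95, 56.00, 56.04, 56.08, 56.23]
q = q_exp(data)
q_crit = 0.71  # assumed Qcrit for N = 5, 95% confidence

# Qexp < Qcrit -> the outlier is retained
print(round(q, 2), q < q_crit)
```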
Eg:
To test the null hypothesis at the 5% probability
level (observed differences this large occur 5 times in 100):

If the observed difference exceeds the 5% critical value
– the null hypothesis is considered questionable
and the difference is judged to be significant.

If the observed difference is below the 5% critical value
– the null hypothesis can be accepted and the
difference in the data is not significant. 82
