3 Classification of Quantitative Methods of Analysis
The results of a typical quantitative analysis are computed from two measurements. The first is the mass or the volume of the sample being analyzed. The second is the measurement of some quantity that is proportional to the amount of analyte in the sample, such as mass, volume, intensity of light, or electrical charge. This second measurement usually completes the analysis, and analytical methods are usually classified according to the nature of this final measurement.
1. Gravimetric Method- determines the mass of the analyte or some compound chemically related to
it.
2. Volumetric Method - measures the volume of a solution containing sufficient reagent to react
completely with the analyte.
3. Electroanalytical Methods - measure electrical properties such as potential, current, resistance,
and quantity of electrical charge.
4. Spectroscopic Methods - explore the interaction between electromagnetic radiation and analyte
atoms or molecules or the emission of radiation by analytes.
5. Miscellaneous Methods - measure such quantities as the mass-to-charge ratio of ions by mass
spectrometry, rate of radioactive decay, heat of reaction, rate of reaction, sample thermal
conductivity, optical activity, and refractive index.
1. Choosing a Method
The first step in any quantitative analysis is the selection of a method. The choice can be difficult
and requires experience as well as intuition.
An important consideration in the selection process is the level of accuracy required. Unfortunately,
high reliability nearly always requires a large investment of time. The selected method usually represents
a compromise between the accuracy and economics (money available for the analysis).
A second consideration related to economic factors is the number of samples that will be
analyzed. If there are many samples, we can afford to spend a good deal of time in such preliminary
operations as assembling and calibrating instruments and equipment, as well as in preparing standard
solutions. If we have only a single sample or just a few samples, it may be more appropriate to select a
procedure that avoids or minimizes such preliminary steps.
The complexity of the sample and the number of components in the sample always influence the
choice of method to some degree.
Assay – is the process of determining how much of a given sample is the material indicated by its name.
Example: A zinc alloy is assayed for its zinc content.
Sampling – involves obtaining a small mass of a material whose composition accurately represents the
bulk of the material being sampled.
Replicate samples – refers to the portions of the material of approximately the same size that are carried
through an analytical procedure at the same time and in the same way.
6. Eliminating Interferences
Once we have the sample in solution and converted the analyte to an appropriate form for
measurement, the next step is to eliminate substances from the sample that may interfere with
measurement.
Few chemical or physical properties of importance in chemical analysis are unique to a single
chemical species. Instead, the reactions used and the properties measured are characteristic of a group of
elements or compounds. Species other than the analyte that affect the final measurement are called
interferences, or interferents. A scheme must be devised to isolate the analytes from interferences before
the final measurement is made. No hard and fast rules can be given for eliminating interference. This
problem can certainly be the most demanding aspect of an analysis.
Interference – is a species that causes an error by enhancing or attenuating (making smaller) the quantity
being measured in an analysis.
7. Calibration and Measurement
All analytical results depend on a final measurement, X, of a physical or chemical property of the
analyte. This property must vary in a known and reproducible way with the concentration cA of the analyte.
Ideally, the measurement of the property is directly proportional to the concentration, that is,
cA = kX
where k is a proportionality constant. With a few exceptions, analytical methods require the empirical
determination of k with chemical standards for which cA is known. The process of determining k is thus an
important step in most analyses; this step is called a calibration.
Calibration – is referred to as the empirical determination of the relationship between the quantity
measured in an analysis and the analyte concentration.
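As a concrete sketch of this step, the Python snippet below performs a one-point calibration with hypothetical numbers: k is first determined from a standard whose concentration cA is known, and is then used to convert the measured property X of an unknown into a concentration. (A real calibration would normally use several standards and a least-squares fit.)

```python
# One-point calibration sketch based on c_A = k * X (hypothetical values).
c_std = 10.00   # known analyte concentration of the standard, ppm
x_std = 0.482   # measured property X for the standard (e.g., absorbance)

k = c_std / x_std            # empirical proportionality constant

x_unknown = 0.355            # measured property X for the unknown sample
c_unknown = k * x_unknown    # analyte concentration in the unknown

print(f"k = {k:.2f}, unknown concentration = {c_unknown:.2f} ppm")
```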
8. Calculating Results
Computing analyte concentrations from experimental data is ordinarily a simple and straightforward
task, particularly with modern calculators or computers. These computations are based on the raw
experimental data collected in the measurement step, the characteristics of the measurement instruments,
and the stoichiometry of the analytical reaction.
Illustration 1: Effect of errors in an analysis, showing results for the quantitative determination of iron. Six equal portions of an aqueous solution with a “known” concentration of 20.00 ppm of iron(III) were analyzed in exactly the same way.
The results range from a low of 19.4 ppm to a high of 20.3 ppm of iron. The average, or mean
value, x̄, of the data is 19.78 ppm, which rounds to 19.8 ppm. Every measurement is influenced by many
uncertainties, which combine to produce a scatter of results. Because measurement uncertainties can
never be completely eliminated, measurement data can only give us an estimate of the “true” value.
However, the probable magnitude of the error in a measurement can often be evaluated. It is then
possible to define limits within which the true value of a measured quantity lies with a given level of
probability. Estimating the reliability of experimental data is extremely important whenever we collect
laboratory results because data of unknown quality are worthless. On the other hand, results that might not
seem especially accurate may be of considerable value if the limits of uncertainty are known. Unfortunately,
there is no simple and widely applicable method for determining the reliability of data with absolute
certainty. Often, estimating the quality of experimental results requires as much effort as collecting the data.
Reliability can be assessed in several ways. Experiments designed to reveal the presence of errors
can be performed. Standards of known composition can be analyzed, and the results compared with the
known composition. A few minutes consulting the chemical literature can reveal useful reliability
information. Calibrating equipment usually enhances the quality of data. Finally, statistical tests can be
applied to the data.
In order to improve the reliability and to obtain information about the variability of results, two to five
portions (replicates) of a sample are usually carried through an entire analytical procedure. Individual
results from a set of measurements are seldom the same, so we usually consider the “best” estimate to be
the central value for the set. We justify the extra effort required to analyze replicates in two ways. First, the
central value of a set should be more reliable than any of the individual results.
Usually, the mean or the median is used as the central value for a set of replicate measurements.
Second, an analysis of the variation in the data allows us to estimate the uncertainty associated with the
central value.
The Mean and the Median
The mean, x̄, of a set of replicate measurements is obtained by dividing the sum of the individual values by the number of measurements in the set:

x̄ = (Σ xi) / N

where xi represents the individual values of x making up the set of N replicate measurements.
The median is the middle result when replicate data are arranged in increasing or decreasing order. There
are equal numbers of results that are larger and smaller than the median. For an odd number of results, the
median can be found by arranging the results in order and locating the middle result. For an even number,
the average value of the middle pair is used.
Example: Calculate the mean and median for the data shown in Illustration 1.
The mean is the sum of the six results divided by six: x̄ = 19.78 ppm. Because the set contains an even number of measurements, the median is the average of the central pair, which gives 19.7 ppm.
In ideal cases, the mean and median are identical. However, when the number of measurements in the set
is small, the values often differ.
Replicates are samples of about the same size that are carried through an analysis in exactly the same
way.
The mean of two or more measurements is their average value.
The median is the middle value in a set of data that has been arranged in numerical order. The median is
used advantageously when a set of data contains an outlier, a result that differs significantly from the
others in the set.
An outlier can have a significant effect on the mean of the set but has no effect on the median.
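The short Python sketch below demonstrates this with made-up replicate data: replacing one result with a gross-error outlier shifts the mean markedly but leaves the median untouched.

```python
import statistics

# Hypothetical replicate results (ppm); in the second set the last
# result has been replaced by a gross-error outlier.
clean = [19.4, 19.5, 19.6, 19.8, 20.1]
with_outlier = [19.4, 19.5, 19.6, 19.8, 24.0]

print(statistics.mean(clean), statistics.median(clean))                # ~19.68  19.6
print(statistics.mean(with_outlier), statistics.median(with_outlier))  # ~20.46  19.6
```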
Precision and Accuracy
Illustration 2: Illustration of accuracy and precision using the pattern of darts on a dartboard. Note that we
can have very precise results (upper right) with a mean that is not accurate and an accurate
mean (lower left) with data points that are imprecise.
Precision describes the reproducibility of measurements, that is, the closeness of results that have been
obtained in exactly the same way. Generally, the precision of a measurement is readily determined by
simply repeating the measurement.
The precision of a set of data is described by the standard deviation, the variance, and the coefficient of
variation, all of which are functions of the deviation of each result from the mean, di = |xi − x̄| (a short
computational sketch follows the table).

Data: xi (ppm)   |di| (ppm)
      19.4       0.4
      19.5       0.3
      19.6       0.2
      20.0       0.2
      20.3       0.5
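As a minimal sketch (Python), the snippet below computes the mean, the deviations from the mean, and the three precision measures for the results tabulated above; the variable names are illustrative.

```python
import statistics

# Replicate results from the table above (ppm).
data = [19.4, 19.5, 19.6, 20.0, 20.3]

mean = statistics.mean(data)
deviations = [abs(x - mean) for x in data]   # d_i = |x_i - mean|
s = statistics.stdev(data)                   # sample standard deviation
variance = s ** 2                            # variance = s^2
cv = 100 * s / mean                          # coefficient of variation, %

print(f"mean = {mean:.2f} ppm, deviations = {deviations}")
print(f"s = {s:.2f} ppm, s^2 = {variance:.3f}, CV = {cv:.1f}%")
```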
Accuracy is the closeness of a measurement to its true or accepted value and is expressed in terms of
either absolute or relative error.
Absolute Error
The absolute error, E, in the measurement of a quantity x is given by the equation

E = xi − xt

where xi is the measured value of the quantity and xt is the true or accepted value of the quantity.
From the data in Illustration 1, the absolute error of the result immediately to the left of the true value of 20.0
ppm is −0.2 ppm Fe; the result at 20.1 ppm is in error by +0.1 ppm Fe.
Note that we keep the sign in stating the error. The negative sign in the first case shows that the
experimental result is smaller than the accepted value, and the positive sign in the second case shows that
the experimental result is larger than the accepted value.
The absolute error of a measurement is the difference between the measured value and the true
value. The sign of the absolute error tells you whether the value in question is high or low. If the
measurement result is low, the sign is negative; if the measurement result is high, the sign is positive.
Relative Error
The relative error, Er, is often a more useful quantity than the absolute error. The percent relative
error is given by the expression

Er = (xi − xt) / xt × 100%

Relative error is also expressed in parts per thousand (ppt). Thus, the relative error for the mean of the
data in Illustration 1 is

Er = (19.8 − 20.0) / 20.0 × 100% = −1%, or −10 ppt
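A minimal Python rendering of these two definitions, applied to the mean and the true value from Illustration 1:

```python
def absolute_error(x_measured, x_true):
    """Absolute error E = x_i - x_t; the sign shows whether the result is high or low."""
    return x_measured - x_true

def relative_error_percent(x_measured, x_true):
    """Percent relative error E_r = (x_i - x_t) / x_t * 100."""
    return (x_measured - x_true) / x_true * 100

print(absolute_error(19.8, 20.0))          # -0.2 ppm Fe (approximately, floating point)
print(relative_error_percent(19.8, 20.0))  # -1.0 %, i.e., -10 ppt
```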
Precision
- describes the agreement among results that have been obtained in exactly the same way
- is determined by simply replicating a measurement

Accuracy
- describes the agreement between a result and its true value
- can never be determined exactly because the true value of a quantity can never be known exactly
Illustration 3. Absolute error in the micro-Kjeldahl determination of nitrogen. Each dot represents the error
associated with a single determination. Each vertical line labeled (x̄i − xt) is the absolute
average deviation of the set from the true value. (Data from C. O. Willits and C. L. Ogg, J.
Assoc. Offic. Anal. Chem., 1949, 32, 561.)
The dots show the absolute errors of replicate results obtained by four analysts. Note that analyst 1
obtained relatively high precision and high accuracy. Analyst 2 had poor precision but good accuracy. The
results of analyst 3 are surprisingly common. The precision is excellent, but there is significant error in the
numerical average for the data. Both the precision and the accuracy are poor for the results of analyst 4.
Chemical analyses are affected by two types of errors: random (indeterminate) error and
systematic (determinate) error.
1. Random (or indeterminate) error causes data to be scattered more or less symmetrically around a
mean value. The random error in a measurement is reflected in, and limits, its precision.
2. Systematic (or determinate) error causes the mean of a data set to differ from the accepted value;
it is reflected in the accuracy of results.
Gross errors differ from indeterminate and determinate errors. They occur only occasionally, are
often large, and may cause a result to be either high or low. They are often the product of human
errors.
For example, if part of a precipitate is lost before weighing, analytical results will be low. Touching
a weighing bottle with your fingers after its empty mass is determined will cause a high mass reading for a
solid weighed in the contaminated bottle.
Gross errors lead to outliers, results that appear to differ markedly from all other data in a set of
replicate measurements. Various statistical tests can be performed to determine if a result is an outlier.
An outlier is an occasional result in replicate measurements that differs significantly from the other results.
Systematic Errors
Systematic errors have a definite value, an assignable cause, and are of the same magnitude for
replicate measurements made in the same way. They lead to bias in measurement results.
Bias measures the systematic error associated with an analysis. It affects all of the data in a set in the
same way, and it bears a sign: negative if it causes the results to be low and positive otherwise.
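As a one-line illustration (Python, with hypothetical replicate results and true value), bias can be estimated as the signed difference between the mean of the replicates and the accepted value:

```python
# Hypothetical replicate results (ppm) and accepted true value.
results = [19.4, 19.5, 19.6, 19.8, 20.1, 20.3]
x_true = 20.0

bias = sum(results) / len(results) - x_true
print(f"bias = {bias:+.2f} ppm")   # negative sign: the results run low
```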
Instrumental Errors
All measuring devices are potential sources of systematic errors.
For example, pipets, burets, and volumetric flasks may hold or deliver volumes slightly different
from those indicated by their graduations. These differences arise from using glassware at a
temperature that differs significantly from the calibration temperature, from distortions in container
walls due to heating while drying, from errors in the original calibration, or from contaminants on
the inner surfaces of the containers.
Calibration eliminates most systematic errors of this type.
Electronic instruments are also subject to systematic errors. These can arise from several
sources. For example, errors may emerge as the voltage of a battery-operated power supply
decreases with use.
Errors can also occur if instruments are not calibrated frequently or if they are calibrated
incorrectly.
The experimenter may also use an instrument under conditions where errors are large. For
example, a pH meter used in strongly acidic media is prone to an acid error.
Temperature changes cause variation in many electronic components, which can lead to drifts and
errors.
Some instruments are susceptible to noise induced from the alternating current (ac) power lines,
and this noise may influence precision and accuracy.
In many cases, instrumental errors are detectable and correctable.
Method Errors
The nonideal chemical or physical behavior of the reagents and reactions on which an analysis is
based often introduce systematic method errors. Such sources of nonideality include the slowness
of some reactions, the incompleteness of others, the instability of some species, the lack of
specificity of most reagents, and the possible occurrence of side reactions that interfere with the
measurement process.
As an example, a common method error in volumetric analysis results from the small excess of
reagent required to cause an indicator to change color and signal the equivalence point. The
accuracy of such an analysis is thus limited by the very phenomenon that makes the titration
possible.
Of the three types of systematic errors encountered in a chemical analysis, errors inherent in a
method are often the most difficult to detect and correct, and they are thus the most serious.
Personal Errors
Many measurements require personal judgments.
Examples include estimating the position of a pointer between two scale divisions, the color of a
solution at the end point in a titration, or the level of a liquid with respect to a graduation in a pipet
or buret.
Judgments of this type are often subject to systematic, unidirectional errors. For example, one
person may read a pointer consistently high, while another may be slightly slow in activating a
timer. Yet a third may be less sensitive to color changes, tending, for instance, to use excess
reagent in a volumetric analysis. Analytical procedures should always be adjusted so that any
known physical limitations of the analyst cause negligibly small errors.
Automation of analytical procedures can eliminate many errors of this type.
A universal source of personal error is prejudice, or bias. Most of us, no matter how honest, have a
natural, subconscious tendency to estimate scale readings in a direction that improves the
precision in a set of results. Alternatively, we may have a preconceived notion of the true value for
the measurement. We then subconsciously cause the results to fall close to this value.
Number bias is another source of personal error that varies considerably from person to person.
The most frequent number bias encountered in estimating the position of a needle on a scale
involves a preference for the digits 0 and 5. Also common is a prejudice favoring small digits over
large and even numbers over odd. Again, automated and computerized instruments can eliminate
this form of bias.
Color blindness is a good example of a limitation that could cause a personal error in a volumetric
analysis. A famous color-blind analytical chemist enlisted his wife to come to the laboratory to help
him detect color changes at end points of titrations.
Effect of Systematic Errors on Analytical Results
Systematic errors may be either constant or proportional. The magnitude of a constant error
stays essentially the same as the size of the quantity measured is varied. With constant errors, the
absolute error is constant with sample size, but the relative error varies when the sample size is
changed. Proportional errors increase or decrease according to the size of the sample taken for
analysis. With proportional errors, the absolute error varies with sample size, but the relative error stays
constant when the sample size is changed.
Constant Errors
The effect of a constant error becomes more serious as the size of the quantity measured
decreases.
The effect of solubility losses on the results of a gravimetric analysis illustrates this behavior.
The excess of reagent needed to bring about a color change during a titration is another example
of constant error. This volume, usually small, remains the same regardless of the total volume of
reagent required for the titration.
The relative error from this source becomes more serious as the total volume decreases.
One way of reducing the effect of constant error is to increase the sample size until the error
is acceptable.
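A short numerical sketch (Python, hypothetical volumes) makes the point: a fixed 0.05 mL indicator excess contributes a far larger relative error to a small titration than to a large one.

```python
excess = 0.05  # constant indicator-blank excess, mL (hypothetical)

for v_total in (5.0, 20.0, 50.0):          # total titrant volume, mL
    rel_err_pct = 100 * excess / v_total   # relative error from the constant excess
    print(f"{v_total:5.1f} mL titration -> relative error {rel_err_pct:.2f}%")
```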
Proportional Errors
A common cause of proportional errors is the presence of interfering contaminants in the sample.
For example, a widely used method for the determination of copper is based on the reaction of copper(II) ion with
potassium iodide to give iodine. The quantity of iodine is then measured and is proportional to the
amount of copper. Iron(III), if present, also liberates iodine from potassium iodide. Unless steps are
taken to prevent this interference, high results are observed for the percentage of copper because
the iodine produced will be a measure of the copper(II) and iron(III) in the sample. The size of this
error is fixed by the fraction of iron contamination, which is independent of the size of sample
taken. If the sample size is doubled, for example, the amount of iodine liberated by both the copper
and the iron contaminant is also doubled. Thus, the magnitude of the reported percentage of
copper is independent of sample size.
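The sketch below (Python, hypothetical compositions, and treating the iodine yield per unit mass of copper and iron as equal for simplicity) shows the proportional behavior described above: the relative error in the reported copper is the same at every sample size.

```python
cu_fraction = 0.80   # true copper mass fraction of the sample (hypothetical)
fe_fraction = 0.02   # iron(III) contaminant fraction that also liberates iodine

for sample_mass in (0.5, 1.0, 2.0):   # grams of sample taken
    true_cu = cu_fraction * sample_mass
    apparent_cu = (cu_fraction + fe_fraction) * sample_mass  # iodine from Cu + Fe
    rel_err_pct = 100 * (apparent_cu - true_cu) / true_cu
    print(f"{sample_mass:.1f} g sample -> relative error {rel_err_pct:.1f}%")
```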
2. Independent Analysis
If standard samples are not available, a second independent and reliable analytical method can be
used in parallel with the method being evaluated. The independent method should differ as much as
possible from the one under study. This practice minimizes the possibility that some common factor in the
sample has the same effect on both methods. Again, a statistical test must be used to determine whether
any difference is a result of random errors in the two methods or due to bias in the method under study.
3. Blank Determinations
A blank contains the reagents and solvents used in a determination, but no analyte. Often, many of the
sample constituents are added to simulate the analyte environment, which is called the sample matrix. In a blank
determination, all steps of the analysis are performed on the blank material. The results are then applied as a
correction to the sample measurements. Blank determinations reveal errors due to interfering contaminants from the
reagents and vessels employed in the analysis. Blanks are also used to correct titration data for the volume of
reagent needed to cause an indicator to change color.
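As a minimal sketch (Python, hypothetical titration data), applying a blank correction amounts to subtracting the blank reading from each sample reading before the result is computed:

```python
blank_volume = 0.04                      # mL of titrant consumed by the blank (hypothetical)
sample_volumes = [24.41, 24.46, 24.38]   # raw titration volumes for replicates, mL

corrected = [v - blank_volume for v in sample_volumes]
print(corrected)   # blank-corrected volumes used in the concentration calculation
```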
A blank solution contains the solvent and all of the reagents in an analysis. Whenever feasible,
blanks may also contain added constituents to simulate the sample matrix.
The term matrix refers to the collection of all the constituents in the sample.