
Classification of Quantitative Methods of Analysis

The results of a typical quantitative analysis are computed from two measurements:
 One is the mass or the volume of the sample being analyzed.
 The second is the measurement of some quantity that is proportional to the amount of analyte in the
sample, such as mass, volume, intensity of light, or electrical charge. This second measurement usually
completes the analysis, and analytical methods are classified according to the nature of this final
measurement.
1. Gravimetric Method - determines the mass of the analyte or some compound chemically related to
it.
2. Volumetric Method - measures the volume of a solution containing sufficient reagent to react
completely with the analyte.
3. Electroanalytical Methods - measure electrical properties such as potential, current, resistance,
and quantity of electrical charge.
4. Spectroscopic Methods - explore the interaction between electromagnetic radiation and analyte
atoms or molecules or the emission of radiation by analytes.
5. Miscellaneous Methods - measure such quantities as the mass-to-charge ratio of ions by mass
spectrometry, the rate of radioactive decay, the heat of reaction, the rate of reaction, sample thermal
conductivity, optical activity, and refractive index.

Typical Quantitative Analysis


A typical quantitative analysis includes the sequence of steps shown in the flow diagram. In some
instances, one or more of these steps can be omitted. For example, if the sample is already a liquid, we
can avoid the dissolution step. In the measurement step, we measure one of the physical properties. In the
calculation step, we find the relative amount of the analyte present in the samples. In the final step, we
evaluate the quality of the results and estimate their reliability.

1.Choosing a Method
The first step in any quantitative analysis is the selection of a method. The choice can be difficult
and requires experience as well as intuition.
An important consideration in the selection process is the level of accuracy required. Unfortunately,
high reliability nearly always requires a large investment of time. The selected method usually represents
a compromise between accuracy and economics (the money available for the analysis).
A second consideration related to economic factors is the number of samples that will be
analyzed. If there are many samples, we can afford to spend a good deal of time in such preliminary
operations as assembling and calibrating instruments and equipment, as well as in preparing standard
solutions. If we have only a single sample or just a few samples, it may be more appropriate to select a
procedure that avoids or minimizes such preliminary steps.
The complexity of the sample and the number of components in the sample always influence the
choice of method to some degree.

2. Acquiring the Sample/Sampling


To produce meaningful information, an analysis must be performed on a sample that has the same
composition as the bulk of material from which it was taken. When the bulk is large and heterogeneous,
great effort is required to get a representative sample.
Consider, for example, a railroad car containing 25 tons of silver ore. Buyer and seller of the ore
must agree on a price based primarily on the silver content of the shipment. The ore itself is inherently
heterogeneous, consisting of many lumps that vary in size as well as in silver content.
The assay of this shipment will be performed on a sample that weighs about one gram. For the
analysis to have significance, this small sample must have a composition that is representative of the 25
tons (or approximately 22,700,000 g) of ore in the shipment. Isolation of one gram of material that
accurately represents the average composition of the nearly 23,000,000 g of bulk sample is a difficult
undertaking that requires a careful, systematic manipulation of the entire shipment.
Many sampling problems are easier to solve than the one just described. Whether sampling is
simple or complex, however, the analyst must be sure that the laboratory sample is representative of the
whole before proceeding with the analysis. Sampling is frequently the most difficult step in an analysis and
the source of greatest error.
The final analytical result will never be any more reliable than the reliability of the sampling step.

Assay – is the process of determining how much of a given sample is the material indicated by its name.
Example: A zinc alloy is assayed for its zinc content.
Sampling – involves obtaining a small mass of a material whose composition accurately represents the
bulk of the material being sampled.

3. Preparing a Laboratory Sample


The third step in an analysis is to process the sample. Under certain circumstances, no sample
processing is required prior to the measurement step.
For example, once a water sample is withdrawn from a stream, a lake, or an ocean, the pH of the
sample can be measured directly. Under most circumstances, we must process the sample in one of
several different ways.
 Solid sample: is ground to decrease particle size, mixed to ensure homogeneity, and stored for
various lengths of time before analysis begins. Absorption or desorption of water may occur during
each step, depending on the humidity of the environment. Because any loss or gain of water
changes the chemical composition of solids, it is a good idea to dry samples just before starting an
analysis. Alternatively, the moisture content of the sample can be determined at the time of the
analysis in a separate analytical procedure.
 Liquid samples present a slightly different but related set of problems during the preparation
step. If such samples are allowed to stand in open containers, the solvent may evaporate and
change the concentration of the analyte.
 If the analyte is a gas dissolved in a liquid, as in our blood gas example, the sample container
must be kept inside a second sealed container, perhaps during the entire analytical procedure, to
prevent contamination by atmospheric gases. Extraordinary measures, including sample
manipulation and measurement in an inert atmosphere, may be required to preserve the integrity of
the sample.

4. Defining Replicate Samples


Most chemical analyses are performed on replicate samples whose masses or volumes have
been determined by careful measurements with an analytical balance or with a precise volumetric device.
Replication improves the quality of the results and provides a measure of their reliability.
Quantitative measurements on replicates are usually averaged, and various statistical tests are
performed on the results to establish their reliability.

Replicate samples – refers to the portions of the material of approximately the same size that are carried
through an analytical procedure at the same time and in the same way.

5. Preparing Solutions of the Samples: Physical and Chemical Changes


Most analyses are performed on solutions of the sample made with a suitable solvent. Ideally, the
solvent should dissolve the entire sample, including the analyte, rapidly and completely. The conditions of
dissolution should be sufficiently mild that loss of the analyte cannot occur. Unfortunately, many materials
that must be analyzed are insoluble in common solvents.
Examples include silicate minerals, high- molecular-mass polymers, and specimens of animal
tissue. Conversion of the analyte in such materials into a soluble form is often the most difficult and time-
consuming task in the analytical process. The sample may require heating with aqueous solutions of strong
acids, strong bases, oxidizing agents, reducing agents, or some combination of such reagents. It may be
necessary to ignite the sample in air or oxygen or to perform a high-temperature fusion of the sample in the
presence of various fluxes. Once the analyte is made soluble, we then ask whether the sample has a
property that is proportional to analyte concentration and that we can measure.

6. Eliminating Interferences
Once we have the sample in solution and converted the analyte to an appropriate form for
measurement, the next step is to eliminate substances from the sample that may interfere with
measurement.
Few chemical or physical properties of importance in chemical analysis are unique to a single
chemical species. Instead, the reactions used and the properties measured are characteristic of a group of
elements or compounds. Species other than the analyte that affect the final measurement are called
interferences, or interferents. A scheme must be devised to isolate the analytes from interferences before
the final measurement is made. No hard and fast rules can be given for eliminating interference. This
problem can certainly be the most demanding aspect of an analysis.

Interference – is a species that causes an error by enhancing or attenuating (making smaller) the quantity
being measured in an analysis.
7. Calibration and Measurement
All analytical results depend on a final measurement X of a physical or chemical property of the
analyte. This property must vary in a known and reproducible way with the concentration cA of the analyte.
Ideally, the measurement of the property is directly proportional to the concentration, that is,
cA = kX
where k is a proportionality constant. With a few exceptions, analytical methods require the empirical
determination of k with chemical standards for which cA is known. The process of determining k is thus an
important step in most analyses; this step is called a calibration.

Calibration – is referred to as the empirical determination of the relationship between the quantity
measured in an analysis and the analyte concentration.
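As a sketch of the calibration step described above, the constant k in cA = kX can be estimated by measuring the property X for standards of known concentration and fitting a line through the origin. The standards and readings below are hypothetical, purely for illustration.

```python
# Hypothetical standards: (known concentration cA in ppm, measured property X).
standards = [
    (5.00, 0.110),
    (10.00, 0.224),
    (20.00, 0.441),
]

# Least-squares slope through the origin: k minimizes sum((cA - k*X)^2),
# which gives k = sum(cA*X) / sum(X^2).
num = sum(c * x for c, x in standards)
den = sum(x * x for _, x in standards)
k = num / den

# Apply the calibration to an unknown sample's measured property.
x_unknown = 0.330
c_unknown = k * x_unknown          # cA = kX
print(f"k = {k:.2f}, unknown concentration = {c_unknown:.2f} ppm")
```

In practice, calibrations often include an intercept term (a blank signal) and more standards, but the principle of determining k empirically from chemical standards is the same.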

8. Calculating Results
Computing analyte concentrations from experimental data is ordinarily a simple and straightforward
task, particularly with modern calculators or computers. These computations are based on the raw
experimental data collected in the measurement step, the characteristics of the measurement instruments,
and the stoichiometry of the analytical reaction.

9. Evaluating Results by Estimating Reliability


Analytical results are complete only when their reliability has been estimated. The experimenter
must provide some measure of the uncertainties associated with computed results if the data are to have
any value.
An analytical result without an estimate of reliability is of no value.

ERRORS in CHEMICAL ANALYSES


Measurements invariably involve errors and uncertainties. Only a few of these are due to mistakes
on the part of the experimenter. More commonly, errors are caused by faulty calibrations or
standardizations or by random variations and uncertainties in results.
Frequent calibrations, standardizations, and analyses of known samples can sometimes be used to
lessen all but the random errors and uncertainties. However, measurement errors are an inherent part of
the quantized world in which we live. Because of this, it is impossible to perform a chemical analysis that is
totally free of errors or uncertainties. We can only hope to minimize errors and estimate their size with
acceptable accuracy.

Illustration 1: Effect of errors in analysis, showing results for the quantitative determination of iron. Six
equal portions of an aqueous solution with a “known” concentration of 20.00 ppm of iron(III)
were analyzed in exactly the same way.

The results range from a low of 19.4 ppm to a high of 20.3 ppm of iron. The average, or mean
value, x̄, of the data is 19.78 ppm, which rounds to 19.8 ppm. Every measurement is influenced by many
uncertainties, which combine to produce a scatter of results. Because measurement uncertainties can
never be completely eliminated, measurement data can only give us an estimate of the “true” value.
However, the probable magnitude of the error in a measurement can often be evaluated. It is then
possible to define limits within which the true value of a measured quantity lies with a given level of
probability. Estimating the reliability of experimental data is extremely important whenever we collect
laboratory results because data of unknown quality are worthless. On the other hand, results that might not
seem especially accurate may be of considerable value if the limits of uncertainty are known. Unfortunately,
there is no simple and widely applicable method for determining the reliability of data with absolute
certainty. Often, estimating the quality of experimental results requires as much effort as collecting the data.
Reliability can be assessed in several ways. Experiments designed to reveal the presence of errors
can be performed. Standards of known composition can be analyzed, and the results compared with the
known composition. A few minutes consulting the chemical literature can reveal useful reliability
information. Calibrating equipment usually enhances the quality of data. Finally, statistical tests can be
applied to the data.
In order to improve the reliability and to obtain information about the variability of results, two to five
portions (replicates) of a sample are usually carried through an entire analytical procedure. Individual
results from a set of measurements are seldom the same so, we usually consider the “best” estimate to be
the central value for the set. We justify the extra effort required to analyze replicates in two ways. First, the
central value of a set should be more reliable than any of the individual results.
Usually, the mean or the median is used as the central value for a set of replicate measurements.
Second, an analysis of the variation in the data allows us to estimate the uncertainty associated with the
central value.
The Mean and the Median

Mean (also called arithmetic mean or average), x̄

- is obtained by dividing the sum of replicate measurements by the number of measurements in the
set:

x̄ = (x1 + x2 + … + xN) / N = (Σ xi) / N

where xi represents the individual values of x making up the set of N replicate measurements.

The median is the middle result when replicate data are arranged in increasing or decreasing order. There
are equal numbers of results that are larger and smaller than the median. For an odd number of results, the
median can be found by arranging the results in order and locating the middle result. For an even number,
the average value of the middle pair is used.

Example: Calculate the mean and median for the data shown in Illustration 1.

Data: 19.4, 19.5, 19.6, 19.8, 20.1, 20.3, N = 6

Solution:

mean (x̄) = (19.4 + 19.5 + 19.6 + 19.8 + 20.1 + 20.3) / 6 = 19.78 ≈ 19.8 ppm Fe

Because the set contains an even number of measurements, the median is the average of the central pair:

median = (19.6 + 19.8) / 2 = 19.7 ppm Fe

Arrange the data in ascending or descending order:

 Data where N is even (6): 19.4, 19.5, 19.6, 19.8, 20.1, 20.3

median = (19.6 + 19.8) / 2 = 19.7

 Data where N is odd (9): 19.4, 19.5, 19.6, 19.7, 19.8, 20.1, 20.3, 20.4, 20.5

median = 19.8 (the middle result)

In ideal cases, the mean and median are identical. However, when the number of measurements in the set
is small, the values often differ.
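The worked example above can be reproduced with Python's standard statistics module, using the iron data from Illustration 1.

```python
from statistics import mean, median

# Illustration 1 data, N = 6 (even): median is the average of the central pair.
data_even = [19.4, 19.5, 19.6, 19.8, 20.1, 20.3]
print(round(mean(data_even), 2))     # mean rounds to 19.78 ppm Fe
print(round(median(data_even), 1))   # (19.6 + 19.8) / 2

# Extended set, N = 9 (odd): median is simply the middle result.
data_odd = [19.4, 19.5, 19.6, 19.7, 19.8, 20.1, 20.3, 20.4, 20.5]
print(median(data_odd))
```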

Replicates are samples of about the same size that are carried through an analysis in exactly the same
way.
The mean of two or more measurements is their average value.
The median is the middle value in a set of data that has been arranged in numerical order. The median is
used advantageously when a set of data contains an outlier, a result that differs significantly from
the others in the set.
An outlier can have a significant effect on the mean of the set but has no effect on the median.
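A quick sketch makes this point concrete: replacing one result in the Illustration 1 data with a hypothetical gross error shifts the mean noticeably but leaves the median unchanged.

```python
from statistics import mean, median

good = [19.4, 19.5, 19.6, 19.8, 20.1, 20.3]
with_outlier = [19.4, 19.5, 19.6, 19.8, 20.1, 26.3]  # last result is a gross error

# The mean is pulled toward the outlier...
print(round(mean(good), 1), round(mean(with_outlier), 1))
# ...but the median (average of the central pair) is unaffected.
print(round(median(good), 1), round(median(with_outlier), 1))
```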


Precision and Accuracy

Illustration 2: Illustration of accuracy and precision using the pattern of darts on a dartboard. Note that we
can have very precise results (upper right) with a mean that is not accurate and an accurate
mean (lower left) with data points that are imprecise.
Precision describes the reproducibility of measurements, that is, the closeness of results that have been
obtained in exactly the same way. Generally, the precision of a measurement is readily determined
by repeating the measurement.

The precision of a set of data is described by the standard deviation, the variance, and the
coefficient of variation, which are all functions of the deviation from the mean, di.

Deviation from the Mean: di = |xi − x̄|

Using x̄ = 19.78 ≈ 19.8 ppm for the data of Illustration 1:

xi      |di|
19.4    0.4     (|19.4 − 19.8| = 0.4)
19.5    0.3     (|19.5 − 19.8| = 0.3)
19.6    0.2
19.8    0.0
20.1    0.3
20.3    0.5
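As a sketch, the deviations from the mean and the three precision measures named above (standard deviation, variance, and coefficient of variation) can be computed for the Illustration 1 data as follows.

```python
from statistics import mean, stdev

data = [19.4, 19.5, 19.6, 19.8, 20.1, 20.3]
x_bar = mean(data)

# di = |xi - x_bar|; the handout tabulates these against x_bar rounded to 19.8.
deviations = [abs(x - x_bar) for x in data]

s = stdev(data)                 # sample standard deviation
variance = s ** 2               # variance is the square of the standard deviation
cv_percent = 100 * s / x_bar    # coefficient of variation, as a percentage

print([round(d, 1) for d in deviations])
print(round(s, 2), round(variance, 2), round(cv_percent, 1))
```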

Accuracy is the closeness of the measurement to its true or accepted value; expressed in terms of either
absolute or relative error.

Absolute Error
The absolute error (E) in the measurement of a quantity x is given by the equation:

E = xi − xt

where xi is the measured value of the quantity and
xt is the true or accepted value of the quantity.

From the data in Illustration 1, the absolute error of the result immediately to the left of the true value of 20.0
ppm is −0.2 ppm Fe; the result at 20.1 ppm is in error by +0.1 ppm Fe.

For the mean: E = 19.78 ppm − 20.00 ppm = −0.2 ppm Fe

Note that we keep the sign in stating the error. The negative sign in the first case shows that the
experimental result is smaller than the accepted value, and the positive sign in the second case shows that
the experimental result is larger than the accepted value.

The absolute error of a measurement is the difference between the measured value and the true
value. The sign of the absolute error tells you whether the value in question is high or low. If the
measurement result is low, the sign is negative; if the measurement result is high, the sign is positive.

Relative Error
The relative error (Er) is often a more useful quantity than the absolute error. The percent relative
error is given by the expression:

Er = (xi − xt) / xt × 100%

Relative error is also expressed in parts per thousand (ppt). The relative error for the mean of the
data in Illustration 1 is:

Er = (19.78 − 20.00) / 20.00 × 100% = −1.1%, or −11 ppt
Basic Difference between Precision and Accuracy

Precision
- describes the agreement among results that have been obtained in exactly the same way
- is determined by simply replicating a measurement

Accuracy
- describes the agreement between a result and its true value
- can never be determined exactly, because the true value of a quantity can never be known exactly

TYPES OF ERRORS IN EXPERIMENTAL DATA/CHEMICAL ANALYSES


The precision of a measurement is readily determined by comparing data from carefully replicated
experiments. Unfortunately, an estimate of the accuracy is not as easy to obtain. To determine the
accuracy, we have to know the true value, which is usually what we are seeking in the analysis.
Results can be precise without being accurate and accurate without being precise. The danger of
assuming that precise results are also accurate is illustrated in Illustration 3, which summarizes the results
for the determination of nitrogen in two pure compounds.

Illustration 3. Absolute error in the micro-Kjeldahl determination of nitrogen. Each dot represents the error
associated with a single determination. Each vertical line labeled (xi - xt) is the absolute
average deviation of the set from the true value. (Data from C. O. Willits and C. L. Ogg, J.
Assoc. Offic. Anal. Chem., 1949, 32, 561.)

The dots show the absolute errors of replicate results obtained by four analysts. Note that analyst 1
obtained relatively high precision and high accuracy. Analyst 2 had poor precision but good accuracy. The
results of analyst 3 are surprisingly common. The precision is excellent, but there is significant error in the
numerical average for the data. Both the precision and the accuracy are poor for the results of analyst 4.

Chemical analyses are affected by two types of errors: random (indeterminate) error and
systematic (determinate) error.

TWO TYPES OF ERRORS


1. Systematic (or determinate) error, causes the mean of a data set to differ from the accepted value. In
general, a systematic error in a series of replicate measurements causes all the results to be
too high or too low. It affects the accuracy of results.

2. Random (or indeterminate) error, causes data to be scattered more or less symmetrically around a
mean value. In general, then, the random error in a measurement is reflected by its
precision. It affects measurement precision.

Gross errors differ from indeterminate and determinate errors. They usually occur only occasionally, are
often large, and may cause a result to be either high or low. They are often the product of human
errors.
For example, if part of a precipitate is lost before weighing, analytical results will be low. Touching
a weighing bottle with your fingers after its empty mass is determined will cause a high mass reading for a
solid weighed in the contaminated bottle.
Gross errors lead to outliers, results that appear to differ markedly from all other data in a set of
replicate measurements. Various statistical tests can be performed to determine if a result is an outlier.

An outlier is an occasional result in replicate measurements that differs significantly from the other results.

Systematic Errors
Systematic errors have a definite value, an assignable cause, and are of the same magnitude for
replicate measurements made in the same way. They lead to bias in measurement results.

Bias measures the systematic error associated with an analysis. It affects all of the data in a set in the
same way and bears a sign: negative if it causes the results to be low and positive otherwise.

Sources/Types of Systematic Errors


1.) Instrumental errors are caused by nonideal instrument behavior, by faulty calibrations, or by use
under inappropriate conditions.
2.) Method errors arise from nonideal chemical or physical behavior of analytical systems.
3.) Personal errors result from the carelessness, inattention, or personal limitations of the experimenter.

Instrumental Errors
 All measuring devices are potential sources of systematic errors.
 For example, pipets, burets, and volumetric flasks may hold or deliver volumes slightly different
from those indicated by their graduations. These differences arise from using glassware at a
temperature that differs significantly from the calibration temperature, from distortions in container
walls due to heating while drying, from errors in the original calibration, or from contaminants on
the inner surfaces of the containers.
 Calibration eliminates most systematic errors of this type.
 Electronic instruments are also subject to systematic errors. These can arise from several
sources. For example, errors may emerge as the voltage of a battery-operated power supply
decreases with use.
 Errors can also occur if instruments are not calibrated frequently or if they are calibrated
incorrectly.
 The experimenter may also use an instrument under conditions where errors are large. For
example, a pH meter used in strongly acidic media is prone to an acid error.
 Temperature changes cause variation in many electronic components, which can lead to drifts and
errors.
 Some instruments are susceptible to noise induced from the alternating current (ac) power lines,
and this noise may influence precision and accuracy.
 In many cases, instrumental errors are detectable and correctable.
Method Errors
 The nonideal chemical or physical behavior of the reagents and reactions on which an analysis is
based often introduce systematic method errors. Such sources of nonideality include the slowness
of some reactions, the incompleteness of others, the instability of some species, the lack of
specificity of most reagents, and the possible occurrence of side reactions that interfere with the
measurement process.
 As an example, a common method error in volumetric analysis results from the small excess of
reagent required to cause an indicator to change color and signal the equivalence point. The
accuracy of such an analysis is thus limited by the very phenomenon that makes the titration
possible.
 Errors inherent in a method are often difficult to detect and are thus the most serious of the three
types of systematic error.
Of the three types of systematic errors encountered in a chemical analysis, method errors are
usually the most difficult to identify and correct.

Personal Errors
 Many measurements require personal judgments.
 Examples include estimating the position of a pointer between two scale divisions, the color of a
solution at the end point in a titration, or the level of a liquid with respect to a graduation in a pipet
or buret.
 Judgments of this type are often subject to systematic, unidirectional errors. For example, one
person may read a pointer consistently high, while another may be slightly slow in activating a
timer, and a third may be less sensitive to color changes. An analyst who is insensitive to color
changes tends to use excess reagent in a volumetric analysis. Analytical procedures should always
be adjusted so that any known physical limitations of the analyst cause negligibly small errors.
 Automation of analytical procedures can eliminate many errors of this type.
 A universal source of personal error is prejudice, or bias. Most of us, no matter how honest, have a
natural, subconscious tendency to estimate scale readings in a direction that improves the
precision in a set of results. Alternatively, we may have a preconceived notion of the true value for
the measurement. We then subconsciously cause the results to fall close to this value.
 Number bias is another source of personal error that varies considerably from person to person.
The most frequent number bias encountered in estimating the position of a needle on a scale
involves a preference for the digits 0 and 5. Also common is a prejudice favoring small digits over
large and even numbers over odd. Again, automated and computerized instruments can eliminate
this form of bias.
 Color blindness is a good example of a limitation that could cause a personal error in a volumetric
analysis. A famous color-blind analytical chemist enlisted his wife to come to the laboratory to help
him detect color changes at end points of titrations.
Effect of Systematic Errors on Analytical Results
Systematic errors may be either constant or proportional. The magnitude of a constant error
stays essentially the same as the size of the quantity measured is varied. With constant errors, the
absolute error is constant with sample size, but the relative error varies when the sample size is
changed. Proportional errors increase or decrease according to the size of the sample taken for
analysis. With proportional errors, the absolute error varies with sample size, but the relative error stays
constant when the sample size is changed.

Constant Errors
 The effect of a constant error becomes more serious as the size of the quantity measured
decreases.
 The effect of solubility losses on the results of a gravimetric analysis, illustrates this behavior.
 The excess of reagent needed to bring about a color change during a titration is another example
of constant error. This volume, usually small, remains the same regardless of the total volume of
reagent required for the titration.
 The relative error from this source becomes more serious as the total volume decreases.
 One way of reducing the effect of constant error is to increase the sample size until the error
is acceptable.
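The behavior described in the bullets above can be sketched numerically: a fixed titration excess (the 0.05 mL figure below is hypothetical) contributes a constant absolute error, so the relative error grows as the total volume shrinks.

```python
# Hypothetical constant error: excess titrant needed for the indicator change.
constant_error_mL = 0.05

volumes_mL = [50.0, 25.0, 10.0, 5.0]
rel_errors_pct = [100 * constant_error_mL / v for v in volumes_mL]

# The absolute error is the same for every titration, but the relative
# error increases tenfold as the total volume drops from 50 mL to 5 mL.
for v, e in zip(volumes_mL, rel_errors_pct):
    print(f"{v:5.1f} mL -> {e:.1f}% relative error")
```

This is why increasing the sample size (and hence the measured quantity) reduces the effect of a constant error.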

Proportional Errors
 A common cause of proportional errors is the presence of interfering contaminants in the sample.
 For example, a widely used method for the determination of copper is based on the reaction of copper(II) ion with
potassium iodide to give iodine. The quantity of iodine is then measured and is proportional to the
amount of copper. Iron(III), if present, also liberates iodine from potassium iodide. Unless steps are
taken to prevent this interference, high results are observed for the percentage of copper because
the iodine produced will be a measure of the copper(II) and iron(III) in the sample. The size of this
error is fixed by the fraction of iron contamination, which is independent of the size of sample
taken. If the sample size is doubled, for example, the amount of iodine liberated by both the copper
and the iron contaminant is also doubled. Thus, the magnitude of the reported percentage of
copper is independent of sample size.
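The copper/iron example can be sketched as follows. All numbers are hypothetical, and the iron contaminant is treated, for simplicity, as liberating iodine equivalently to copper; the point is only that the bias scales with sample size, so the reported percentage is biased by a fixed amount.

```python
def reported_percent_cu(sample_mass_g, true_cu_fraction=0.10, fe_fraction=0.005):
    """Percent copper reported when iodine from iron(III) is counted as copper.

    Hypothetical illustration: the measured iodine reflects both the copper
    and the interfering iron, so the apparent copper exceeds the true value.
    """
    apparent_cu_g = sample_mass_g * (true_cu_fraction + fe_fraction)
    return 100 * apparent_cu_g / sample_mass_g

# Doubling the sample doubles both the copper and the iron contributions,
# so the reported percentage (and its bias) is unchanged.
print(reported_percent_cu(1.0))
print(reported_percent_cu(2.0))
```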

Detection of Systematic Instrument and Personal Errors


 Some systematic instrument errors can be found and corrected by calibration.
 Periodic calibration of equipment is always desirable because the response of most instruments
changes with time as a result of component aging, corrosion, or mistreatment.
 Many systematic instrument errors involve interferences where a species present in the sample
affects the response of the analyte and simple calibration does not compensate for these effects.
 Most personal errors can be minimized by careful, disciplined laboratory work.
 It is a good habit to check instrument readings, notebook entries, and calculations systematically.
 Errors due to limitations of the experimenter can usually be avoided by carefully choosing the
analytical method or using an automated procedure.
Detection of Systematic Method Errors
Bias in an analytical method is particularly difficult to detect. One or more of the following steps can be
taken to recognize and adjust for a systematic error in an analytical method.

 1. Analysis of Standard Samples


The best way to estimate the bias of an analytical method is by analyzing standard reference
materials (SRMs), materials that contain one or more analytes at known concentration levels. Standard
reference materials are obtained in several ways.
- Standard materials can sometimes be prepared by synthesis.
- SRMs can be purchased from a number of governmental and industrial sources.

Standard reference materials (SRMs)


- are substances sold by the National Institute of Standards and Technology (NIST) and certified
to contain specified concentrations of one or more analyte.

 2. Independent Analysis
If standard samples are not available, a second independent and reliable analytical method can be
used in parallel with the method being evaluated. The independent method should differ as much as
possible from the one under study. This practice minimizes the possibility that some common factor in the
sample has the same effect on both methods. Again, a statistical test must be used to determine whether
any difference is a result of random errors in the two methods or due to bias in the method under study.

 3. Blank Determinations
A blank contains the reagents and solvents used in a determination, but no analyte. Often, many of the
sample constituents are added to simulate the analyte environment, which is called the sample matrix. In a blank
determination, all steps of the analysis are performed on the blank material. The results are then applied as a
correction to the sample measurements. Blank determinations reveal errors due to interfering contaminants from the
reagents and vessels employed in the analysis. Blanks are also used to correct titration data for the volume of
reagent needed to cause an indicator to change color.
A blank solution contains the solvent and all of the reagents in an analysis. Whenever feasible,
blanks may also contain added constituents to simulate the sample matrix.
The term matrix refers to the collection of all the constituents in the sample.
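Applying a blank correction amounts to subtracting the signal of the blank, carried through the same procedure, from each sample measurement. The readings below are hypothetical.

```python
# Signal from a blank carried through all steps of the analysis.
blank_signal = 0.012

# Raw signals from three replicate sample determinations.
sample_signals = [0.431, 0.428, 0.433]

# The blank correction removes the contribution of reagents and vessels.
corrected = [s - blank_signal for s in sample_signals]
print([round(c, 3) for c in corrected])
```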

 4. Variation in Sample Size


As the size of a measurement increases, the effect of a constant error decreases. Thus, constant
errors can often be detected by varying the sample size.
