Chem Biol Lesson 4-6
Analytical Chemistry
Unit 01: A review on the basic concepts of measurement
SI Units
Kilogram (kg) – A measurement of mass of a substance.
Meter (m) – A measurement of length, perimeter or distance. The meter was redefined by
international agreement in 1983 as the length of the path traveled by light in a vacuum in
1/299,792,458 of a second.
Moles (mol) – A unit of measurement for amount of substance. A mole of a substance or a mole
of particles is defined as containing exactly 6.02214076×10²³ particles, which may be atoms,
molecules, ions, or electrons. In short, 1 mol contains 6.02214076×10²³ of the specified particles.
Second (s) – The base unit of time. This was historically defined as 1⁄86400 of a day from the
division of the day first into 24 hours, then to 60 minutes and finally to 60 seconds each.
Kelvin (K) – A measure of temperature on an absolute scale, meaning that 0 kelvin corresponds
to absolute zero, the temperature of zero heat content.
Ampere (A) – A measure of electric current; it corresponds to one coulomb of charge flowing
through a conducting medium per second.
Candela (cd) – the base unit of luminous intensity of radiation such as light; that is, luminous
power per unit solid angle emitted by a point light source in a fixed direction.
Conversions and the Factor Label Method
Quantities in science have both magnitude (the number) and unit (the dimension). The factor
label method, sometimes called dimensional analysis, is a problem-solving technique that uses
conversion factors to transform units from one system to another.
A conversion factor can be multiplied by a magnitude because it is equivalent to 1: it is the ratio
of two quantities that are equal, such as 2.54 cm = 1 inch, so (2.54 cm)/(1 inch) = 1.
Sample Problem 1: Given that 1 ft = 12 in and 1 in = 2.54 cm, convert 0.02515 ft³ to cm³.
Sample Problem 2: Convert 3.25 lb/ft³ into g/cm³, knowing that 2.2 lb = 1 kg.
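One possible worked sketch of both conversions using the factor-label method, written here as a short Python snippet (the conversion factors 1 ft = 12 in, 1 in = 2.54 cm, and 2.2 lb = 1 kg are taken from the problems above; the printed values are rounded for reporting):

```python
# Sample Problem 1: convert 0.02515 ft^3 to cm^3
ft3 = 0.02515
cm3 = ft3 * (12 ** 3) * (2.54 ** 3)   # (12 in/ft)^3 * (2.54 cm/in)^3
print(f"{cm3:.1f} cm^3")              # ~712.2 cm^3

# Sample Problem 2: convert 3.25 lb/ft^3 to g/cm^3
lb_per_ft3 = 3.25
g_per_cm3 = lb_per_ft3 * (1000 / 2.2) / (12 ** 3 * 2.54 ** 3)  # lb -> g, then ft^3 -> cm^3
print(f"{g_per_cm3:.4f} g/cm^3")      # ~0.0522 g/cm^3
```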
Difference of Mass from Weight
4. Parts per Million (ppm) – A concentration unit used for very dilute solutions. A solution that
contains 1 g of solute in 1,000,000 mL of solution corresponds to 1 ppm, a concentration far too
small to express conveniently as a percentage.
Sample Problem 5:
Calculate the molar concentration of ethanol in an aqueous solution that contains 2.30 g of
C2H5OH (46.07 g/mol) in 3.50 L of solution.
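One way to set up this calculation (the molar mass of ethanol is taken from the problem statement):

```python
# Molar concentration = moles of solute / liters of solution
mass_g = 2.30          # g of C2H5OH
molar_mass = 46.07     # g/mol, given in the problem
volume_L = 3.50        # L of solution

moles = mass_g / molar_mass
molarity = moles / volume_L
print(f"{molarity:.4f} M")   # ~0.0143 M
```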
Sample Problem 6:
58 cm³ of ethyl alcohol was dissolved in 400 cm³ of water to form 454 cm³ of a solution of
ethyl alcohol. Calculate the percentage by volume of ethyl alcohol in water.
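A quick sketch of the percent-by-volume calculation, using the volumes given in the problem:

```python
# % (v/v) = volume of solute / total volume of solution * 100
v_ethanol = 58.0     # cm^3 of ethyl alcohol
v_solution = 454.0   # cm^3 of resulting solution (not 58 + 400, since the volumes are not additive)

percent_v = v_ethanol / v_solution * 100
print(f"{percent_v:.1f} % v/v")   # ~12.8 % v/v
```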
Sample Problem 7:
A 5 gallon volume of benzene is accidentally added to an aquifer whose volume is 3.79×10⁸ L.
If homogeneous mixing is assumed, what is the ppm concentration with respect to benzene?
(Density of benzene = 0.874 g/mL; 1 US gal = 3.78541 L)
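A sketch of one way to work this out, treating ppm for this dilute aqueous system as milligrams of benzene per liter of aquifer water (1 US gal = 3.78541 L, as given):

```python
# Mass of benzene spilled
gal = 5.0
volume_mL = gal * 3.78541 * 1000      # gal -> L -> mL
mass_mg = volume_mL * 0.874 * 1000    # mL * (g/mL) -> g, then g -> mg

# ppm approximated as mg of solute per L of dilute aqueous solution
aquifer_L = 3.79e8
ppm = mass_mg / aquifer_L
print(f"{ppm:.4f} ppm")               # ~0.0436 ppm
```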
Sample Problem 8:
Calculate the p-value for each ion ( [H+], [Na+], [Cl-] ) in a solution that is 2.00 × 10⁻³ M in
NaCl and 5.4 × 10⁻⁴ M in HCl.
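A sketch of the p-value (p-function) calculations; note that Cl⁻ comes from both NaCl and HCl:

```python
from math import log10

c_NaCl = 2.00e-3   # M
c_HCl = 5.4e-4     # M

pH = -log10(c_HCl)              # H+ comes only from HCl (a strong acid)
pNa = -log10(c_NaCl)            # Na+ comes only from NaCl
pCl = -log10(c_NaCl + c_HCl)    # Cl- comes from both solutes

print(f"pH = {pH:.2f}, pNa = {pNa:.3f}, pCl = {pCl:.3f}")
# pH ~ 3.27, pNa ~ 2.699, pCl ~ 2.595
```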
Density and Specific Gravity
Matter is anything that has mass and occupies space – or in other words, volume.
It follows from this definition that all matter has a density, since it has both mass and
volume. The density of a substance is its mass per unit volume and is often
symbolized as ρ, the Greek letter rho. Mathematically, density is defined as mass divided by
volume:
ρ = m / V
Density is an intensive property: increasing the amount of a substance does not
increase its density; rather, it increases its mass. Its reciprocal is occasionally called the
specific volume, a term sometimes used in thermodynamics. The density of a material
varies with temperature and pressure.
Specific gravity is the ratio of the density of a material to that of a standard material,
usually water, which has a density of 1 g/mL.
If water is the reference substance, the specific gravity is therefore numerically equal to the
density expressed in g/mL.
1. Density of Regular Solids – If we wanted to
measure the density of a rectangularly shaped
solid, we would measure the mass in grams and
then measure the dimensions of the solid—the
length, width, and height—in centimeters to find
the volume. The density would be calculated
from these measurements.
2. Density of Irregularly Shaped Solids – The
density of an irregularly shaped solid is usually
determined by measuring the mass and then
measuring the volume of liquid that it displaces.
The volume of liquid in a graduated cylinder is
measured before the object is submerged and
then measured again with the object
submerged.
3. Density of Liquids – To determine the density
of a liquid, the mass in grams of a measured
volume of liquid in milliliters is determined. The
density is calculated from these measurements.
The volume may be measured by a variety of
devices, such as a graduated cylinder, pipet, or
burette. Very precise determinations of volume
are measured with pycnometers
4. Bulk Density – Bulk density, or the apparent density, is the total mass per unit of
total volume. It is not an intrinsic property of a material since it varies with the size
distribution of the particles and their environment. The porosity of the solid and the
material with which the pores, or voids, are filled also affect the bulk density. Only
for a single nonporous particle does the true density equal the bulk density.
Sample Problem 9
A sample of quartz sand weighs 2.65 g and occupies a volume of 2.0 cm3. What is its bulk
density?
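The bulk density follows directly from the definition above (total mass over total volume, voids included):

```python
mass_g = 2.65      # g of quartz sand
volume_cm3 = 2.0   # cm^3 occupied, including the air spaces between grains

bulk_density = mass_g / volume_cm3
print(f"{bulk_density:.1f} g/cm^3")   # ~1.3 g/cm^3
```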
Sample Problem 10
A chunk of metal has a mass of 4.8 g when weighed in air. Using the displacement method,
the initial and final volumes of the water submerged by the metal were 7.0mL to 9.0mL
respectively. What is the density of the metal chunk?
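A sketch of the displacement calculation, using the readings given in the problem:

```python
mass_g = 4.8        # g, mass of the metal in air
v_initial_mL = 7.0  # water level before submerging the metal
v_final_mL = 9.0    # water level after submerging the metal

volume_mL = v_final_mL - v_initial_mL   # displaced volume = volume of the metal
density = mass_g / volume_mL
print(f"{density:.1f} g/mL")            # ~2.4 g/mL
```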
LESSON 04: Calculations Used in
Analytical Chemistry
Unit 03: Preparation of Standard Solutions through the Direct Method
The Standard Solution
Titrant is the solution of known concentration that is added to a solution containing
an analyte of unknown concentration in a titration process. A solution of known
concentration is known as a standard solution. According to Skoog, et al., the
ideal standard solution that is used in titrimetric analysis should have the following
characteristics:
a. Be sufficiently stable so that it is necessary to determine its concentration only
once;
b. React rapidly with the analyte so that the time required between additions of
reagent is minimized;
c. React more or less completely with the analyte so that satisfactory end points
are realized;
d. Undergo selective reaction with the analyte that can be described by a balanced
equation.
Preparation of the Standard Solution through
the Direct Method
A carefully determined mass of a primary standard is dissolved in a
suitable solvent and is carefully diluted to a desired volume in a
volumetric flask. A primary standard is an ultrapure compound that
serves as a reference material for quantitative analysis.
Calculations Involving the Preparation of Standard Solutions
through the Direct Method
Problem 01: Describe the preparation of 500 mL of 0.3000 M AgNO3 from the primary standard grade
solid.
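A possible calculation sketch; the molar mass of AgNO3 (about 169.87 g/mol) is not given in the problem and is assumed from standard atomic masses:

```python
volume_L = 0.500        # 500 mL of solution to prepare
molarity = 0.3000       # mol/L
molar_mass = 169.87     # g/mol AgNO3 (assumed from standard atomic masses)

mass_g = volume_L * molarity * molar_mass
print(f"Dissolve {mass_g:.2f} g of AgNO3 in water and dilute to 500 mL")   # ~25.48 g
```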
Problem 02: Describe the preparation of 50.0 mL of 0.05M Ag+ from the solution prepared
above
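Because this is a dilution of the stock prepared in Problem 01, the relation C1V1 = C2V2 applies; a minimal sketch:

```python
c_stock = 0.3000     # M AgNO3, from Problem 01
c_target = 0.05      # M Ag+
v_target_mL = 50.0   # mL of diluted solution to prepare

v_stock_mL = c_target * v_target_mL / c_stock
print(f"Dilute {v_stock_mL:.2f} mL of the 0.3000 M stock to 50.0 mL")   # ~8.33 mL
```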
Problem 03: Describe the preparation of 200.0 mL of 5.00% (w/v) aqueous CuSO4 from a 0.500 M
CuSO4 solution.
Given: a reagent of 0.500 M CuSO4 solution; volume of solution to be prepared = 200.0 mL;
concentration of the solution to be prepared = 5.00% (w/v)
Required: Description of the method of preparation for 200.0 mL of 5.00% (w/v) CuSO4 solution
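A sketch of the calculation; the molar mass of CuSO4 (about 159.61 g/mol) is assumed from standard atomic masses, since the problem only specifies the 0.500 M stock:

```python
target_volume_mL = 200.0
percent_wv = 5.00          # g of CuSO4 per 100 mL of solution
molar_mass = 159.61        # g/mol CuSO4 (assumed value)
stock_molarity = 0.500     # mol/L

mass_needed_g = percent_wv / 100 * target_volume_mL      # 10.0 g CuSO4
moles_needed = mass_needed_g / molar_mass                 # ~0.0627 mol
stock_volume_mL = moles_needed / stock_molarity * 1000    # ~125 mL
print(f"Dilute {stock_volume_mL:.0f} mL of 0.500 M CuSO4 to 200.0 mL")
```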
Problem 04: Describe the preparation of 2.00 L of 0.355 M NaOH from the concentrated
commercial reagent [50% NaOH (w/w), specific gravity = 1.525]
Given: a concentrated NaOH solution; volume of solution to be prepared = 2.00 L; concentration of
the solution to be prepared = 0.355 M
Required: Description of the method of preparation for 2.00 L of 0.355 M NaOH solution, [50% NaOH
(w/w), specific gravity = 1.525].
Answer:
Dilute 37.2 mL of 50% (w/w) NaOH solution to 2.00 liters.
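The 37.2 mL figure can be reproduced as follows, using the specific gravity and mass fraction of the commercial reagent; the molar mass of NaOH (40.00 g/mol) is assumed from standard atomic masses:

```python
target_volume_L = 2.00
target_molarity = 0.355
molar_mass_NaOH = 40.00    # g/mol (assumed standard value)
sg = 1.525                 # specific gravity of the 50% (w/w) reagent
mass_fraction = 0.50       # 50% NaOH by mass

moles_needed = target_volume_L * target_molarity        # 0.710 mol NaOH
mass_NaOH_g = moles_needed * molar_mass_NaOH            # ~28.4 g of pure NaOH
g_NaOH_per_mL = sg * 1.00 * mass_fraction               # ~0.763 g NaOH per mL of reagent
reagent_mL = mass_NaOH_g / g_NaOH_per_mL
print(f"Dilute {reagent_mL:.1f} mL of the reagent to 2.00 L")   # ~37.2 mL
```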
Problem 05: Describe the preparation of 2.00 L of a solution that is 15.0 ppm K+ starting with solid
K4Fe(CN)6.
Given: solid K4Fe(CN)6; volume of solution to be prepared = 2.00 L; concentration of the solution
to be prepared = 15.0 ppm K+
Required: Description of the method of preparation of 2.00 L of a solution that is 15.0 ppm K+
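One possible setup; the molar masses of K (39.10 g/mol) and K4Fe(CN)6 (about 368.35 g/mol) are assumed from standard atomic masses:

```python
target_volume_L = 2.00
ppm_K = 15.0               # mg of K+ per L (dilute aqueous solution)
molar_mass_K = 39.10       # g/mol (assumed)
molar_mass_salt = 368.35   # g/mol K4Fe(CN)6 (assumed)

mass_K_g = ppm_K * target_volume_L / 1000        # 0.0300 g of K+ needed
moles_K = mass_K_g / molar_mass_K
moles_salt = moles_K / 4                          # 4 mol K+ per mol K4Fe(CN)6
mass_salt_g = moles_salt * molar_mass_salt
print(f"Dissolve {mass_salt_g*1000:.1f} mg of K4Fe(CN)6 and dilute to 2.00 L")  # ~70.7 mg
```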
LESSON 05: Sampling,
Standardization and Calibration
Unit 01: Sampling and Sample Preparation
Kind of analysis based on sample size
1. Macro analysis – this kind of analysis is for samples with masses greater than 0.1 g.
2. Semi-micro analysis – this method is used in handling samples with masses that range
from 0.01 to 0.1 g.
3. Micro analysis – analysis of samples with masses of 10⁻⁴ to 10⁻² g.
4. Ultra-micro analysis – for very minute samples with masses less than 10⁻⁴ g.
The constituent concentration that is found in the sample can also dictate the kind of analysis to
be done. Below are the types of constituents:
1. Major constituent – these are constituents that are present in the sample at 1 to 100% by mass
2. Minor constituent – constituents that are present in 0.01 to 1% of the sample
3. Trace constituent – these are constituents present in the sample at concentrations between
100 ppm and 1 ppb
4. Ultratrace constituent – these are constituents present in amounts smaller than 1 ppb.
Sampling
We call the process of obtaining a representative fraction of a sample from its source
sampling. In sample collection, the individual portions chosen for analysis are called sample
increments, and the collection of these sample increments is typically called the gross
sample by chemists.
Statistically, sampling aims to:
1. Obtain a mean analyte concentration that is an unbiased estimate of the population mean.
2. Obtain a variance in the measured analyte concentration that is an unbiased estimate of the
population variance so that valid confidence limits can be found for the mean, and various
hypothesis tests can be applied
These sampling goals lead to what we call random sampling, a kind of sampling
that uses a randomization procedure to obtain the sample. An example is numbering the
samples from 1 to 100 in no particular order and then randomly choosing 5 numbers between
1 and 100 to select the samples for analysis.
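A minimal sketch of such a randomization procedure (the population of 100 samples and the selection of 5 are taken from the example above):

```python
import random

labels = list(range(1, 101))          # samples numbered 1 to 100
chosen = random.sample(labels, 5)     # pick 5 labels at random, without replacement
print("Analyze samples:", sorted(chosen))
```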
Representative Samples
The other designations for samples classify them according to their stage in the
laboratory procedure, namely: bulk sample, primary sample, secondary sample,
subsample, laboratory sample, and test sample. These terms are used when a
sample of a bulk system is divided, possibly a number of times, before it is actually
used in an analysis.
For example, a water sample from a well may be collected in a large bottle (bulk
sample or primary sample) from which a smaller sample is acquired by pouring
into a vial to be taken to the laboratory (secondary sample, subsample, or
laboratory sample) and then poured into a beaker (another secondary sample or
subsample) before a portion is finally carefully measured into a flask (test sample)
and diluted to make the sample solution. The problems associated with sampling
are unique to every individual situation. The analyst simply needs to take all
possible variations into account when obtaining the sample, so that the sample
taken to the laboratory truly does represent what it is intended to represent.
Gross sample
Gross sample size is determined by:
(1) uncertainty that can be tolerated between the
composition of the gross sample and that of the whole
(2) degree of heterogeneity of the whole, and
(3) level of particle size at which heterogeneity begins
Sample Problem # 1
A column-packing material for chromatography consists of a mixture of two types of particles.
Assume that the average particle in the batch being sampled is approximately spherical with a
radius of about 0.5 mm. Roughly 20% of the particles appear to be pink in color and are known to
have about 30% by mass of a polymeric stationary phase attached (analyte). The pink particles
have a density of 0.48 g/cm³. The remaining particles have a density of about 0.24 g/cm³ and
contain little or no polymeric stationary phase. What mass of the material should the gross sample
contain if the sampling uncertainty is to be kept below 0.5% relative? (retrieved from: Skoog,2015)
Sampling for Homogeneous Mixtures of Liquids and Gases
Errors
Two types of errors in the analytical laboratory
• “Determinate Errors”
• “Indeterminate errors”
Determinate errors are also called systematic errors. These are errors that are known to have
occurred, or at least were later discovered to have occurred, in the course of the lab work. They may
come from avoidable sources, such as contamination, wrongly calibrated instruments, reagent
impurities, instrumental malfunctions, poor sampling techniques, and errors in calculations. If the
error was a calculation error, the result must be recalculated. If avoidable determinate errors are
known to have occurred in the laboratory work, then the faulty result must be rejected.
Determinate errors may also arise from unavoidable sources. An error that is known to have
occurred but was unavoidable is called a “bias”. Such an error occurs each time a procedure is
executed, so its effect is usually known and a correction factor can be applied.
Indeterminate errors are also known as random errors. These are errors that cannot be specifically
identified and are therefore impossible to avoid. Unlike determinate errors, results affected by
such errors cannot be immediately rejected or corrected, since the errors cannot be specifically
identified. Instead, statistical analysis is applied to decide whether a result is far enough
“off-track” to be rejected. Quality limits for the answers are established for any given method;
if a result falls within these limits, the data are still considered “good”.
Review of Elementary Statistics
1. Mean – In the case in which a given measurement on a sample is repeated a number of times,
the average of all measurements is called the mean. It is calculated by adding together the
numerical values of all measurements and dividing this sum by the number of measurements. In
this text, we give the mean the symbol m. The true mean, or the mean of an infinite number of
measurements (the entire population of measurements), is given the symbol μ, the Greek letter
mu.
2. Deviation – How much each measurement differs from the mean is an important number and is
called the deviation. A deviation is associated with each measurement, and if a given deviation
is large compared to others in a series of identical measurements, the proverbial red flag is
raised. Such a measurement is called an outlier. Mathematically, the deviation is calculated as
follows:
d = |m − e|
in which d is the deviation, m is the mean, and e represents the individual experimental
measurement. The bars refer to absolute value, which means the value of d is calculated without
regard to sign.
3. Standard deviation – The most common measure of the dispersion of data around the mean
is the standard deviation:
s = √[ Σ(eᵢ − m)² / (n − 1) ]
The term n is the number of measurements, and n – 1 is referred to as the number of degrees of
freedom. The term s represents the standard deviation. The significance of s is that the smaller
it is numerically, the more precise the data (the more the measurements are “bunched” around
the mean). For an infinite number of measurements (where the mean is μ), the standard
deviation is symbolized as σ (Greek letter sigma) and is known as the population standard
deviation. An infinite number of measurements is approximated by 30 or more measurements.
4. Relative Standard deviation – One final deviation parameter is the relative standard
deviation (RSD). It is calculated by dividing the standard deviation (s) by the mean (m) and
then multiplying by 100:
RSD = (s / m) × 100
Relative standard deviation relates the standard deviation to the value of the mean and
represents a practical and popular expression of data quality.
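A small sketch tying these four statistics together for a hypothetical set of replicate measurements (the data values below are made up purely for illustration):

```python
import statistics

# Hypothetical replicate measurements of the same quantity
measurements = [10.12, 10.15, 10.09, 10.14, 10.11]

mean = statistics.mean(measurements)
deviations = [abs(x - mean) for x in measurements]   # deviation of each measurement from the mean
s = statistics.stdev(measurements)                   # sample standard deviation (n - 1 degrees of freedom)
rsd = s / mean * 100                                 # relative standard deviation, in percent

print(f"mean = {mean:.3f}, s = {s:.4f}, RSD = {rsd:.2f} %")
# mean ~10.122, s ~0.024, RSD ~0.24 %
```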
The Normal Distribution
For an infinite data set, a plot of frequency of
occurrence vs. the measurement value yields
a smooth bell-shaped curve. It is referred to
as bell-shaped because there is equal drop-
off on both sides of a peak value, resulting in
a shape that resembles a bell. The peak value
corresponds to μ, the population mean. This
curve is called the normal distribution curve
because it represents a normal distribution of
values for any infinitely repeated
measurement.
Precision and Accuracy
Precision refers to the repeatability of a measurement. If you repeat a given measurement
over and over and these measurements deviate only slightly from one another, within the
limits of the number of significant figures obtainable, then we say that the data are precise,
or that the results exhibit a high degree of precision. The mean of such data may or may not
represent the real value of that parameter. In other words, it may not be accurate.
Accuracy refers to the correctness of a measurement, or how close it comes to the correct
value of a parameter. For example, if an analyst has an object that he or she knows weighs
exactly 1.0000 g, the accuracy of a laboratory balance (weight measuring device) can be
determined. The object can be weighed on the balance to see if the balance will read 1.0000
g.
If it is established that a measuring
device provides a value for a known
sample that is in agreement with the
known value to within established
limits of precision, that device is said
to be calibrated. Thus, calibration
refers to a procedure that checks the
device to confirm that it provides the
known value. The calibration curve
ensures that the read-outs follow a
simple linear relationship within the
small range chosen for the measurement.
Statistical Control
Using Control Charts
A given device, procedure, process, or method is
usually said to be in statistical control if numerical
values derived from it on a regular basis (such as
daily) are consistently within 2 standard deviations
from the established mean, or the most desirable
value. As we learned previously, such numerical
values occur statistically 95.5% of the time. Thus if,
say, two or more consecutive values differ from the
established value by more than 2 standard
deviations, a problem is indicated because this
should happen only 4.5% of the time, or once in
roughly every 20 events, and is not expected two or
more times consecutively. The device, procedure,
process, or method would be considered out of
statistical control, indicating that an evaluation is in
order.
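A minimal sketch of this 2-standard-deviation control rule, using made-up daily readings and control limits purely for illustration:

```python
# Hypothetical established mean and standard deviation for a control chart
mean, s = 50.0, 0.5
readings = [50.2, 49.7, 50.1, 51.2, 51.4, 50.3]   # made-up daily values

out_of_limits = [abs(x - mean) > 2 * s for x in readings]

# Flag the method if two or more consecutive readings fall outside mean +/- 2s
for i in range(1, len(readings)):
    if out_of_limits[i] and out_of_limits[i - 1]:
        print(f"Readings {i} and {i + 1} exceed the 2s limits: the method may be out of statistical control")
```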