Physical Tests and Determinations: á591ñ Zinc Determination
The need for a quantitative determination of zinc in the Pharmacopeial insulin preparations reflects the fact that the element
is an essential component of zinc-insulin crystals. In common with lead, zinc may be determined either by the dithizone meth-
od or by atomic absorption.
Dithizone Method
Select all reagents for this test to have as low a content of heavy metals as practicable. If necessary, distill water and other
solvents into hard or borosilicate glass apparatus. Rinse thoroughly all glassware with warm dilute nitric acid (1 in 2) followed
by water. Avoid using on the separator any lubricants that dissolve in chloroform.
Special Solutions and Solvents—
ALKALINE AMMONIUM CITRATE SOLUTION—Dissolve 50 g of dibasic ammonium citrate in water to make 100 mL. Add 100 mL of
ammonium hydroxide. Remove any heavy metals that may be present by extracting the solution with 20-mL portions of Dithi-
zone Extraction Solution (see Lead á251ñ) until the dithizone solution retains a clear green color, then extract any dithizone re-
maining in the citrate solution by shaking with chloroform.
CHLOROFORM—Distill chloroform in hard or borosilicate glass apparatus, receiving the distillate in sufficient dehydrated alcohol
to make the final concentration 1 mL of alcohol for each 100 mL of distillate.
DITHIZONE SOLUTION—Use Standard Dithizone Solution (see Lead á251ñ), prepared with the distilled Chloroform.
STANDARD ZINC SOLUTION—Dissolve 625 mg of zinc oxide, accurately weighed, and previously gently ignited to constant
weight, in 10 mL of nitric acid, and add water to make 500.0 mL. This solution contains 1.0 mg of zinc per mL.
DILUTED STANDARD ZINC SOLUTION—Dilute 1 mL of Standard Zinc Solution, accurately measured, with 2 drops of nitric acid and
sufficient water to make 100.0 mL. This solution contains 10 µg of zinc per mL. Use this solution within 2 weeks.
TRICHLOROACETIC ACID SOLUTION—Dissolve 100 g of trichloroacetic acid in water to make 1000 mL.
Procedure—Transfer 1 to 5 mL of the preparation to be tested, accurately measured, to a centrifuge tube graduated at 40
mL. If necessary, add 0.25 N hydrochloric acid, dropwise, to obtain a clear solution. Add 5 mL of Trichloroacetic Acid Solution
and sufficient water to make 40.0 mL. Mix, and centrifuge.
Transfer to a hard-glass separator an accurately measured volume of the supernatant believed to contain from 5 to 20 µg of
zinc, and add water to make about 20 mL. Add 1.5 mL of Alkaline Ammonium Citrate Solution and 35 mL of Dithizone Solution.
Shake vigorously 100 times. Allow the chloroform phase to separate. Insert a cotton plug in the stem of the separator to re-
move any water emulsified with the chloroform. Collect the chloroform extract (discarding the first portion that comes
through) in a test tube, and determine the absorbance at 530 nm, with a suitable spectrophotometer.
Calculate the amount of zinc present by reference to a standard absorbance-concentration curve obtained by using 0.5 mL,
1.0 mL, 1.5 mL, and, if the zinc content of the sample extracted exceeds 15 µg, 2.0 mL of the Diluted Standard Zinc Solution,
corrected as indicated by a blank determination run concomitantly, using all of the reagents but no added zinc.
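The standard-curve calculation above can be sketched as follows. This is an illustrative outline only, not compendial text: a straight line is fitted to the blank-corrected absorbances of the zinc standards, and the blank-corrected sample reading is interpolated. All numeric readings are invented for the example.

```python
# Sketch of the á591ñ dithizone calculation: fit a linear
# absorbance-concentration curve to the zinc standards (here 5, 10,
# and 15 µg of zinc), each corrected by the concomitant reagent
# blank, then interpolate the sample reading at 530 nm.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def zinc_ug(sample_abs, blank_abs, std_ug, std_abs):
    """Zinc (µg) in the aliquot, from blank-corrected standards."""
    corrected = [a - blank_abs for a in std_abs]
    slope, intercept = fit_line(std_ug, corrected)
    return ((sample_abs - blank_abs) - intercept) / slope

# Illustrative readings (not from the chapter):
amount = zinc_ug(
    sample_abs=0.340, blank_abs=0.020,
    std_ug=[5.0, 10.0, 15.0],
    std_abs=[0.120, 0.220, 0.320],
)
print(round(amount, 1))  # 16.0 (µg of zinc in the aliquot extracted)
```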
ALCOHOL DETERMINATION
Method I
Method I is to be used for the determination of alcohol, unless otherwise specified in the individual monograph. It is suitable
for examining most fluidextracts and tinctures, provided the capacity of the distilling flask is sufficient (commonly two to four
times the volume of the liquid to be heated) and the rate of distillation is such that clear distillates are produced. Cloudy distil-
lates may be clarified by agitation with talc, or with calcium carbonate, and filtered, after which the temperature of the filtrate
is adjusted and the alcohol content determined from the specific gravity. During all manipulations, take precautions to mini-
mize the loss of alcohol by evaporation.
Treat liquids that froth to a troublesome extent during distillation by rendering them strongly acidic with phosphoric, sulfu-
ric, or tannic acid, or treat with a slight excess of calcium chloride solution or with a small amount of paraffin or silicone oil
before starting the distillation.
Prevent bumping during distillation by adding porous chips of insoluble material such as silicon carbide, or beads.
For Liquids Presumed to Contain 30% of Alcohol or Less—By means of a pipet, transfer to a suitable distilling apparatus
not less than 25 mL of the liquid in which the alcohol is to be determined, and note the temperature at which the volume was
measured. Add an equal volume of water, distill, and collect a volume of distillate about 2 mL less than the volume taken of
the original test liquid, adjust to the temperature at which the original test liquid was measured, add sufficient water to meas-
ure exactly the original volume of the test liquid, and mix. The distillate is clear or not more than slightly cloudy, and does not
contain more than traces of volatile substances other than alcohol and water. Determine the specific gravity of the liquid at
25°, as directed under Specific Gravity á841ñ, using this result to ascertain the percentage, by volume, of C2H5OH contained in
the liquid examined by reference to the Alcoholometric Table in the section Reference Tables.
For Liquids Presumed to Contain More Than 30% of Alcohol—Proceed as directed in the foregoing paragraph, except
to do the following: dilute the specimen with about twice its volume of water, collect a volume of distillate about 2 mL less
than twice the volume of the original test liquid, bring to the temperature at which the original liquid was measured, add
sufficient water to measure exactly twice the original volume of the test liquid, mix, and determine its specific gravity. The
proportion of C2H5OH, by volume, in this distillate, as ascertained from its specific gravity, equals one-half that in the liquid
examined.
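The back-calculation in the two distillation cases reduces to a single multiplication, sketched below. The function name and values are illustrative, not part of the chapter.

```python
# Sketch of the distillation arithmetic: for liquids presumed to
# contain 30% of alcohol or less, the distillate is brought back to
# the original specimen volume (factor 1); above 30%, to twice the
# original volume (factor 2), so the percentage found in the
# distillate is multiplied back by that factor.

def percent_alcohol(distillate_pct, dilution_factor):
    """distillate_pct: % (v/v) C2H5OH read from the Alcoholometric
    Table for the distillate's specific gravity; dilution_factor:
    ratio of final distillate volume to original specimen volume."""
    return distillate_pct * dilution_factor

# A specimen distilled to twice its original volume whose distillate
# assays at 22.5% (v/v) contains twice that:
print(percent_alcohol(22.5, 2))  # 45.0
```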
Special Treatment—
VOLATILE ACIDS AND BASES—Render preparations containing volatile bases slightly acidic with diluted sulfuric acid before dis-
tilling. If volatile acids are present, render the preparation slightly alkaline with sodium hydroxide TS.
GLYCERIN—To liquids that contain glycerin add sufficient water so that the residue, after distillation, contains not less than
50% of water.
IODINE—Treat all solutions containing free iodine with powdered zinc before the distillation, or decolorize with just sufficient
sodium thiosulfate solution (1 in 10), followed by a few drops of sodium hydroxide TS.
OTHER VOLATILE SUBSTANCES—Spirits, elixirs, tinctures, and similar preparations that contain appreciable proportions of volatile
materials other than alcohol and water, such as volatile oils, chloroform, ether, camphor, etc., require special treatment, as
follows:
For Liquids Presumed to Contain 50% of Alcohol or Less—Mix 25 mL of the specimen under examination, accurately meas-
ured, with about an equal volume of water in a separator. Saturate this mixture with sodium chloride, then add 25 mL of sol-
vent hexane, and shake the mixture to extract the interfering volatile ingredients. Draw off the separated, lower layer into a
second separator, and repeat the extraction twice with two further 25-mL portions of solvent hexane. Extract the combined
solvent hexane solutions with three 10-mL portions of a saturated solution of sodium chloride. Combine the saline solutions,
and distill in the usual manner, collecting a volume of distillate having a simple ratio to the volume of the original specimen.
For Liquids Presumed to Contain More Than 50% of Alcohol—Adjust the specimen under examination to a concentration of
approximately 25% of alcohol by diluting it with water, then proceed as directed in For Liquids Presumed to Contain 50% of
Alcohol or Less, beginning with “Saturate this mixture with sodium chloride.”
In preparing Collodion or Flexible Collodion for distillation, use water in place of the saturated solution of sodium chloride di-
rected above.
If volatile oils are present in small proportions only, and a cloudy distillate is obtained, the solvent hexane treatment not
having been employed, the distillate may be clarified and rendered suitable for the specific gravity determination by shaking it
with about one-fifth its volume of solvent hexane, or by filtering it through a thin layer of talc.
Method II
Use Method IIa when Method II is specified in the individual monograph. For a discussion of the principles upon which it is
based, see Gas Chromatography under Chromatography á621ñ.
USP Reference Standards—USP Alcohol Determination—Acetonitrile RS. USP Alcohol Determination—Alcohol RS.
Method IIa
Apparatus—Under typical conditions, use a gas chromatograph equipped with a flame-ionization detector and a 4-mm ×
1.8-m glass column packed with 100- to 120-mesh chromatographic column packing support S3, using nitrogen or helium as
the carrier. Prior to use, condition the column overnight at 235° with a slow flow of carrier gas. The column temperature is
maintained at 120°, and the injection port and detector temperatures are maintained at 210°. Adjust the carrier flow and tem-
perature so that acetonitrile, the internal standard, elutes in 5 to 10 minutes.
Solutions—
Test Stock Preparation—Dilute the specimen under examination stepwise with water to obtain a solution containing approx-
imately 2% (v/v) of alcohol.
Test Preparation—Pipet 5 mL each of the Test Stock Preparation and the USP Alcohol Determination—Acetonitrile RS
[NOTE—Alternatively, a 2% aqueous solution of acetonitrile of suitable quality may be used as the internal standard solution]
into a 50-mL volumetric flask, dilute with water to volume, and mix.
Standard Preparation—Pipet 5 mL each of the USP Alcohol Determination—Alcohol RS and the USP Alcohol Determina-
tion—Acetonitrile RS [NOTE—Alternatively, a 2% aqueous solution of acetonitrile of suitable quality may be used as the internal
standard solution] into a 50-mL volumetric flask, dilute with water to volume, and mix.
Procedure—Inject about 5 µL each of the Test Preparation and the Standard Preparation, in duplicate, into the gas chromatograph, record the chromatograms, and determine the peak response ratios. Calculate the percentage of alcohol (v/v) in the
specimen under test according to the formula:
CD(RU/RS)
in which C is the labeled concentration of USP Alcohol Determination—Alcohol RS; D is the dilution factor (the ratio of the
volume of the Test Stock Preparation to the volume of the specimen taken); and RU and RS are the peak response ratios ob-
tained from the Test Preparation and the Standard Preparation, respectively.
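The formula CD(RU/RS) can be written out directly. The values below are illustrative placeholders, not compendial data.

```python
# The Method IIa/IIb calculation, C * D * (R_U / R_S), where C is the
# labeled concentration of USP Alcohol Determination—Alcohol RS, D is
# the dilution factor, and R_U, R_S are the alcohol/internal-standard
# peak response ratios for the Test and Standard Preparations.

def alcohol_percent(c_standard, dilution_factor, r_u, r_s):
    return c_standard * dilution_factor * (r_u / r_s)

# Illustrative: a specimen diluted 1:25 to reach ~2% alcohol (D = 25):
print(round(alcohol_percent(
    c_standard=2.0, dilution_factor=25, r_u=0.98, r_s=1.00), 1))  # 49.0
```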
System Suitability Test—In a suitable chromatogram, the resolution factor, R, is not less than 2; the tailing factor of the
alcohol peak is not greater than 2.0; and six replicate injections of the Standard Preparation show a relative standard deviation
of not more than 2.0% in the ratio of the peak of alcohol to the peak of the internal standard.
Method IIb
Apparatus—The gas chromatograph is equipped with a split injection port with a split ratio of 5:1, a flame-ionization detector, and a 0.53-mm × 30-m capillary column coated with a 3.0-µm film of phase G43. Helium is used as the carrier gas at a
linear velocity of 34.0 cm per second. The chromatograph is programmed to maintain the column temperature at 50° for 5
minutes, then to increase the temperature at a rate of 10° per minute to 200°, and maintain at this temperature for 4 minutes.
The injection port temperature is maintained at 210° and the detector temperature at 280°.
Solutions—
Test Stock Preparation—Dilute the specimen under examination stepwise with water to obtain a solution containing approx-
imately 2% (v/v) of alcohol.
Test Preparation—Pipet 5 mL each of the Test Stock Preparation and the USP Alcohol Determination—Acetonitrile RS
[NOTE—Alternatively, a 2% aqueous solution of acetonitrile of suitable quality may be used as the internal standard solution]
into a 25-mL volumetric flask, dilute with water to volume, and mix.
Standard Preparation—Pipet 5 mL each of the USP Alcohol Determination—Alcohol RS and the USP Alcohol Determina-
tion—Acetonitrile RS [NOTE—Alternatively, a 2% aqueous solution of acetonitrile of suitable quality may be used as the internal
standard solution] into a 25-mL volumetric flask, dilute with water to volume, and mix.
Procedure—Inject about 0.2 to 0.5 µL each of the Test Preparation and the Standard Preparation, in duplicate, into the gas
chromatograph, record the chromatograms, and determine the peak response ratios. Calculate the percentage of alcohol (v/v)
in the specimen under test according to the formula:
CD(RU/RS)
in which C is the labeled concentration of USP Alcohol Determination—Alcohol RS; D is the dilution factor (the ratio of the
volume of the Test Stock Preparation to the volume of the specimen taken); and RU and RS are the peak response ratios ob-
tained from the Test Preparation and the Standard Preparation, respectively.
System Suitability Test—In a suitable chromatogram, the resolution factor, R, between alcohol and the internal standard is
not less than 4; the tailing factor of the alcohol peak is not greater than 2.0; and six replicate injections of the Standard Prepa-
ration show a relative standard deviation of not more than 4.0% in the ratio of the peak of alcohol to the peak of the internal
standard.
á616ñ BULK DENSITY AND TAPPED DENSITY OF POWDERS
BULK DENSITY
This general chapter has been harmonized with the corresponding texts of the European Pharmacopoeia and/or the Japanese
Pharmacopoeia. ♦The portion that is not harmonized is marked with symbols (♦♦) to specify this fact.♦
The bulk density of a powder is the ratio of the mass of an untapped powder sample and its volume including the contribu-
tion of the interparticulate void volume. Hence, the bulk density depends on both the density of powder particles and the
spatial arrangement of particles in the powder bed. The bulk density is expressed in grams per mL (g/mL) although the inter-
national unit is kilograms per cubic meter (1 g/mL = 1000 kg/m3) because the measurements are made using cylinders. It may
also be expressed in grams per cubic centimeter (g/cm3). The bulking properties of a powder are dependent upon the prepa-
ration, treatment, and storage of the sample, i.e., how it was handled. The particles can be packed to have a range of bulk
densities; however, the slightest disturbance of the powder bed may result in a changed bulk density. Thus, the bulk density of
a powder is often very difficult to measure with good reproducibility and, in reporting the results, it is essential to specify how
the determination was made. The bulk density of a powder is determined by measuring the volume of a known weight of
powder sample, which may have been passed through a sieve, in a graduated cylinder (Method I), or by measuring the mass
of a known volume of powder that has been passed through a volumeter into a cup (Method II) or a measuring vessel (Method
III).
Method I and Method III are favored.
Method I—Measurement in a Graduated Cylinder
Procedure—Pass a quantity of material sufficient to complete the test through a sieve with apertures greater than or equal
to 1.0 mm, if necessary, to break up agglomerates that may have formed during storage; this must be done gently to avoid
changing the nature of the material. Into a dry graduated 250-mL cylinder (readable to 2 mL) introduce, without compacting,
approximately 100 g of test sample, M, weighed with 0.1% accuracy. Carefully level the powder without compacting, if necessary, and read the unsettled apparent volume (V0) to the nearest graduated unit. Calculate the bulk density in g/mL by the formula M/V0. Generally, replicate determinations are desirable for the determination of this property. If the powder density is
too low or too high, such that the test sample has an untapped apparent volume of either more than 250 mL or less than 150
mL, it is not possible to use 100 g of powder sample. Therefore, a different amount of powder has to be selected as the test
sample, such that its untapped apparent volume is 150–250 mL (apparent volume greater than or equal to 60% of the total
volume of the cylinder); the weight of the test sample is specified in the expression of results. For test samples having an appa-
rent volume between 50 mL and 100 mL, a 100-mL cylinder readable to 1 mL can be used; the volume of the cylinder is
specified in the expression of results.
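The Method I calculation and its range check can be sketched as below. The function and its guard are illustrative; the 60% floor comes from the chapter's requirement that the apparent volume be at least 60% of the cylinder volume.

```python
# Method I bulk density, M/V0, with a check that the untapped
# apparent volume falls within 60-100% of the cylinder capacity
# (150-250 mL for the standard 250-mL cylinder); otherwise a
# different sample mass or cylinder must be chosen.

def bulk_density(mass_g, v0_ml, cylinder_ml=250):
    if not (0.6 * cylinder_ml <= v0_ml <= cylinder_ml):
        raise ValueError(
            "untapped volume outside 60-100% of cylinder; adjust sample")
    return mass_g / v0_ml

# Illustrative: 100 g of powder settling to 210 mL untapped:
print(round(bulk_density(100.0, 210.0), 3))  # 0.476 (g/mL)
```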
Method II—Measurement in a Volumeter
Apparatus—The apparatus (Figure 1) consists of a top funnel fitted with a 1.0-mm sieve.1 The funnel is mounted over a
baffle box containing four glass baffle plates over which the powder slides and bounces as it passes. At the bottom of the baf-
fle box is a funnel that collects the powder and allows it to pour into a cup of specified capacity mounted directly below it. The
cup may be cylindrical (25.00 ± 0.05 mL volume with an inside diameter of 30.00 ± 2.00 mm) or cubical (16.39 ± 0.2 mL
volume with inside dimensions of 25.400 ± 0.076 mm).
Figure 1.
1 The apparatus (the Scott Volumeter) conforms to the dimensions in ASTM B329-90.
Official text. Reprinted from First Supplement to USP38-NF33.
Procedure—Allow an excess of powder to flow through the apparatus into the sample receiving cup until it overflows, us-
ing a minimum of 25 cm3 of powder with the square cup and 35 cm3 of powder with the cylindrical cup. Carefully scrape
excess powder from the top of the cup by smoothly moving the edge of the blade of a spatula perpendicular to and in contact
with the top surface of the cup, taking care to keep the spatula perpendicular to prevent packing or removal of powder from
the cup. Remove any material from the sides of the cup, and determine the weight, M, of the powder to the nearest 0.1%.
Calculate the bulk density, in g/mL, by the formula:
(M)/(V0)
in which V0 is the volume, in mL, of the cup. Record the average of three determinations using three different powder samples.
Method III—Measurement in a Vessel
Apparatus—The apparatus consists of a 100-mL cylindrical vessel of stainless steel with dimensions as specified in Figure 2.
Figure 2.
Procedure—Pass a quantity of powder sufficient to complete the test through a 1.0-mm sieve, if necessary, to break up
agglomerates that may have formed during storage, and allow the obtained sample to flow freely into the measuring vessel
until it overflows. Carefully scrape the excess powder from the top of the vessel as described for Method II. Determine the
weight (M0) of the powder to the nearest 0.1% by subtraction of the previously determined mass of the empty measuring
vessel. Calculate the bulk density (g/mL) by the formula M0/100, and record the average of three determinations using three
different powder samples.
TAPPED DENSITY
The tapped density is an increased bulk density attained after mechanically tapping a container containing the powder sam-
ple. Tapped density is obtained by mechanically tapping a graduated measuring cylinder or vessel containing a powder sam-
ple. After observing the initial powder volume or weight, the measuring cylinder or vessel is mechanically tapped, and volume
or weight readings are taken until little further volume or weight change is observed. The mechanical tapping is achieved by
raising the cylinder or vessel and allowing it to drop under its own weight a specified distance by one of the three methods as
described below. Devices that rotate the cylinder or vessel during tapping may be preferred to minimize any possible separa-
tion of the mass during tapping down.
Method I
Figure 3.
Procedure—Proceed as described above for the determination of the bulk volume (V0). Secure the cylinder in the holder.
Carry out 10, 500, and 1250 taps on the same powder sample and read the corresponding volumes V10, V500, and V1250 to the
nearest graduated unit. If the difference between V500 and V1250 is less than or equal to 2 mL, V1250 is the tapped volume. If the
difference between V500 and V1250 exceeds 2 mL, repeat in increments such as 1250 taps, until the difference between succeed-
ing measurements is less than or equal to 2 mL. Fewer taps may be appropriate for some powders, when validated. Calculate
the tapped density (g/mL) using the formula M/VF, in which VF is the final tapped volume. Generally, replicate determinations
are desirable for the determination of this property. Specify the drop height with the results. If it is not possible to use a 100-g
test sample, use a reduced amount and a suitable 100-mL graduated cylinder (readable to 1 mL) weighing 130 ± 16 g and
mounted on a holder weighing 240 ± 12 g. The modified test conditions are specified in the expression of the results.
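The tap-until-converged sequence in Method I can be outlined as follows. The `read_volume` callable stands in for the analyst's cylinder reading after a given total number of taps; the settling values are invented for the example.

```python
# Sketch of the Method I tapping sequence: after reading V500 and
# V1250, keep adding increments of 1250 taps until successive
# readings agree within 2 mL; the last reading is the tapped
# volume V_F.

def tapped_volume(read_volume, increment=1250, tolerance_ml=2.0):
    v_prev = read_volume(500)
    taps = 1250
    v = read_volume(taps)
    while abs(v_prev - v) > tolerance_ml:
        v_prev = v
        taps += increment
        v = read_volume(taps)
    return v

# Illustrative settling curve for a 100-g sample (mL at total taps):
readings = {500: 88.0, 1250: 84.0, 2500: 83.0}
v_f = tapped_volume(readings.__getitem__)
print(round(100.0 / v_f, 3))  # 1.205 (tapped density M/V_F, g/mL)
```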
Method II
Apparatus and Procedure—Proceed as directed under Method I except that the mechanical tester provides a fixed drop of
3 ± 0.2 mm at a nominal rate of 250 taps per min.
Method III
Apparatus and Procedure—Proceed as directed in Method III—Measurement in a Vessel for measuring bulk density using
the measuring vessel equipped with the cap shown in Figure 2. The measuring vessel with the cap is lifted 50–60 times per min
by the use of a suitable tapped density tester. Carry out 200 taps, remove the cap, and carefully scrape excess powder from
the top of the measuring vessel as described in Method III—Measurement in a Vessel for measuring the bulk density. Repeat the
procedure using 400 taps. If the difference between the two masses obtained after 200 and 400 taps exceeds 2%, carry out a
test using 200 additional taps until the difference between succeeding measurements is less than 2%. Calculate the tapped
density (g/mL) using the formula MF/100, where MF is the mass of powder in the measuring vessel. Record the average of three
determinations using three different powder samples. The test conditions including tapping height are specified in the expres-
sion of the results.
Because the interparticulate interactions influencing the bulking properties of a powder are also the interactions that inter-
fere with powder flow, a comparison of the bulk and tapped densities can give a measure of the relative importance of these
interactions in a given powder. Such a comparison is often used as an index of the ability of the powder to flow, for example
the Compressibility Index or the Hausner Ratio as described below.
The Compressibility Index and Hausner Ratio are measures of the propensity of a powder to be compressed as described
above. As such, they are measures of the powder’s ability to settle, and they permit an assessment of the relative importance
of interparticulate interactions. In a free-flowing powder, such interactions are less significant, and the bulk and tapped densi-
ties will be closer in value. For poorer flowing materials, there are frequently greater interparticle interactions, and a greater
difference between the bulk and tapped densities will be observed. These differences are reflected in the Compressibility Index
and the Hausner Ratio.
Compressibility Index—Calculate by the formula:
100(V0 − VF)/V0
Depending on the material, the compressibility index can be determined using V10 instead of V0. [NOTE—If V10 is used, it will
be clearly stated in the results.]
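Both flowability indices follow directly from the bulk and tapped volumes. The Hausner Ratio formula is not restated in the text above; the conventional definition V0/VF is assumed here.

```python
# The Compressibility Index, 100(V0 - VF)/V0, and the Hausner Ratio
# (conventionally V0/VF), computed from the same untapped and final
# tapped volumes. A free-flowing powder gives a low index and a
# ratio near 1; poorer-flowing material gives larger values.

def compressibility_index(v0, vf):
    return 100.0 * (v0 - vf) / v0

def hausner_ratio(v0, vf):
    return v0 / vf

# Illustrative volumes:
v0, vf = 100.0, 83.0
print(round(compressibility_index(v0, vf), 1))  # 17.0
print(round(hausner_ratio(v0, vf), 2))          # 1.2
```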
á621ñ CHROMATOGRAPHY
INTRODUCTION
Chromatographic separation techniques are multistage separation methods in which the components of a sample are dis-
tributed between two phases, of which one is stationary and the other mobile. The stationary phase may be a solid or a liquid
supported on a solid or a gel. The stationary phase may be packed in a column, spread as a layer, distributed as a film, or
applied by other techniques. The mobile phase may be a gas, a liquid, or a supercritical fluid. The separation may be based on
adsorption, mass distribution (partition), or ion exchange; or it may be based on differences among the physicochemical prop-
erties of the molecules, such as size, mass, and volume. This chapter contains general procedures, definitions, and calculations
of common parameters and describes general requirements for system suitability. The types of chromatography useful in quali-
tative and quantitative analysis employed in USP procedures are column, gas, paper, thin-layer (including high-performance
thin-layer chromatography), and pressurized liquid chromatography (commonly called high-pressure or high-performance liq-
uid chromatography).
GENERAL PROCEDURES
This section describes the basic procedures used when a chromatographic method is described in a monograph. The follow-
ing procedures are followed unless otherwise indicated in the individual monograph.
Paper Chromatography
Stationary Phase: The stationary phase is a sheet of paper of suitable texture and thickness. Development may be as-
cending, in which the solvent is carried up the paper by capillary forces, or descending, in which the solvent flow is also assis-
ted by gravitational force. The orientation of paper grain with respect to solvent flow is to be kept constant in a series of chro-
matograms. (The machine direction is usually designated by the manufacturer.)
Apparatus: The essential equipment for paper chromatography consists of a vapor-tight chamber with inlets for addition
of solvent and a rack of corrosion-resistant material about 5 cm shorter than the inside height of the chamber. The rack serves
as a support for solvent troughs and for antisiphon rods that, in turn, hold up the chromatographic sheets. The bottom of the
chamber is covered with the prescribed solvent system or mobile phase. Saturation of the chamber with solvent vapor is facili-
tated by lining the inside walls with paper wetted with the prescribed solvent system.
Spotting: The substance or substances analyzed are dissolved in a suitable solvent. Convenient volumes delivered from
suitable micropipets of the resulting solution, normally containing 1–20 µg of the compound, are placed in 6- to 10-mm spots
NLT 3 cm apart.
Descending Paper Chromatography Procedure
1. A spotted chromatographic sheet is suspended in the apparatus, using the antisiphon rod to hold the upper end of the
sheet in the solvent trough. [NOTE—Ensure that the portion of the sheet hanging below the rods is freely suspended in
the chamber without touching the rack, the chamber walls, or the fluid in the chamber.]
2. The chamber is sealed to allow equilibration (saturation) of the chamber and the paper with the solvent vapor. Any excess
pressure is released as necessary.
3. After equilibration of the chamber, the prepared mobile phase is introduced into the trough through the inlet.
4. The inlet is closed, and the mobile solvent phase is allowed to travel the desired distance down the paper.
5. The sheet is removed from the chamber.
6. The location of the solvent front is quickly marked, and the sheet is dried.
7. The chromatogram is observed and measured directly or after suitable development to reveal the location of the spots of
the isolated drug or drugs.
Ascending Paper Chromatography Procedure
1. The mobile phase is added to the bottom of the chamber.
2. The chamber is sealed to allow equilibration (saturation) of the chamber and the paper with the solvent vapor. Any excess
pressure is released as necessary.
3. The lower edge of the stationary phase is dipped into the mobile phase to permit the mobile phase to rise on the chroma-
tographic sheet by capillary action.
4. When the solvent front has reached the desired height, the chamber is opened, the sheet is removed, the location of the
solvent front is quickly marked, and the sheet is dried.
5. The chromatogram is observed and measured directly or after suitable development to reveal the location of the spots of
the isolated drug or drugs.
Thin-Layer Chromatography
Stationary Phase: The stationary phase is a relatively thin, uniform layer of dry, finely powdered material applied to a
glass, plastic, or metal sheet or plate (typically called the plate). The stationary phase of TLC plates has an average particle size of 10–15 µm, and that of high-performance TLC (HPTLC) plates has an average particle size of 5 µm. Commercial plates with a
preadsorbent zone can be used if they are specified in a monograph. Sample applied to the preadsorbent region develops into
sharp, narrow bands at the preadsorbent–sorbent interface. The separations achieved may be based on adsorption, partition,
or a combination of both effects, depending on the particular type of stationary phase.
Apparatus: A chromatographic chamber made of inert, transparent material and having the following specifications is
used: a flat-bottom or twin trough, a tightly fitted lid, and a size suitable for the plates. The chamber is lined on at least one
wall with filter paper. Sufficient mobile phase or developing solvent is added to the chamber that, after impregnation of the
filter paper, a depth appropriate to the dimensions of the plate used is available. The chromatographic chamber is closed and
allowed to equilibrate. [NOTE—Unless otherwise indicated, the chromatographic separations are performed in a saturated
chamber.]
Detection/Visualization: An ultraviolet (UV) light source suitable for observations under short- (254 nm) and long- (365
nm) wavelength UV light and a variety of other spray reagents used to make spots visible are often used.
Spotting: Solutions are spotted on the surface of the stationary phase (plate) at the prescribed volume in sufficiently small
portions to obtain circular spots of 2–5 mm in diameter (1–2 mm on HPTLC plates) or bands of 10–20 mm × 1–2 mm (5–10
mm × 0.5–1 mm on HPTLC plates) at an appropriate distance from the lower edge and sides of the plate. [NOTE—During
development, the application position must be at least 5 mm (TLC) or 3 mm (HPTLC) above the level of the mobile phase.]
The solutions are applied on a line parallel to the lower edge of the plate with an interval of at least 10 mm (5 mm on HPTLC
plates) between the centers of spots, or 4 mm (2 mm on HPTLC plates) between the edges of bands, then allowed to dry.
Procedure
1. Place the plate in the chamber, ensuring that the spots or bands are above the surface of the mobile phase.
2. Close the chamber.
3. Allow the mobile phase to ascend the plate until the solvent front has traveled three-quarters of the length of the plate, or
the distance prescribed in the monograph.
4. Remove the plate, mark the solvent front with a pencil, and allow to dry.
5. Visualize the chromatograms as prescribed.
6. Determine the chromatographic retardation factor (RF) values for the principal spots or zones.
7. Presumptive identification can be made by observation of spots or zones of identical RF value and about equal magnitude
obtained, respectively, with an unknown and a standard chromatographed on the same plate. A visual comparison of the
size or intensity of the spots or zones may serve for semiquantitative estimation. Quantitative measurements are possible
by means of densitometry (absorbance or fluorescence measurements).
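The RF value in step 6 is a simple ratio, sketched below; both distances are measured from the point of application, and the values shown are illustrative.

```python
# Retardation factor R_F: distance traveled by the center of the
# spot divided by the distance traveled by the solvent front,
# both measured from the application point. R_F is always
# between 0 and 1.

def retardation_factor(spot_distance_mm, front_distance_mm):
    return spot_distance_mm / front_distance_mm

# Illustrative: spot at 32 mm, solvent front at 80 mm:
print(round(retardation_factor(32.0, 80.0), 2))  # 0.4
```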
Column Chromatography
Solid Support: Purified siliceous earth is used for normal-phase separation. Silanized chromatographic siliceous earth is
used for reverse-phase partition chromatography.
Stationary Phase: The solid support is modified by the addition of a stationary phase specified in the individual mono-
graph. If a mixture of liquids is used as the stationary phase, mix the liquids before the introduction of the solid support.
Mobile Phase: The mobile phase is specified in the individual monograph. If the stationary phase is an aqueous solution,
equilibrate with water. If the stationary phase is a polar organic fluid, equilibrate with that fluid.
Apparatus: Unless otherwise specified in the individual monograph, the chromatographic tube is about 22 mm in inside
diameter and 200–300 mm long. Attached to it is a delivery tube, without stopcock, about 4 mm in inside diameter and about
50 mm long.
APPARATUS PREPARATION: Pack a pledget of fine glass wool in the base of the tube. Combine the specified volume of station-
ary phase and the specified amount of solid support to produce a homogeneous, fluffy mixture. Transfer this mixture to the
chromatographic tube, and tamp using gentle pressure to obtain a uniform mass. If the specified amount of solid support is
more than 3 g, transfer the mixture to the column in portions of approximately 2 g, and tamp each portion. If the assay or test
requires a multisegment column with a different stationary phase specified for each segment, tamp after the addition of each
segment, and add each succeeding segment directly to the previous one. Pack a pledget of fine glass wool above the comple-
ted column packing. [NOTE—The mobile phase should flow through a properly packed column as a moderate stream or, if
reverse-phase chromatography is applied, as a slow trickle.]
If a solution of the analyte is incorporated into the stationary phase, complete the quantitative transfer to the chromato-
graphic tube by scrubbing the beaker used for the preparation of the test mixture with a mixture of about 1 g of Solid Support
and several drops of the solvent used to prepare the sample solution before adding the final portion of glass wool.
Procedure
1. Transfer the mobile phase to the column space above the column packing, and allow it to flow through the column un-
der the influence of gravity.
2. Rinse the tip of the chromatographic column with about 1 mL of mobile phase before each change in composition of
mobile phase and after completion of the elution.
3. If the analyte is introduced into the column as a solution in the mobile phase, allow it to pass completely into the column
packing, then add mobile phase in several small portions, allowing each to drain completely, before adding the bulk of
the mobile phase.
4. Where the procedure indicates the use of multiple chromatographic columns mounted in series and the addition of mo-
bile phase in divided portions is specified, allow each portion to drain completely through each column, and rinse the tip
of each with mobile phase before the addition of each succeeding portion.
Gas Chromatography
Liquid Stationary Phase: This type of phase is available in packed or capillary columns.
Packed Column GC: The liquid stationary phase is deposited on a finely divided, inert solid support, such as diatoma-
ceous earth, porous polymer, or graphitized carbon, which is packed into a column that is typically 2–4 mm in internal diame-
ter and 1–3 m in length.
Capillary Column GC: In capillary columns, which contain no packed solid support, the liquid stationary phase is depos-
ited on the inner surface of the column and may be chemically bonded to it.
Solid Stationary Phase: This type of phase is available only in packed columns. In these columns the solid phase is an
active adsorbent, such as alumina, silica, or carbon, packed into a column. Polyaromatic porous resins, which are sometimes
used in packed columns, are not coated with a liquid phase. [NOTE—Packed and capillary columns must be conditioned before
use until the baseline and other characteristics are stable. The column or packing material supplier provides instructions for the
recommended conditioning procedure.]
Apparatus: A gas chromatograph consists of a carrier gas source, injection port, column, detector, and recording device.
The injection port, column, and detector are temperature controlled and may be varied as part of the analysis. The typical
carrier gas is helium, nitrogen, or hydrogen, depending on the column and detector in use. The type of detector used de-
pends on the nature of the compounds analyzed and is specified in the individual monograph. Detector output is recorded as
a function of time, and the instrument response, measured as peak area or peak height, is a function of the amount present.
Temperature Program: The length and quality of a GC separation can be controlled by altering the temperature of the
chromatographic column. When a temperature program is necessary, the individual monograph indicates the conditions in
table format. The table indicates the initial temperature, rate of temperature change (ramp), final temperature, and hold time
at the final temperature.
Procedure
1. Equilibrate the column, injector, and detector with flowing carrier gas until a constant signal is received.
2. Inject a sample through the injector septum, or use an autosampler.
3. Begin the temperature program.
4. Record the chromatogram.
5. Analyze as indicated in the monograph.
Liquid Chromatography
The term liquid chromatography, as used in the compendia, is synonymous with high-pressure liquid chromatography and high-performance liquid chromatography. LC is a separation technique based on a solid stationary phase and a liquid mobile phase.
Stationary Phase: Separations are achieved by partition, adsorption, or ion-exchange processes, depending on the type
of stationary phase used. The most commonly used stationary phases are modified silica or polymeric beads. The beads are
modified by the addition of long-chain hydrocarbons. The specific type of packing needed to complete an analysis is indicated
by the “L” designation in the individual monograph (see also the section Chromatographic Columns, below). The size of the
beads is often described in the monograph as well. Changes in the packing type and size are covered in the System Suitability
section of this chapter.
Chromatographic Column: The term column includes stainless steel, lined stainless steel, and polymeric columns, packed with a stationary phase. The length and inner diameter of the column affect the separation, and therefore typical column dimensions are included in the individual monograph. Changes to column dimensions are discussed in the System Suitability section of this chapter. Compendial monographs do not include the names of appropriate columns; this omission avoids the appearance of endorsement of a vendor's product and accommodates natural changes in the marketplace. See the section Chromatographic Columns for more information.
In LC procedures, a guard column may be used with the following requirements, unless otherwise indicated in the individual monograph: (a) the length of the guard column must be NMT 15% of the length of the analytical column, (b) the inner
diameter must be the same or smaller than that of the analytical column, and (c) the packing material should be the same as
the analytical column (e.g., silica) and contain the same bonded phase (e.g., C18). In any case, all system suitability require-
ments specified in the official procedure must be met with the guard column installed.
Mobile Phase: The mobile phase is a solvent or a mixture of solvents, as defined in the individual monograph.
Apparatus: A liquid chromatograph consists of a reservoir containing the mobile phase, a pump to force the mobile
phase through the system at high pressure, an injector to introduce the sample into the mobile phase, a chromatographic
column, a detector, and a data collection device.
Gradient Elution: The technique of continuously changing the solvent composition during the chromatographic run is
called gradient elution or solvent programming. The gradient elution profile is presented in the individual monograph as a
gradient table, which lists the time and proportional composition of the mobile phase at the stated time.
Procedure
1. Equilibrate the column and detector with mobile phase at the specified flow rate until a constant signal is received.
2. Inject a sample through the injector, or use an autosampler.
3. Begin the gradient program.
4. Record the chromatogram.
5. Analyze as directed in the monograph.
CHROMATOGRAPHIC COLUMNS
A complete list of packings (L), phases (G), and supports (S) used in USP–NF tests and assays is located in USP–NF and PF,
Reagents, Indicators, and Solutions—Chromatographic Columns. This list is intended to be a convenient reference for the chro-
matographer in identifying the pertinent chromatographic column specified in the individual monograph.
DEFINITIONS AND INTERPRETATION OF CHROMATOGRAMS
Chromatogram: A chromatogram is a graphical representation of the detector response, concentration of analyte in the
effluent, or other quantity used as a measure of effluent concentration versus effluent volume or time. In planar chromatogra-
phy, chromatogram may refer to the paper or layer with the separated zones.
Figure 1 represents a typical chromatographic separation of two substances, 1 and 2. tR1 and tR2 are the respective retention
times; h is the height, h/2 is the half-height, and Wh/2 is the width at half-height, for peak 1. W1 and W2 are the respective
widths of peaks 1 and 2 at the baseline. Air peaks are a feature of gas chromatograms and correspond to the solvent front in
LC. The retention time of these air peaks, or unretained components, is designated as tM.
Dwell Volume (D): The dwell volume, also known as gradient delay volume, is the volume between the point at which
the eluents meet and the top of the column.
Hold-Up Time (tM): The hold-up time is the time required for elution of an unretained component (see Figure 1, shown as
an air or unretained solvent peak, with the baseline scale in min).
Hold-Up Volume (VM): The hold-up volume is the volume of mobile phase required for elution of an unretained compo-
nent. It may be calculated from the hold-up time and the flow rate F, in mL/min:
VM = tM × F
Number of Theoretical Plates (N)1: The number of theoretical plates is a measure of column efficiency, calculated by:

N = 16 × (tR/W)²

where tR is the retention time of the substance, and W is the peak width at its base, obtained by extrapolating the relatively straight sides of the peak to the baseline. The value of N depends upon the substance being chromatographed as well as the operating conditions, such as the flow rate and temperature of the mobile phase or carrier gas, the quality of the packing, the uniformity of the packing within the column, and, for capillary columns, the thickness of the stationary phase film and the internal diameter and length of the column.
Where electronic integrators are used, it may be convenient to determine the number of theoretical plates by the equation:

N = 5.54 × (tR/Wh/2)²

where Wh/2 is the peak width at half-height. However, in the event of dispute, only equations based on peak width at baseline are to be used.
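The baseline-width and half-height plate-number calculations give closely agreeing results for a near-Gaussian peak. A minimal sketch using the standard formulas N = 16(tR/W)² and N = 5.54(tR/Wh/2)², with illustrative numbers (not from the chapter):

```python
import math

# Hypothetical Gaussian peak: retention time and widths in minutes.
t_r = 6.0
sigma = 0.10
w_base = 4 * sigma                                # baseline width (tangent method)
w_half = 2 * math.sqrt(2 * math.log(2)) * sigma   # width at half-height

n_base = 16 * (t_r / w_base) ** 2    # baseline-width equation
n_half = 5.54 * (t_r / w_half) ** 2  # half-height equation (electronic integrators)

# For a true Gaussian the two estimates agree to within about 0.1%.
print(round(n_base), round(n_half))
```

In a dispute, only the baseline-width value (here 3600) would be used, per the text above.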
Peak: The peak is the portion of the chromatographic recording of the detector response when a single component is
eluted from the column. If separation is incomplete, two or more components may be eluted as one unresolved peak.
Peak-to-Valley Ratio (p/v): The p/v may be employed as a system suitability criterion in a test for related substances
when baseline separation between two peaks is not achieved. Figure 2 represents a partial separation of two substances, where
Hp is the height above the extrapolated baseline of the minor peak and Hv is the height above the extrapolated baseline at the
lowest point of the curve separating the minor and major peaks:
p/v = Hp/Hv
1 The parameters k, N, r, and RS were developed for isothermal GC separations and isocratic HPLC separations. Because these terms are thermodynamic parameters, they are only valid for separations made at constant temperature, mobile phase composition, and flow rate. However, for separations made with a temperature program or solvent gradient, these parameters may be used simply as comparative means to ensure that adequate chromatographic conditions exist to perform the methods as intended in the monographs.
Official text. Reprinted from First Supplement to USP38-NF33.
DSC Physical Tests / á621ñ Chromatography 217
Relative Retardation (Rret): The relative retardation is the ratio of the distance traveled by the analyte to the distance si-
multaneously traveled by a reference compound (see Figure 3) and is used in planar chromatography.
Rret = b/c
Relative Retention (r)1: The ratio of the adjusted retention time of a component relative to that of another used as a ref-
erence obtained under identical conditions:
r = (tR2 − tM)/(tR1 − tM)
where tR2 is the retention time measured from the point of injection of the compound of interest; tR1 is the retention time
measured from the point of injection of the compound used as reference; and tM is the retention time of a nonretained marker
defined in the procedure, all determined under identical experimental conditions on the same column.
Relative Retention Time (RRT): Also known as unadjusted relative retention. Comparisons in USP are normally made in
terms of unadjusted relative retention, unless otherwise indicated.
RRT = tR2/tR1
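As a minimal sketch with illustrative retention times, the adjusted relative retention r and the unadjusted RRT differ whenever the hold-up time is nonzero:

```python
# Illustrative retention data, in minutes.
t_m = 1.0    # hold-up time of an unretained marker
t_r1 = 5.0   # reference peak
t_r2 = 8.0   # peak of interest

r = (t_r2 - t_m) / (t_r1 - t_m)   # adjusted relative retention
rrt = t_r2 / t_r1                 # unadjusted relative retention time

print(r, rrt)   # 1.75 vs 1.6
```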
Retardation Factor (RF): The retardation factor is the ratio of the distance traveled by the center of the spot to the dis-
tance simultaneously traveled by the mobile phase and is used in planar chromatography. Using the symbols in Figure 3:
RF = b/a
Retention Factor (k)1: The retention factor is also known as the capacity factor (k′). Defined as:

k = (tR − tM)/tM

or, equivalently, in terms of volumes,

k = (VR − VM)/VM
Retention Time (tR): In liquid chromatography and gas chromatography, the retention time, tR, is defined as the time
elapsed between the injection of the sample and the appearance of the maximum peak response of the eluted sample zone. tR
may be used as a parameter for identification. Chromatographic retention times are characteristic of the compounds they rep-
resent but are not unique. Coincidence of retention times of a sample and a reference substance can be used as a partial crite-
rion in construction of an identity profile but may not be sufficient on its own to establish identity. Absolute retention times of
a given compound may vary from one chromatogram to the next.
Retention Volume (VR): The retention volume is the volume of mobile phase required for elution of a component. It may
be calculated from the retention time and the flow rate in mL/min:
VR = tR × F
Resolution (RS): The resolution is the separation of two components in a mixture, calculated by:
RS = 2 × (tR2 − tR1)/(W1 + W2)
where tR2 and tR1 are the retention times of the two components; and W2 and W1 are the corresponding widths at the bases of
the peaks obtained by extrapolating the relatively straight sides of the peaks to the baseline.
Where electronic integrators are used, it may be convenient to determine the resolution, by the equation:
RS = 1.18 × (tR2 − tR1)/(W1,h/2 + W2,h/2)
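A quick numerical check with two hypothetical peaks shows that the baseline and half-height resolution formulas agree for near-Gaussian peaks, since Wh/2 ≈ 0.59W:

```python
# Two hypothetical adjacent peaks; times and widths in minutes.
t_r1, t_r2 = 5.0, 6.0
w1, w2 = 0.40, 0.44                  # baseline widths
w1_h, w2_h = 0.59 * w1, 0.59 * w2    # half-height widths (Gaussian approximation)

rs_base = 2 * (t_r2 - t_r1) / (w1 + w2)          # baseline-width formula
rs_half = 1.18 * (t_r2 - t_r1) / (w1_h + w2_h)   # half-height formula

print(round(rs_base, 2), round(rs_half, 2))
```

The factor 1.18 is exactly 2 × 0.59, which is why the two estimates coincide under the Gaussian approximation.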
Separation Factor (α): The separation factor is the relative retention calculated for two adjacent peaks (by convention, the value of the separation factor is always >1):

α = k2/k1
Symmetry Factor (AS)2: The symmetry factor, also known as the tailing factor, of a peak (see Figure 4) is calculated by:
AS = W0.05/2f
where W0.05 is the width of the peak at 5% height and f is the distance from the peak maximum to the leading edge of the
peak, the distance being measured at a point 5% of the peak height from the baseline.
2 It is also common practice to measure the asymmetry factor at 10% of the peak height, as the ratio of the distance from the vertical line through the peak apex to the peak back, to the distance from the peak front to that line (see Figure 4), i.e., (W0.10 − f0.10)/f0.10. However, for the purposes of USP, only the formula (AS) as presented here is valid.
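The symmetry-factor calculation can be sketched for a hypothetical tailing peak (the measurements below are illustrative):

```python
# Measurements taken at 5% of peak height, in minutes.
w_005 = 0.30   # total peak width at 5% height
f = 0.12       # distance from the leading edge to a vertical through the peak apex

a_s = w_005 / (2 * f)
print(round(a_s, 2))   # 1.25 -> mild tailing (1.0 is perfectly symmetrical)
```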
SYSTEM SUITABILITY
System suitability tests are an integral part of gas and liquid chromatographic methods. These tests are used to verify that
the chromatographic system is adequate for the intended analysis.
The tests are based on the concept that the equipment, electronics, analytical operations, and samples analyzed constitute
an integral system that can be evaluated as such.
Factors that may affect chromatographic behavior include the following:
• Composition, ionic strength, temperature, and apparent pH of the mobile phase
• Flow rate, column dimensions, column temperature, and pressure
• Stationary phase characteristics, including type of chromatographic support (particle-based or monolithic), particle or
macropore size, porosity, and specific surface area
• Reverse-phase and other surface modification of the stationary phases, the extent of chemical modification (as expressed
by end-capping, carbon loading, etc.)
The resolution, RS, is a function of the number of theoretical plates, N (also referred to as efficiency), the separation factor, α, and the capacity factor, k. [NOTE—All terms and symbols are defined in the preceding section Definitions and Interpretation of Chromatograms.] For a given stationary phase and mobile phase, N may be specified to ensure that closely eluting compounds
are resolved from each other, to establish the general resolving power of the system, and to ensure that internal standards are
resolved from the drug. This is a less reliable means to ensure resolution than is direct measurement. Column efficiency is, in
part, a reflection of peak sharpness, which is important for the detection of trace components.
Replicate injections of a standard preparation or other standard solutions are compared to ascertain whether requirements
for precision are met. Unless otherwise specified in the individual monograph, data from five replicate injections of the analyte
are used to calculate the relative standard deviation, %RSD, if the requirement is 2.0% or less; data from six replicate injections
are used if the relative standard deviation requirement is more than 2.0%.
For the Assay in a drug substance monograph, where the value is 100% for the pure substance, and no maximum relative
standard deviation is stated, the maximum permitted %RSD is calculated for a series of injections of the reference solution:
%RSD = (K × B × √n)/t90%,n−1

where K is a constant (0.349), obtained from the expression K = (0.6/√2) × (t90%,5/√6), in which 0.6/√2 represents the required percentage relative standard deviation after six injections for B = 1.0; B is the upper limit given in the definition of the individual monograph minus 100%; n is the number of replicate injections of the reference solution (3 ≤ n ≤ 6); and t90%,n−1 is the Student's t at the 90% probability level (double sided) with n − 1 degrees of freedom.
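The formula reproduces the values in the repeatability table; a sketch with hard-coded two-sided Student's t values (the function name is ours, not from the chapter):

```python
import math

# Two-sided Student's t at the 90% level, for n-1 degrees of freedom (n = 3..6).
T90 = {2: 2.920, 3: 2.353, 4: 2.132, 5: 2.015}
K = 0.349

def max_rsd(b_percent, n):
    """Maximum permitted %RSD for upper assay limit B (%) and n injections."""
    return K * b_percent * math.sqrt(n) / T90[n - 1]

# Reproduces the repeatability table, e.g. B = 2.0 with n = 6 gives about 0.85.
print(round(max_rsd(2.0, 6), 2))
```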
Unless otherwise prescribed, the maximum permitted relative standard deviation does not exceed the appropriate value giv-
en in the table of repeatability requirements. This requirement does not apply to tests for related substances.
Relative Standard Deviation Requirements

          Maximum Permitted RSD (number of individual injections)
B (%)         3        4        5        6
2.0        0.41     0.59     0.73     0.85
2.5        0.52     0.74     0.92     1.06
3.0        0.62     0.89     1.10     1.27
The symmetry factor, AS, a measure of peak symmetry, is unity for perfectly symmetrical peaks; and its value increases as
tailing becomes more pronounced (see Figure 4). In some cases, values less than unity may be observed. As peak symmetry
moves away from values of 1, integration, and hence precision, become less reliable.
The signal-to-noise ratio (S/N) is a useful system suitability parameter. The S/N is calculated as follows:
S/N = 2H/h
where H is the height of the peak measured from the peak apex to a baseline extrapolated over a distance ≥5 times the peak width at its half-height; and h is the difference between the largest and smallest noise values observed over a distance ≥5 times the width at the half-height of the peak and, if possible, situated equally around the peak of interest (see Figure 5).
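As a sketch with illustrative detector readings:

```python
# Illustrative detector readings in arbitrary signal units.
H = 0.050   # peak height above the extrapolated baseline
h = 0.002   # peak-to-peak noise over >=5 half-height widths around the peak

s_n = 2 * H / h
print(round(s_n))   # 50
```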
These system suitability tests are performed by collecting data from replicate injections of standard or other solutions as
specified in the individual monograph.
The specification of definitive parameters in a monograph does not preclude the use of other suitable operating conditions.
Adjustments to the specified chromatographic system may be necessary in order to meet system suitability requirements.
Adjustments to chromatographic systems performed in order to comply with system suitability requirements are not to be
made in order to compensate for column failure or system malfunction. Adjustments are permitted only when suitable stand-
ards (including Reference Standards) are available for all compounds used in the suitability test; and the adjustments or col-
umn change yields a chromatogram that meets all the system suitability requirements specified in the official procedure.
If adjustments of operating conditions are necessary in order to meet system suitability requirements, each of the items in
the following list is the maximum variation that can be considered, unless otherwise directed in the monograph; these
changes may require additional verification data. To verify the suitability of the method under the new conditions, assess the
relevant analytical performance characteristics potentially affected by the change. Multiple adjustments can have a cumulative
effect on the performance of the system and are to be considered carefully before implementation. In some circumstances, it
may be desirable to use an HPLC column with dimensions different from those prescribed in the official procedure (different length, internal diameter, and/or particle size). In any case, changes in the chemical characteristics (“L” designation) of the stationary phase will be considered a modification to the method and will require full validation. Adjustments to the composition of the mobile phase in gradient elution may cause changes in selectivity and are not recommended. If adjustments are
necessary, change in column packing (maintaining the same chemistry), the duration of an initial isocratic hold (when prescri-
bed), and/or dwell volume adjustments are allowed. Additional allowances for gradient adjustment are noted in the text be-
low.
pH of Mobile Phase (HPLC): The pH of the aqueous buffer used in the preparation of the mobile phase can be adjusted
to within ±0.2 units of the value or range specified. Applies to both gradient and isocratic separations.
Concentration of Salts in Buffer (HPLC): The concentration of the salts used in the preparation of the aqueous buffer
employed in the mobile phase can be adjusted to within ±10% if the permitted pH variation (see above) is met. Applies to
both gradient and isocratic separations.
Ratio of Components in Mobile Phase (HPLC): The following adjustment limits apply to minor components of the mo-
bile phase (specified at 50% or less). The amounts of these components can be adjusted by ±30% relative. However, the
change in any component cannot exceed ±10% absolute (i.e., in relation to the total mobile phase). Adjustment can be made
to one minor component in a ternary mixture. Examples of adjustments for binary and ternary mixtures are given below.
Binary Mixtures
SPECIFIED RATIO OF 50:50: 30% of 50 is 15% absolute, but this exceeds the maximum permitted change of ±10% absolute in
either component. Therefore, the mobile phase ratio may be adjusted only within the range of 40:60–60:40.
SPECIFIED RATIO OF 2:98: 30% of 2 is 0.6% absolute. Therefore the maximum allowed adjustment is within the range of 1.4:98.6–2.6:97.4.
Ternary Mixtures
SPECIFIED RATIO OF 60:35:5: For the second component, 30% of 35 is 10.5% absolute, which exceeds the maximum permit-
ted change of ±10% absolute in any component. Therefore the second component may be adjusted only within the range of
25%–45% absolute. For the third component, 30% of 5 is 1.5% absolute. In all cases, a sufficient quantity of the first compo-
nent is used to give a total of 100%. Therefore, mixture ranges of 50:45:5–70:25:5 or 58.5:35:6.5–61.5:35:3.5 would meet
the requirement.
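The two limits (±30% relative, capped at ±10% absolute of the total mobile phase) can be sketched as a small helper; the function name is ours, not from the chapter:

```python
def allowed_range(specified_pct):
    """Permitted range for a minor mobile-phase component specified at <=50%."""
    delta = min(0.30 * specified_pct, 10.0)   # relative allowance, absolute cap
    return specified_pct - delta, specified_pct + delta

# Reproduces the worked examples above.
print(allowed_range(50))   # 50:50 binary  -> 40 to 60
print(allowed_range(2))    # 2:98 binary   -> 1.4 to 2.6
print(allowed_range(35))   # 60:35:5 ternary, second component -> 25 to 45
```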
Wavelength of UV-Visible Detector (HPLC): Deviations from the wavelengths specified in the procedure are not permit-
ted. The procedure specified by the detector manufacturer, or another validated procedure, is used to verify that error in the
detector wavelength is, at most, ±3 nm.
Stationary Phase
COLUMN LENGTH (GC): Can be adjusted by as much as ±70%.
COLUMN LENGTH (HPLC): See Particle Size (HPLC) below.
COLUMN INNER DIAMETER (HPLC): Can be adjusted if the linear velocity is kept constant. See Flow Rate (HPLC) below.
COLUMN INNER DIAMETER (GC): Can be adjusted by as much as ±50%.
FLOW RATE (HPLC): When the column dimensions are changed, the flow rate can be adjusted using:

F2 = F1 × (dc2² × dp1)/(dc1² × dp2)

where F1 and F2 are the flow rates for the original and modified conditions, respectively; dc1 and dc2 are the respective column diameters; and dp1 and dp2 are the particle sizes.
When a change is made from ≥3-µm to <3-µm particles in isocratic separations, an additional increase in linear velocity (by adjusting flow rate) may be justified, provided that the column efficiency does not drop by more than 20%. Similarly, a change from <3-µm to ≥3-µm particles may require additional reduction of linear velocity (flow rate) to avoid reduction in column efficiency by more than 20%. Changes in F, dc, and dp are not allowed for gradient separations.
Additionally, the flow rate can be adjusted by ±50% (isocratic only).
EXAMPLES: Adjustments in column length, internal diameter, particle size, and flow rate can be used in combination to give
equivalent conditions (same N), but with differences in pressure and run time. The table below lists some of the more popular
column configurations to give equivalent efficiency (N), by adjusting these variables.
For example, if a monograph specifies a 150-mm × 4.6-mm; 5-µm column operated at 1.5 mL/min, the same separation may be expected with a 75-mm × 2.1-mm; 2.5-µm column operated at 1.5 mL/min × 0.4 = 0.6 mL/min, along with a pressure increase of about four times and a reduction in run time to about 30% of the original.
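Assuming the usual flow-rate scaling relation for combined diameter and particle-size changes, F2 = F1 × (dc2² × dp1)/(dc1² × dp2), the 0.4 factor in the example can be verified:

```python
F1 = 1.5              # original flow rate, mL/min
dc1, dc2 = 4.6, 2.1   # column inner diameters, mm
dp1, dp2 = 5.0, 2.5   # particle sizes, um

factor = (dc2 ** 2 * dp1) / (dc1 ** 2 * dp2)
F2 = F1 * factor
print(round(factor, 2), round(F2, 2))   # about 0.42 and 0.63, rounded to 0.4 and 0.6 in the text
```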
Injection Volume (HPLC): The injection volume can be adjusted as far as it is consistent with accepted precision, lineari-
ty, and detection limits. Note that excessive injection volume can lead to unacceptable band broadening, causing a reduction
in N and resolution. Applies to both gradient and isocratic separations.
Injection Volume and Split Volume (GC): The injection volume and split volume may be adjusted if detection and re-
peatability are satisfactory.
Column Temperature (HPLC): The column temperature can be adjusted by as much as ±10°. Column thermostating is
recommended to improve control and reproducibility of retention time. Applies to both gradient and isocratic separations.
Oven Temperature (GC): The oven temperature can be adjusted by as much as ±10%.
Oven Temperature Program (GC): Adjustment of temperatures is permitted as stated above. When the specified tem-
perature must be maintained or when the temperature must be changed from one value to another, an adjustment of up to
±20% is permitted.
Unless otherwise directed in the monograph, system suitability parameters are determined from the analyte peak.
Measured values of Rret, RF, or tR for the sample substance do not deviate from the values obtained for the reference compound and mixture by more than the statistically determined reliability estimates from replicate assays of the reference compound. Relative retention times may be provided in monographs for informational purposes only to aid in peak identification. There are no acceptance criteria applied to relative retention times.
Suitability testing is used to ascertain the effectiveness of the final operating system, which should be subjected to this test-
ing. Make injections of the appropriate preparation(s) as required in order to demonstrate adequate System suitability (as de-
scribed in the Chromatographic system section of the method in a monograph) throughout the run.
The preparation can be a standard preparation or a solution containing a known amount of analyte and any additional ma-
terials (e.g., excipients or impurities) useful in controlling the analytical system. Whenever there is a significant change in the
chromatographic system (equipment, mobile phase component, or other components) or in a critical reagent, System suitabili-
ty is to be reestablished. No sample analysis is acceptable unless the suitability of the system has been demonstrated.
QUANTITATION
During quantitation, disregard peaks caused by solvents and reagents or arising from the mobile phase or the sample ma-
trix.
In the linear range, peak areas and peak heights are usually proportional to the quantity of compound eluting. The peak
areas and peak heights are commonly measured by electronic integrators but may be determined by more classical ap-
proaches. Peak areas are generally used but may be less accurate if peak interference occurs. The components measured are
separated from any interfering components. Peak tailing and fronting are minimized, and the measurement of peaks on the tails of other peaks is avoided when possible.
Although comparison of impurity peaks with those in the chromatogram of a standard at a similar concentration is prefer-
red, impurity tests may be based on the measurement of the peak response due to impurities and expressed as a percentage
of the area of the drug peak. The standard may be the drug itself at a level corresponding to, for example, 0.5% impurity,
assuming similar peak responses. When impurities must be determined with greater certainty, use a standard of the impurity
itself or apply a correction factor based on the response of the impurity relative to that of the main component.
External Standard Method: The concentration of the component(s) quantified is determined by comparing the re-
sponse(s) obtained with the sample solution to the response(s) obtained with a standard solution.
Internal Standard Method: Equal amounts of the internal standard are introduced into the sample solution and a stand-
ard solution. The internal standard is chosen so that it does not react with the test material, is stable, is resolved from the com-
ponent(s) quantified (analytes), and does not contain impurities with the same retention time as that of the analytes. The con-
centrations of the analytes are determined by comparing the ratios of their peak areas or peak heights and the internal stand-
ard in the sample solution with the ratios of their peak areas or peak heights and the internal standard in the standard solution.
Normalization Procedure: The percentage content of a component of the test material is calculated by determining the
area of the corresponding peak as a percentage of the total area of all the peaks, excluding those due to solvents or reagents
or arising from the mobile phase or the sample matrix and those at or below the limit at which they can be disregarded.
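A simplified sketch of the normalization calculation, assuming a 0.05% disregard limit relative to the total area (the helper name is ours, not from the chapter):

```python
def area_percent(areas, disregard_limit=0.05):
    """Percent of each peak relative to total area, excluding disregarded peaks.

    `areas` is assumed to already exclude solvent, reagent, and matrix peaks.
    Peaks at or below `disregard_limit` percent of the total are dropped.
    """
    total = sum(areas)
    kept = [a for a in areas if 100 * a / total > disregard_limit]
    kept_total = sum(kept)
    return [round(100 * a / kept_total, 2) for a in kept]

# The 0.3 peak is 0.03% of the total and is therefore disregarded.
print(area_percent([980.0, 15.0, 5.0, 0.3]))
```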
Calibration Procedure: The relationship between the measured or evaluated signal y and the quantity (e.g., concentra-
tion, mass) of substance x is determined, and the calibration function is calculated. The analytical results are calculated from
the measured signal or evaluated signal of the analyte and its position on the calibration curve.
In tests for impurities for both the External Standard Method, when a dilution of the sample solution is used for comparison,
and the Normalization Procedure, any correction factors indicated in the monograph are applied (e.g., when the relative re-
sponse factor is outside the range 0.8–1.2).
When the impurity test prescribes the total of impurities or there is a quantitative determination of an impurity, choice of an
appropriate threshold setting and appropriate conditions for the integration of the peak areas is important. In such tests the
limit at or below which a peak is disregarded is generally 0.05%. Thus, the threshold setting of the data collection system cor-
responds to at least half of this limit. Integrate the peak area of any impurity that is not completely separated from the princi-
pal peak, preferably by valley-to-valley extrapolation (tangential skim).
á631ñ COLOR AND ACHROMICITY
Definition—For the purposes of this chapter, color may be defined as the perception or subjective response by an observer
to the objective stimulus of radiant energy in the visible spectrum extending over the range 400 nm to 700 nm in wavelength.
Perceived color is a function of three variables: spectral properties of the object, both absorptive and reflective; spectral proper-
ties of the source of illumination; and visual characteristics of the observer.
Two objects are said to have a color match for a particular source of illumination when an observer cannot detect a color
difference. Where a pair of objects exhibit a color match for one source of illumination and not another, they constitute a
metameric pair. Color matches of two objects occur for all sources of illumination if the absorption and reflectance spectra of
the two objects are identical.
Achromicity or colorlessness is one extreme of any color scale for transmission of light. It implies the complete absence of
color, and therefore the visible spectrum of the object lacks absorbances. For practical purposes, the observer in this case per-
ceives little if any absorption taking place in the visible spectrum.
Color Attributes—Because the sensation of color has both a subjective and an objective part, color cannot be described
solely in spectrophotometric terms. The common attributes of color therefore cannot be given a one-to-one correspondence
with spectral terminology.
Three attributes are commonly used to identify a color: (1) hue, or the quality by which one color family is distinguished
from another, such as red, yellow, blue, green, and intermediate terms; (2) value, or the quality that distinguishes a light color
from a dark one; and (3) chroma, or the quality that distinguishes a strong color from a weak one, or the extent to which a
color differs from a gray of the same value.
The three attributes of color may be used to define a three-dimensional color space in which any color is located by its coor-
dinates. The color space chosen is a visually uniform one if the geometric distance between two colors in the color space is
directly a measure of the color distance between them. Cylindrical coordinates are often conveniently chosen. Points along the
long axis represent value from dark to light or black to white and have indeterminate hue and no chroma. Focusing on a cross-
section perpendicular to the value axis, hue is determined by the angle about the long axis and chroma is determined by the
distance from the long axis. Red, yellow, green, blue, purple, and intermediate hues are given by different angles. Colors along
a radius of a cross-section have the same hue and become more intense farther out. For example, colorless or achromic
water has indeterminate hue, high value, and no chroma. If a colored solute is added, the water takes on a particular hue. As
more is added, the color becomes darker, more intense, or deeper; i.e., the chroma generally increases and value decreases. If,
however, the solute is a neutral color, i.e., gray, the value decreases, no increase in chroma is observed, and the hue remains
indeterminate.
Laboratory spectroscopic measurements can be converted to measurements of the three color attributes. Spectroscopic re-
sults for three chosen lights or stimuli are weighted by three distribution functions to yield the tristimulus values, X, Y, Z (see
Color—Instrumental Measurement á1061ñ). The distribution functions were determined in color matching experiments with hu-
man subjects.
The tristimulus values are not coordinates in a visually uniform color space; however, several transformations that are close to uniform have been proposed, one of which is given in Color—Instrumental Measurement á1061ñ. The value attribute is often a function of the Y tristimulus value alone. Obtaining uniformity in the chroma-hue subspace has been less satisfactory. In a
practical sense, this means in visual color comparison that if two objects differ significantly in hue, deciding which has a higher
chroma becomes difficult. This points out the importance of matching standard to sample color as closely as possible, especial-
ly for the attributes of hue and chroma.
Color Determination and Standards—The perception of color and color matches is dependent on conditions of viewing
and illumination. Determinations should be made using diffuse, uniform illumination under conditions that reduce shadows
and nonspectral reflectance to a minimum. The surface of powders should be smoothed with gentle pressure so that a planar
surface free from irregularities is presented. Liquids should be compared in matched color-comparison tubes, against a white
background. If results are found to vary with illumination, those obtained in natural or artificial daylight are to be considered
correct. Instead of visual determination, a suitable instrumental method may be used.
Colors of standards should be as close as possible to those of test specimens for quantifying color differences. Standards for
opaque materials are available as sets of color chips that are arranged in a visually uniform space.* Standards identified by a
letter for matching the colors of fluids can be prepared according to the accompanying table. To prepare the matching fluid
required, pipet the prescribed volumes of the colorimetric test solutions [see under Colorimetric Solutions (CS) in the section
Reagents, Indicators, and Solutions] and water into one of the matching containers, and mix the solution in the container. Make
the comparison as directed in the individual monograph, under the viewing conditions previously described. The matching
fluids, or other combinations of the colorimetric solutions, may be used in very low concentrations to measure deviation from
achromicity.
Matching Fluids
Matching    Parts of Cobaltous    Parts of Ferric    Parts of Cupric    Parts of
Fluid       Chloride CS           Chloride CS        Sulfate CS         Water
A           0.1                   0.4                0.1                4.4
B           0.3                   0.9                0.3                3.5
C           0.1                   0.6                0.1                4.2
D           0.3                   0.6                0.4                3.7
E           0.4                   1.2                0.3                3.1
F           0.3                   1.2                0.0                3.5
G           0.5                   1.2                0.2                3.1
H           0.2                   1.5                0.0                3.3
* Collections of color chips, arranged according to hue, value, and chroma in a visually uniform space and suitable for use in color designation of specimens by
visual matching are available from GretagMacbeth LLC, 617 Little Britain Road, New Windsor, NY 12553-6148.
Official text. Reprinted from First Supplement to USP38-NF33.
224 á631ñ Color and Achromicity / Physical Tests DSC
Place the quantity of the substance specified in the individual monograph in a meticulously cleansed, glass-stoppered, 10-
mL glass cylinder approximately 13 mm × 125 mm in size. Using the solvent that is specified in the monograph or on the label
of the product, fill the cylinder almost to the constriction at the neck. Shake gently to effect solution: the solution is not less
clear than an equal volume of the same solvent contained in a similar vessel and examined similarly.
Total organic carbon (TOC) is an indirect measure of organic molecules present in pharmaceutical waters measured as car-
bon. Organic molecules are introduced into the water from the source water, from purification and distribution system materi-
als, from biofilm growing in the system, and from the packaging of sterile and nonsterile waters. TOC can also be used as a
process control attribute to monitor the performance of unit operations comprising the purification and distribution system. A
TOC measurement is not a replacement test for endotoxin or microbiological control. Although there can be a qualitative rela-
tionship between a food source (TOC) and microbiological activity, there is no direct numerical correlation.
A number of acceptable methods exist for analyzing TOC. This chapter does not endorse, limit, or prevent any technologies
from being used, but this chapter provides guidance on how to qualify these analytical technologies for use as well as guid-
ance on how to interpret instrument results for use as a limit test.
Apparatuses commonly used to determine TOC in water for pharmaceutical use share the objective of oxidizing the organic molecules in the water to carbon dioxide. The amount of CO2 produced is then measured and used to calculate the organic carbon concentration in the water.
All technologies must discriminate between the inorganic carbon, which may be present in the water from sources such as
dissolved CO2 and bicarbonate, and the CO2 generated from the oxidation of organic molecules in the sample. The discrimina-
tion may be accomplished either by determining the inorganic carbon and subtracting it from the total carbon (total carbon is
the sum of organic carbon and inorganic carbon), or by purging inorganic carbon from the sample before oxidation. Although
purging may entrain organic molecules, such purgeable organic carbon is present in negligible quantities in water for pharma-
ceutical use.
BULK WATER
The following sections apply to tests for bulk Purified Water, Water for Injection, Water for Hemodialysis, and the condensate of
Pure Steam.
Apparatus Requirements: This test method is performed either as an on-line test or as an off-line laboratory test using a
calibrated instrument. The suitability of the apparatus must be periodically demonstrated as described below. In addition, it
must have a manufacturer’s specified limit of detection of 0.05 mg/L (0.05 ppm) or lower of carbon.
When testing water for quality control purposes, ensure that the instrument and its data are under appropriate control and
that the sampling approaches and locations of both on-line and off-line measurements are representative of the quality of the
water used. The nature of the water production, distribution, and use should be considered when selecting either on-line or
off-line measurement.
USP Reference Standards á11ñ: USP 1,4-Benzoquinone RS. USP Sucrose RS.
Reagent Water: Use water having a TOC level of not more than 0.10 mg/L. [NOTE—A conductivity requirement may be nec-
essary in order to ensure method reliability.]
Container Preparation: Organic contamination of containers results in higher TOC values. Therefore, use labware and con-
tainers that have been scrupulously cleaned of organic residues. Any method that is effective in removing organic matter can
be used (see Cleaning Glass Apparatus á1051ñ). Use Reagent Water for the final rinse.
Standard Solution: Unless otherwise directed in the individual monograph, dissolve in the Reagent Water an accurately
weighed quantity of USP Sucrose RS to obtain a solution having a concentration of 1.19 mg/L of sucrose (0.50 mg/L of car-
bon).
System Suitability Solution: Dissolve in Reagent Water an accurately weighed quantity of USP 1,4-Benzoquinone RS to ob-
tain a solution having a concentration of 0.75 mg/L (0.50 mg/L of carbon).
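The stated carbon equivalents can be checked from the carbon mass fractions of the two reference standards (the molar masses below are approximate values assumed for illustration):

```python
# Approximate carbon mass fractions, assumed for this check:
#   sucrose, C12H22O11:        12 C of 12.011 g/mol in 342.30 g/mol
#   1,4-benzoquinone, C6H4O2:   6 C of 12.011 g/mol in 108.09 g/mol
SUCROSE_CARBON_FRACTION = 12 * 12.011 / 342.30       # ~0.421
BENZOQUINONE_CARBON_FRACTION = 6 * 12.011 / 108.09   # ~0.667

def carbon_mg_per_L(solute_mg_per_L, carbon_mass_fraction):
    """Carbon contributed by a dissolved organic standard, in mg/L."""
    return solute_mg_per_L * carbon_mass_fraction
```

Thus 1.19 mg/L of sucrose and 0.75 mg/L of 1,4-benzoquinone each correspond to about 0.50 mg/L of carbon, matching the Standard Solution and System Suitability Solution above.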
Reagent Water Control: Use a suitable quantity of Reagent Water obtained at the same time as that used in the preparation
of the Standard Solution and the System Suitability Solution.
Water Sample: Obtain an on-line or off-line sample that suitably reflects the quality of water used.
Other Control Solutions: Prepare appropriate reagent blank solutions or other specified solutions needed for establishing
the apparatus baseline or for calibration adjustments following the manufacturer’s instructions, and run the appropriate blanks
to zero the instrument, if necessary.
System Suitability: Test the Reagent Water Control in the apparatus, and record the response, rW. Repeat the test using the
Standard Solution, and record the response, rS. Calculate the corrected Standard Solution response, which is also the limit re-
sponse, by subtracting the Reagent Water Control response from the response of the Standard Solution. The theoretical limit of
0.50 mg/L of carbon is equal to the corrected Standard Solution response, rS − rW. Test the System Suitability Solution in the
apparatus, and record the response, rSS. Calculate the corrected System Suitability Solution response by subtracting the Reagent
Water Control response from the response of the System Suitability Solution, rSS − rW. Calculate the percent response efficiency
for the System Suitability Solution:
% response efficiency = 100[(rSS − rW)/(rS − rW)]
where rSS is the instrument response to the System Suitability Solution; rW is the instrument response to the Reagent Water Con-
trol; and rS is the instrument response to the Standard Solution. The system is suitable if the percent response efficiency is not
less than 85% and not more than 115%.
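The suitability calculation can be sketched directly from the formula above (function names are illustrative):

```python
def response_efficiency_pct(r_ss, r_s, r_w):
    """Percent response efficiency: 100 * (rSS - rW) / (rS - rW)."""
    return 100.0 * (r_ss - r_w) / (r_s - r_w)

def system_suitable(r_ss, r_s, r_w):
    """Suitable when the response efficiency is within 85%-115%."""
    return 85.0 <= response_efficiency_pct(r_ss, r_s, r_w) <= 115.0
```

For instance, responses rW = 0.10, rS = 0.60, and rSS = 0.55 (in arbitrary instrument units) give an efficiency of 90%, which passes.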
Procedure: Perform the test on the Water Sample, and record the response, rU. The Water Sample meets the requirements if
rU is not more than the limit response, rS − rW. This method can be performed using on-line or off-line instrumentation that
meets the Apparatus Requirements.
STERILE WATER
The following sections apply to tests for Sterile Water for Injection, Sterile Purified Water, Sterile Water for Irrigation, and Sterile
Water for Inhalation.
Follow the requirements in Bulk Water, with the following exceptions.
Apparatus Requirements: In addition to the Apparatus Requirements in Bulk Water, the apparatus must have a manufac-
turer’s specified limit of detection of 0.10 mg/L (0.10 ppm) or lower of carbon.
Reagent Water: Use water having a TOC level of not more than 0.50 mg/L. [NOTE—A conductivity requirement may be
necessary in order to ensure method reliability.]
Standard Solution: Unless otherwise directed in the individual monograph, dissolve in the Reagent Water an accurately
weighed quantity of USP Sucrose RS to obtain a solution having a concentration of 19.0 mg/L of sucrose (8.0 mg/L of carbon).
System Suitability Solution: Dissolve in Reagent Water an accurately weighed quantity of USP 1,4-Benzoquinone RS to
obtain a solution having a concentration of 12.0 mg/L (8.0 mg/L of carbon).
Water Sample: Obtain a sample that suitably reflects the quality of water used. Before opening, vigorously agitate the
package to homogenize the water sample. Several packages may be required in order to collect sufficient water for analysis.
System Suitability: Test the Reagent Water Control in the apparatus, and record the response, rW. Repeat the test using the
Standard Solution, and record the response, rS. Calculate the corrected Standard Solution response, which is also the limit re-
sponse, by subtracting the Reagent Water Control response from the response of the Standard Solution. The theoretical limit of
8.0 mg/L of carbon is equal to the corrected Standard Solution response, rS − rW. Test the System Suitability Solution in the appa-
ratus, and record the response, rSS. Calculate the corrected System Suitability Solution response by subtracting the Reagent Wa-
ter Control response from the response of the System Suitability Solution, rSS − rW. Calculate the percent response efficiency for
the System Suitability Solution:
% response efficiency = 100[(rSS − rW)/(rS − rW)]
where rSS is the instrument response to the System Suitability Solution; rW is the instrument response to the Reagent Water Con-
trol; and rS is the instrument response to the Standard Solution. The system is suitable if the percent response efficiency is not
less than 85% and not more than 115%.
Procedure: Perform the test on the Water Sample, and record the response, rU. The Water Sample meets the requirements
if rU is not more than the limit response, rS − rW, determined in the System Suitability requirements in Sterile Water.
INTRODUCTION
Electrical conductivity in water is a measure of the ion-facilitated electron flow through it. Water molecules dissociate into
ions as a function of pH and temperature and result in a very predictable conductivity. Some gases, most notably carbon diox-
ide, readily dissolve in water and interact to form ions, which predictably affect conductivity also. For the purpose of this dis-
cussion, these ions and their resulting conductivity can be considered intrinsic to the water.
Water conductivity is also affected by the presence of extraneous ions. The extraneous ions used in modeling the conductivi-
ty specifications described below are the chloride and ammonia ions. The conductivity of the ubiquitous chloride ion (at the
theoretical endpoint concentration of 0.47 ppm when chloride was a required attribute test in USP 22 and earlier revisions)
and the ammonium ion (at the limit of 0.3 ppm) represents a major portion of the allowed water ionic impurity level. A bal-
ancing quantity of anions (such as chloride, to counter the ammonium ion) and cations (such as sodium, to counter the chlor-
ide ion) is included in this allowed impurity level to maintain electroneutrality. Extraneous ions such as these may have a signif-
icant effect on the water’s chemical purity and suitability for use in pharmaceutical applications.
The procedure in the section Bulk Water is specified for measuring the conductivity of waters such as Purified Water, Water for
Injection, Water for Hemodialysis, and the condensate of Pure Steam. The procedure in the section Sterile Water is specified for
measuring the conductivity of waters such as Sterile Purified Water, Sterile Water for Injection, Sterile Water for Inhalation, and
Sterile Water for Irrigation.
The procedures below shall be performed using instrumentation that has been calibrated, has conductivity sensor cell con-
stants that have been accurately determined, and has a temperature compensation function that has been disabled for Bulk
Water Stage 1 testing. For both online and offline measurements, the suitability of instrumentation for quality control testing is
also dependent on the sampling location(s) in the water system. The selected sampling instrument location(s) must reflect the
quality of the water used.
Water conductivity must be measured accurately with calibrated instrumentation. An electrical conductivity measurement
consists of the determination of the conductance, G (or its inverse, resistance, R), of the fluid between and around the electro-
des. The conductance (1/R) is directly affected by the geometrical properties of the electrodes; i.e., the conductance is inverse-
ly proportional to the distance (d) between the electrodes and proportional to the area (A) of the electrodes. This geometrical
ratio (d/A) is known as the cell constant, Q. Thus the measured conductance is normalized for the cell constant to determine
the conductivity, k, according to the following equation:
conductivity, k (S/cm) = Q (cm–1)/R (Ω)
It is the cell constant and the resistance measurement that must be verified and adjusted, if necessary.
Cell constant: The cell constant must be known within ±2%. The cell constant can be verified directly by using a solution of
known or traceable conductivity, or indirectly by comparing the instrument reading taken with the conductivity sensor in
question to readings from a conductivity sensor of known or traceable cell constant. If necessary, adjust the cell constant following the manufacturer's instrument protocol. The frequency of verification/calibration is a function of the sensor design.
Resistance measurement: Calibration (or verification) of the resistance measurement is accomplished by replacing the conductivity sensor electrodes with precision resistors traceable to NIST or equivalent national authorities in other countries (accurate to ±0.1% of the stated value) to give a predicted instrument conductivity response. The accuracy of the resistance measurement is acceptable if the measured conductivity with the traceable resistor is within ±0.1 µS/cm of the value calculated according to the equation above. For example, if the traceable resistor is 50 kΩ and the cell constant, Q, is 0.10 cm–1, the calculated value is 2.0 × 10–6 S/cm, or 2.0 µS/cm, and the measured value should be 2.0 ± 0.1 µS/cm. The instrument must have a minimum resolution of 0.1 µS/cm on the lowest range.
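The worked example can be reproduced numerically from k = Q/R (sketch only; the factor of 1e6 converts S/cm to µS/cm, and the function names are illustrative):

```python
def conductivity_uS_per_cm(cell_constant_per_cm, resistance_ohm):
    """k = Q / R, reported in microsiemens per centimeter."""
    return 1e6 * cell_constant_per_cm / resistance_ohm

def resistance_reading_acceptable(measured_uS, cell_constant_per_cm,
                                  resistor_ohm, tolerance_uS=0.1):
    """Accept if the reading is within +/-0.1 uS/cm of the calculated value."""
    expected = conductivity_uS_per_cm(cell_constant_per_cm, resistor_ohm)
    return abs(measured_uS - expected) <= tolerance_uS
```

With Q = 0.10 cm–1 and a 50 kΩ traceable resistor, the calculated value is 2.0 µS/cm, so any reading between 1.9 and 2.1 µS/cm is acceptable.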
The target conductivity value(s) should be based on the type of water to be analyzed, and it should be equal to or less than
the water conductivity limit for that type of water. Multiple measuring circuits may be embedded in the meter or the sensor,
and each circuit may require separate verification or calibration before use. The frequency of recalibration is a function of in-
strument system design.
System verification: The cell constant of the user’s sensor can be determined with the user’s resistance measurement sys-
tem, or the cell constant can be determined with an independent resistance measurement system. If the cell constant is deter-
mined with an independent resistance measurement system, it is recommended that the user verify that the sensor has been
properly connected to the resistance measurement system to ensure proper performance. Verification can be made by com-
paring the conductivity (or resistivity) values displayed by the measuring equipment with those of an external calibrated con-
ductivity-measuring device. The two non–temperature-compensated conductivity (or resistivity) values should be equivalent to
or within ±5% of each other, or should have a difference that is acceptable on the basis of product water criticality and/or the
water conductivity ranges in which the measurements are taken. The two conductivity sensors should be positioned close
enough together to measure the same water sample at the same temperature and water quality.
Temperature compensation and temperature measurements: Because temperature has a substantial effect on conductivi-
ty readings of specimens at high and low temperatures, many instruments automatically correct the actual reading to display
the value that theoretically would be observed at the nominal temperature of 25°. This is typically done using a temperature
sensor embedded in the conductivity sensor and a software algorithm embedded in the instrument. This temperature com-
pensation algorithm may not be accurate for the various water types and impurities. For this reason, conductivity values used
in the Stage 1 test for Bulk Water are non–temperature-compensated measurements. Other conductivity tests that are specified
for measurement at 25° can use either temperature-compensated or non–temperature-compensated measurements.
A temperature measurement is required for the Stage 1 test or for the other tests at 25°. It may be made using the tempera-
ture sensor embedded in the conductivity cell sensor. An external temperature sensor positioned near the conductivity sensor
is also acceptable. Accuracy of the temperature measurement must be ±2°.
BULK WATER
The procedure and test limits in this section are intended for Purified Water, Water for Injection, Water for Hemodialysis, the
condensate of Pure Steam, and any other monographs that specify this section.
This is a three-stage test method to accommodate online or offline testing. Online conductivity testing provides real-time
measurements and opportunities for real-time process control, decision, and intervention. Precautions should be taken while
collecting water samples for offline conductivity measurements. The sample may be affected by the sampling method, the
sampling container, and environmental factors such as ambient carbon dioxide concentration and organic vapors. This proce-
dure can be started at Stage 2 if offline testing is preferred.
Procedure
STAGE 1
Stage 1 is intended for online measurement or may be performed offline in a suitable container.
1. Determine the temperature of the water and the conductivity of the water with a non–temperature-compensated con-
ductivity reading.
2. Using Table 1, find the temperature value that is NMT the measured temperature, i.e., the next lower temperature. The
corresponding conductivity value on this table is the limit. [NOTE—Do not interpolate.]
3. If the measured conductivity is NMT the table value determined in step 2, the water meets the requirements of the test
for conductivity. If the conductivity is higher than the table value, proceed with Stage 2.
Table 1. Stage 1—Temperature and Conductivity Requirements
(for non–temperature-compensated conductivity measurements only)
Temperature (°)    Conductivity Requirement (µS/cm)
0                  0.6
5                  0.8
10                 0.9
15                 1.0
20                 1.1
25                 1.3
30                 1.4
35                 1.5
40                 1.7
45                 1.8
50                 1.9
55                 2.1
60                 2.2
65                 2.4
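The Stage 1 decision, with its next-lower-temperature lookup and no interpolation, can be sketched as follows (table values transcribed from Table 1; names are illustrative):

```python
import bisect

# (temperature in degrees C, conductivity limit in uS/cm), from Table 1
STAGE1_TABLE = [(0, 0.6), (5, 0.8), (10, 0.9), (15, 1.0), (20, 1.1),
                (25, 1.3), (30, 1.4), (35, 1.5), (40, 1.7), (45, 1.8),
                (50, 1.9), (55, 2.1), (60, 2.2), (65, 2.4)]
STAGE1_TEMPS = [t for t, _ in STAGE1_TABLE]

def stage1_limit_uS(temperature_c):
    """Limit at the temperature NMT the measured one (no interpolation)."""
    i = bisect.bisect_right(STAGE1_TEMPS, temperature_c) - 1
    if i < 0:
        raise ValueError("temperature below the range of Table 1")
    return STAGE1_TABLE[i][1]

def stage1_passes(measured_uS, temperature_c):
    """Step 3: pass if the measured conductivity is NMT the table value."""
    return measured_uS <= stage1_limit_uS(temperature_c)
```

At 23.4°, the next lower table temperature is 20°, so the limit is 1.1 µS/cm; a 1.0 µS/cm reading passes, and a 1.2 µS/cm reading proceeds to Stage 2.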
STAGE 2
4. Transfer a sufficient amount of water to a suitable container, and stir the test specimen. Adjust the temperature, if necessary, and, while maintaining it at 25 ± 1°, begin vigorously agitating the test specimen while periodically observing the conductivity. When the change in conductivity (due to uptake of atmospheric carbon dioxide) is less than a net of 0.1 µS/cm per 5 min, note the conductivity. [NOTE—Conductivity measurements at this stage may be temperature-compensated to 25° or non–temperature-compensated.]
5. If the conductivity is not greater than 2.1 µS/cm, the water meets the requirements of the test for conductivity. If the conductivity is greater than 2.1 µS/cm, proceed with Stage 3.
STAGE 3
6. Perform this test within approximately 5 min of the conductivity determination in step 5, while maintaining the sample
temperature at 25 ± 1°. Add a saturated potassium chloride solution to the same water sample (0.3 mL per 100 mL of the test
specimen), and determine the pH to the nearest 0.1 pH unit, as directed in pH á791ñ.
7. Referring to Table 2, determine the conductivity limit at the measured pH value. If the measured conductivity in step 4 is
NMT the table value determined in step 6, the water meets the requirements of the test for conductivity. If either the meas-
ured conductivity is greater than this value or the pH is outside the range of 5.0–7.0, the water does not meet the require-
ments of the test for conductivity.
Table 2. Stage 3—pH and Conductivity Requirements
(for atmosphere- and temperature-equilibrated samples only)
pH      Conductivity Requirement (µS/cm)
5.0     4.7
5.1     4.1
5.2     3.6
5.3     3.3
5.4     3.0
5.5     2.8
5.6     2.6
5.7     2.5
5.8     2.4
5.9     2.4
6.0     2.4
6.1     2.4
6.2     2.5
6.3     2.4
6.4     2.3
6.5     2.2
6.6     2.1
6.7     2.6
6.8     3.1
6.9     3.8
7.0     4.6
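Steps 6 and 7 can be sketched as a lookup against Table 2 (limits transcribed from the table; the helper name is illustrative):

```python
# pH -> conductivity limit (uS/cm), from Table 2 (equilibrated samples)
STAGE3_LIMITS = {5.0: 4.7, 5.1: 4.1, 5.2: 3.6, 5.3: 3.3, 5.4: 3.0,
                 5.5: 2.8, 5.6: 2.6, 5.7: 2.5, 5.8: 2.4, 5.9: 2.4,
                 6.0: 2.4, 6.1: 2.4, 6.2: 2.5, 6.3: 2.4, 6.4: 2.3,
                 6.5: 2.2, 6.6: 2.1, 6.7: 2.6, 6.8: 3.1, 6.9: 3.8,
                 7.0: 4.6}

def stage3_passes(stage2_conductivity_uS, ph):
    """Fail if pH is outside 5.0-7.0 or the Stage 2 conductivity
    exceeds the Table 2 limit at the measured pH."""
    key = round(ph, 1)  # pH is determined to the nearest 0.1 unit
    if key not in STAGE3_LIMITS:
        return False
    return stage2_conductivity_uS <= STAGE3_LIMITS[key]
```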
STERILE WATER
The procedure and test limits are intended for Sterile Purified Water, Sterile Water for Injection, Sterile Water for Inhalation, and
Sterile Water for Irrigation, and any other monographs that specify this section. The sterile waters are derived from Purified Wa-
ter or Water for Injection, and therefore have been determined to be compliant with the Bulk Water requirements before being
stored in the container. The specification provided represents the maximum allowable conductivity value, taking into consider-
ation the limitation of the measurement method and reasonable container leaching. Such specification and the sampling vol-
ume choices should be defined and validated on the basis of the intended purpose of the water.
Procedure
Obtain a sample that suitably reflects the quality of water used. Before opening, vigorously agitate the package to homoge-
nize the water sample. Several packages may be required to collect sufficient water for analysis.
Transfer a sufficient amount of water to a suitable container, and stir the test specimen. Adjust the temperature, if necessary,
and, while maintaining it at 25 ± 1°, begin vigorously agitating the test specimen while periodically observing the conductivity.
When the change in conductivity (due to uptake of ambient carbon dioxide) is less than a net of 0.1 µS/cm per 5 min, note the conductivity.
For containers with a nominal volume of 10 mL or less, if the conductivity is NMT 25 µS/cm, the water meets the requirements. For containers with a nominal volume greater than 10 mL, if the conductivity is NMT 5 µS/cm, the water meets the requirements.
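The volume-dependent acceptance criterion reduces to a small helper (illustrative only):

```python
def sterile_water_conductivity_limit_uS(nominal_volume_mL):
    """25 uS/cm for containers of 10 mL or less; 5 uS/cm otherwise."""
    return 25.0 if nominal_volume_mL <= 10 else 5.0
```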
The temperature at which a substance passes from the liquid to the solid state upon cooling is a useful index to purity if heat
is liberated when the solidification takes place, provided that any impurities present dissolve in the liquid only, and not in the
solid. Pure substances have a well-defined freezing point, but mixtures generally freeze over a range of temperatures. For many
mixtures, the congealing temperature, as determined by strict adherence to the following empirical methods, is a useful index
of purity. The method for determining congealing temperatures set forth here is applicable to substances that melt between
−20° and 150°, the range of the thermometer used in the bath. The congealing temperature is the maximum point (or lacking
a maximum, the point of inflection) in the temperature-time curve.
Apparatus—Assemble an apparatus similar to that illustrated, in which the container for the substance is a 25- × 100-mm
test tube. This is provided with a suitable, short-range thermometer suspended in the center, and a wire stirrer, about 30 cm
long, bent at its lower end into a horizontal loop around the thermometer. Use a thermometer having a range not exceeding
30°, graduated in 0.1° divisions, and calibrated for, but not used at, 76-mm immersion. A suitable series of thermometers, cov-
ering a range from −20° to +150°, is available as the ASTM E1 series 89C through 96C. Other temperature-measuring devices
may be used if they are validated for this procedure (see Thermometers á21ñ). Dimensions should be within ±20% of those giv-
en in the illustration.
The specimen container is supported, by means of a cork, in a suitable water-tight cylinder about 50 mm in internal diame-
ter and 11 cm in length. The cylinder, in turn, is supported in a suitable bath sufficient to provide not less than a 37-mm layer
surrounding the sides and bottom of the cylinder. The outside bath is provided with a suitable thermometer.
Procedure—Melt the substance, if a solid, at a temperature not exceeding 20° above its expected congealing point, and
pour it into the test tube to a height of 50 to 57 mm. Assemble the apparatus with the bulb of the test tube thermometer
immersed halfway between the top and bottom of the specimen in the test tube. Fill the bath to about 12 mm from the top of
the tube with suitable fluid at a temperature 4° to 5° below the expected congealing point.
In case the substance is a liquid at room temperature, carry out the determination using a bath temperature about 15° be-
low the expected congealing point.
When the test specimen has cooled to about 5° above its expected congealing point, adjust the bath to a temperature 7° to
8° below the expected congealing point. Stir the specimen continuously during the remainder of the test by moving the loop
up and down between the top and bottom of the specimen, at a regular rate of 20 complete cycles per minute.
Congelation frequently may be induced by rubbing the inner walls of the test tube with the thermometer, or by introducing
a small fragment of the previously congealed substance. Pronounced supercooling may cause deviation from the normal pattern of temperature changes. If this occurs, repeat the test, introducing small particles of the material under test in solid form at 1° intervals as the temperature approaches the expected congealing point.
Record the reading of the test tube thermometer every 30 seconds. Continue stirring only so long as the temperature is
gradually falling, stopping when the temperature becomes constant or starts to rise slightly. Continue recording the tempera-
ture in the test tube every 30 seconds for at least 3 minutes after the temperature again begins to fall after remaining constant.
The average of not less than four consecutive readings that lie within a range of 0.2° constitutes the congealing tempera-
ture. These readings lie about a point of inflection or a maximum, in the temperature-time curve, that occurs after the temper-
ature becomes constant or starts to rise and before it again begins to fall. The average to the nearest 0.1° is the congealing
temperature.
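The averaging rule (not fewer than four consecutive 30-second readings spanning no more than 0.2°, averaged to the nearest 0.1°) can be sketched as below. The search strategy and the floating-point tolerance are assumptions; the chapter describes a manual reading of the temperature-time curve:

```python
def congealing_temperature(readings):
    """Return the mean, to the nearest 0.1 deg, of the first and longest run
    of at least four consecutive readings whose range is NMT 0.2 deg;
    None if no such run exists. Illustrative sketch only."""
    eps = 1e-9  # tolerance for floating-point comparison of the 0.2 deg range
    for start in range(len(readings) - 3):
        # try the longest window at this start first
        for end in range(len(readings), start + 3, -1):
            window = readings[start:end]
            if max(window) - min(window) <= 0.2 + eps:
                return round(sum(window) / len(window), 1)
    return None
```

For readings of 52.6, 52.0, 51.6, 51.5, 51.5, 51.6, 51.6, 51.5, 51.4, 51.2, the qualifying plateau averages to 51.5°.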
PACKAGING
Packaging must not interact physically or chemically with official articles in any way that causes their safety, identity,
strength, quality, or purity to fail to conform to requirements.
Packaging container choices are given in this chapter. For drug products and active pharmaceutical ingredients (APIs), the
container choices are tight, well-closed, or, where needed, light-resistant. For excipients, given their typical presentation as
large-volume commodity items (containers ranging from drums to tank cars), a well-closed container is an appropriate default.
For articles other than drug substances and drug products, where no specific directions or limitations are provided, articles
shall be protected from moisture, freezing, and excessive heat, and, where necessary, from light during shipping and distribu-
tion.
The compendial requirements for the use of specified containers apply also to articles as packaged by the pharmacist or oth-
er dispenser, unless otherwise indicated in the individual monograph.
GENERAL DEFINITIONS
Packaging system (also referred to as a container–closure system): The sum of packaging components that together contain and protect the article. This includes primary packaging components and secondary packaging components, if the latter are intended to provide additional protection.
Container: A receptacle that holds an intermediate compound, active pharmaceutical ingredient, excipient, or dosage form
and is or may be in direct contact with the articles. The immediate container is that which is in direct contact with the article
at all times. The closure is a part of the container. Before being filled, the container should be clean. Special precautions and
cleaning procedures may be necessary to ensure that each container is clean and that extraneous matter is not introduced into
or onto the article.
Packaging component: Any single part of the package or container–closure system including the container (e.g., ampuls,
prefilled syringes, vials, bottles); container liners (e.g., tube cartridge liners); closures (e.g., screw caps, stoppers); ferrules and
overseals; closure liners; inner seals; administration ports; overwraps; administration accessories; and labels.
Primary packaging component: Packaging components that are in direct contact or may become in direct contact with the
article.
Secondary packaging component: Packaging components that are not and will not be in direct contact with the article.
Tertiary packaging: Packaging components that are not in direct contact with the article but that facilitate handling and transport and protect the article from damage caused by the physical handling and storage conditions to which it is subjected.
Materials of construction: Refers to the materials (e.g., glass, plastic, elastomers, metal) used to manufacture a packaging
component.
Multiple-dose container (also referred to as multi-dose): A packaging system that permits withdrawal of successive portions
of an article for parenteral administration without changing the safety, strength, quality, or purity of the remaining portion.
See Multi-Dose Containers in Container Content for Injections á697ñ.
Multiple-unit container: A packaging system that permits withdrawal of successive portions of an article without changing
the safety, strength, quality, or purity of the remaining portion.
Single-unit container: A packaging system that holds a quantity of an article intended for administration as a single dose or a
single finished device intended for use promptly after the container is opened. Preferably, the immediate container and/or the
outer container or protective packaging shall be so designed as to show evidence of any tampering with the contents.
Single-dose container: A single-dose container is a container of sterile medication for parenteral administration (injection or
infusion) that is not required to meet the antimicrobial effectiveness testing criteria. [NOTE—For this definition only, container
is synonymous with packaging system and container–closure system.] A single-dose container is designed for use with a single
patient as a single injection/infusion.1 A single-dose container is labeled as such and, when space permits, should include ap-
propriate discard instructions on the label. Examples of single-dose containers are vials, ampuls, and prefilled syringes.
Unit-dose container: A single-unit packaging system for an article intended for administration by other than the parenteral
route as a single dose.
Unit-of-use container: A packaging system that contains a specific quantity of an article that is intended to be dispensed as
such without further modification except for the addition of appropriate labeling. Unit-of-use packaging may not be repack-
aged for sale.
Pharmacy bulk package: A packaging system of a sterile preparation for parenteral use that contains many single doses. The
contents are intended for use in a pharmacy admixture program and are restricted to the preparation of admixtures for infu-
sion or, through a sterile transfer device, for the filling of empty sterile syringes.
The closure shall be penetrated only one time after constitution, if necessary, with a suitable sterile transfer device or dis-
pensing set that allows measured dispensing of the contents. The Pharmacy bulk package is to be used only in a suitable work
area such as a laminar flow hood (or an equivalent clean-air compounding area).
Designation as a Pharmacy bulk package is limited to Injection, for Injection, or Injectable Emulsion dosage forms as defined
in Nomenclature á1121ñ, General Nomenclature Forms.
Pharmacy bulk package, although containing more than one single dose, is exempt from the multiple-dose container volume limit of 30 mL and from the requirement that it contain a substance or suitable mixture of substances to prevent the growth of microorganisms. See Labeling á7ñ for labeling requirements.
Small-volume injection: An injection that is packaged in containers labeled as containing 100 mL or less.
Large-volume injection: An injection that is intended for intravenous use and is packaged in containers labeled as containing more than 100 mL.
Child-resistant packaging: A packaging system designed or constructed to meet Consumer Product Safety Commission
standards pertaining to opening by children (16 CFR §1700.20).
Senior-friendly packaging: A packaging system designed or constructed to meet Consumer Product Safety Commission
standards pertaining to opening by senior adults (16 CFR §1700.20).
Tamper-evident packaging: A packaging system that may not be accessed without obvious destruction of the seal or some
portion of the packaging system. Tamper-evident packaging shall be used for a sterile article intended for ophthalmic or otic
use, except where extemporaneously compounded for immediate dispensing on prescription.
1 Exceptions may be considered only under conditions described in Pharmaceutical Compounding—Sterile Preparations á797ñ.
Official text. Reprinted from First Supplement to USP38-NF33.
Articles intended for sale without prescription are also required to comply with the tamper-evident packaging and labeling requirements of the FDA where
applicable. Preferably, the immediate container and/or the outer container or protective packaging used by a manufacturer or
distributor for all dosage forms that are not specifically exempt is designed so as to show evidence of any tampering with the
contents.
Hermetic container: A packaging system that is impervious to air or any other gas under the ordinary or customary condi-
tions of handling, shipment, storage, and distribution.
Tight container: A packaging system that protects the contents from contamination by extraneous liquids, solids, or vapors;
from loss of the article; and from efflorescence, deliquescence, or evaporation under the ordinary or customary conditions of
handling, shipment, storage, and distribution and is capable of tight reclosure. Where a tight container is specified, it may be
replaced by a hermetic container for a single dose of an article. [NOTE—Where packaging and storage in a tight container or
well-closed container is specified in the individual monograph, the container used for an article when dispensed on prescrip-
tion meets the requirements in Containers—Performance Testing á671ñ.]
Well-closed container: A packaging system that protects the contents from contamination by extraneous solids and from loss
of the article under the ordinary or customary conditions of handling, shipment, storage, and distribution. See Containers—
Performance Testing á671ñ.
Light-resistant container: A packaging system that protects the contents from the effects of light by virtue of the specific
properties of the material of which it is composed, including any coating applied to it. A clear and colorless or a translucent
container may be made light-resistant by means of an opaque covering or by use of secondary packaging, in which case the
label of the container bears a statement that the opaque covering or secondary packaging is needed until the articles are to be
used or administered. Where it is directed to “protect from light” in an individual monograph, preservation in a light-resistant
container is intended. See Containers—Performance Testing á671ñ, Light Transmission Test.
Black closure system or black bands: The use of a black closure system on a vial (e.g., a black cap overseal and a black
ferrule to hold the elastomeric closure) or the use of a black band or series of bands above the constriction on an ampul is
prohibited, except for Potassium Chloride for Injection Concentrate. See Labeling á7ñ.
INJECTION PACKAGING
Validation of container–closure integrity must demonstrate no penetration of microbial contamination or chemical or physi-
cal impurities. In addition, the solutes and the vehicle must maintain their specified total and relative quantities or concentra-
tions when exposed to anticipated extreme conditions of manufacturing and processing, storage, shipment, and distribution.
Closures for multiple-dose packaging systems permit the withdrawal of the contents without removal or destruction of the clo-
sure. The closure permits penetration by a needle and, upon withdrawal of the needle, closes at once, protecting the contents
against contamination. Validation of the multiple-dose container–closure integrity must include verification that such a pack-
age prevents microbial contamination or loss of product contents under anticipated conditions of multiple entry and use.
Piggyback packaging systems are usually intravenous infusion container–closure systems used to administer a second infu-
sion through a connector of some type or an injection port on the administration set of the first fluid, thereby avoiding the
need for another injection site on the patient's body. Piggyback packaging systems also are known as secondary infusion con-
tainers.
The volume of injection in a single-dose packaging system provides the amount specified for one-time parenteral administra-
tion and in no case is more than sufficient to permit the withdrawal and administration of 1 L.
Preparations intended for intraspinal, intracisternal, or peridural administration are packaged only in single-dose packaging
systems.
Unless otherwise specified in the individual monograph, a multiple-dose packaging system contains a volume of injection
sufficient to permit the withdrawal of NMT 30 mL.
The following injections are exempt from the 1-L restriction of the foregoing requirements relating to packaging:
1. Injections packaged for extravascular use as irrigation solutions or peritoneal dialysis solutions.
2. Injections packaged for intravascular use as parenteral nutrition or as replacement or substitution fluid to be administered
continuously during hemofiltration.
Injections packaged for intravascular use that may be used for intermittent, continuous, or bolus replacement fluid adminis-
tration during hemodialysis or other procedures, unless excepted above, must conform to the 1-L restriction. Injections labeled
for veterinary use are exempt from packaging and storage requirements concerning the limitation to single-dose packaging
systems and the limitation on the volume of multiple-dose containers.
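As a rough decision aid only (not part of the compendial text), the volume limits and exemptions above can be captured in a small helper. The function name and the boolean flags are illustrative assumptions; `exempt` stands in for the listed exceptions (irrigation and peritoneal dialysis solutions, parenteral nutrition, hemofiltration replacement fluid).

```python
def max_permitted_volume_ml(packaging, exempt=False, veterinary=False):
    """Return the numeric volume limit (mL) the chapter implies for an
    injection, or None where no limit applies.

    exempt  -- one of the listed exceptions to the 1-L restriction
    veterinary -- injections labeled for veterinary use are exempt
                  from both limits
    """
    if veterinary or exempt:
        return None
    if packaging == "multiple-dose":
        return 30      # withdrawal of NMT 30 mL
    if packaging == "single-dose":
        return 1000    # NMT 1 L withdrawal and administration
    raise ValueError("unknown packaging type: %r" % packaging)

print(max_permitted_volume_ml("single-dose"))              # -> 1000
print(max_permitted_volume_ml("multiple-dose"))            # -> 30
print(max_permitted_volume_ml("single-dose", exempt=True)) # -> None
```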
Sterile solids packaging: Containers, including the closures, for dry solids intended for injection do not interact physically or
chemically with the preparation in any manner to alter the strength, quality, or purity beyond the official requirements under
the ordinary or customary conditions of handling, shipment, storage, sale, and use. A packaging system for a sterile solid per-
mits the addition of a suitable solvent and withdrawal of portions of the resulting solution or suspension in such manner that
the sterility of the product is maintained. Where the Assay in a monograph provides a procedure for the Sample solution, in
which the total withdrawable contents are to be withdrawn from a single-dose packaging system with a hypodermic needle
and syringe, the contents are to be withdrawn as completely as possible into a dry hypodermic syringe of a rated capacity not
exceeding three times the volume to be withdrawn and fitted with a 21-gauge needle NLT 2.5 cm (1 inch) in length, with
care being taken to expel any air bubbles, and discharged into a container for dilution and assay.
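The withdrawal-apparatus constraints in this paragraph reduce to a few comparisons. A minimal sketch (the helper name is illustrative, not compendial):

```python
def withdrawal_apparatus_ok(withdraw_ml, syringe_capacity_ml,
                            needle_gauge, needle_len_cm):
    """Check the sterile-solids withdrawal criteria: dry syringe of
    rated capacity NMT three times the volume to be withdrawn, fitted
    with a 21-gauge needle NLT 2.5 cm (1 inch) in length."""
    return (syringe_capacity_ml <= 3 * withdraw_ml
            and needle_gauge == 21
            and needle_len_cm >= 2.5)

print(withdrawal_apparatus_ok(2.0, 5.0, 21, 3.8))   # True: 5 mL <= 6 mL
print(withdrawal_apparatus_ok(2.0, 10.0, 21, 3.8))  # False: 10 mL > 6 mL
```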
Gas cylinder: A gas cylinder is a metallic packaging system constructed of steel or aluminum designed to hold medical gases
under pressure. Medical gases include Carbon Dioxide USP, Helium USP, Medical Air USP, nitric oxide, Nitrous Oxide USP, Ni-
trogen NF, and Oxygen USP. As a safety measure, for carbon dioxide, cyclopropane, helium, medical air, nitrous oxide, and
oxygen, the Pin-Index Safety System of matched fittings is recommended for cylinders of Size E or smaller.
ASSOCIATED COMPONENTS
Many associated components are graduated for dose administration. It is the responsibility of the manufacturer to ensure
that the appropriate dosing component is provided or that a general purpose component, such as those described in this sec-
tion, is specified for delivering the appropriate dose with the intended accuracy. The graduations should be legible and indeli-
ble.
Graduated associated components described in this section are for general use. Graduated markings should be legible, in-
delible, and on an extraoral nonproduct contact surface. Under ideal conditions of use, the volume error incurred in measuring
liquids for individual dose administration by means of such graduated components should be NMT 10% of the indicated
amount of the liquid preparation with which the graduated component will be used. Few liquid preparations have the same surface and flow characteristics as water; therefore, the volume delivered varies materially from one preparation to another.
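The 10% criterion above is a simple proportional check; a sketch for illustration only (the function name is an assumption):

```python
def within_dose_accuracy(indicated_ml, delivered_ml, tolerance=0.10):
    """General-use graduated-component criterion under ideal conditions:
    volume error NMT 10% of the indicated amount."""
    return abs(delivered_ml - indicated_ml) <= tolerance * indicated_ml

print(within_dose_accuracy(5.0, 5.4))  # True: 0.4 mL error, limit 0.5 mL
print(within_dose_accuracy(5.0, 4.4))  # False: 0.6 mL error exceeds 0.5 mL
```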
Polymers and ingredients added to polymers that are used in the fabrication of associated components must conform to the
requirements in the applicable sections of the Code of Federal Regulations, Title 21, Indirect Food Additives.
Dosing cup: A measuring device consisting of a small cup that is packaged with oral liquid articles or that may be sold and
purchased separately.
Dosing spoon: A measuring device consisting of a bowl and a handle that is packaged with oral liquid articles or that may
be sold and purchased separately. The handle may be a graduated tube.
Medicine dropper: A measuring device consisting of a transparent or translucent barrel or tube that is generally fitted with a
collapsible bulb. It is packaged with oral liquid articles or may be sold and purchased separately.
Droppers typically vary in capacity; however, the delivery end should be a round opening having an external diameter of
about 3 mm. The barrel may be graduated. [NOTE—Few medicinal liquids have the same surface and flow characteristics as
water, and therefore the size of drops varies materially from one preparation to another.]
Oral syringe: A measuring device consisting of a plunger and barrel made of suitable rigid, transparent or translucent plastic
material and a seal on the end. It is packaged with oral liquid articles or may be sold and purchased separately. The syringe
should expel a measured amount of a liquid article directly into the patient's mouth. Finger grips located at the open end of
the barrel should be the appropriate size, shape, and strength, and should allow the syringe to be held securely during use.
The barrel may be graduated.
Teaspoon: A measuring device consisting of a shallow bowl, oval or round, at the end of a handle. A teaspoon has been
established as containing 4.93 ± 0.24 mL. For the practice of administering articles, the teaspoon may be regarded as repre-
senting a volume of 5 mL.
Articles intended for administration by teaspoon should be formulated on the basis of dosage in 5-mL units, such that any
component used to administer liquid articles should deliver 5 mL wherever a teaspoon calibration is indicated. A household
spoon is not an acceptable alternative to the graduated teaspoon described herein.
POISON PREVENTION PACKAGING ACT (PPPA) REQUIREMENTS
The Poison Prevention Packaging Act (PPPA) requires special packaging of most human oral prescription drugs, oral controlled drugs, certain non-oral prescription drugs, certain dietary supplements, and many over-the-counter (OTC) drug preparations in order to protect the public from personal injury or illness from misuse of these preparations (16 CFR §1700.14).
The immediate packaging of substances regulated under the PPPA must comply with the special packaging standards (16 CFR §1700.15 and 16 CFR §1700.16); this requirement applies to all packaging types, including reclosable, nonreclosable, and unit-dose types.
Special packaging is not required either for drugs dispensed within a hospital setting for inpatient administration or by man-
ufacturers and packagers of bulk-packaged prescription drugs repackaged by the pharmacist. PPPA-regulated prescription
drugs may be dispensed in nonchild-resistant packaging upon the request of the purchaser or when directed in a legitimate
prescription (15 U.S.C. §1473).
Manufacturers or packagers of PPPA-regulated OTC preparations are allowed to package one size in nonchild-resistant pack-
aging as long as popular-size, special packages are also supplied. The nonchild-resistant packaging requires special labeling (16
CFR §1700.5).
STORAGE CONDITIONS
Specific directions are stated in some monographs with respect to storage conditions, e.g., the temperature or humidity at
which an article must be stored and shipped. Such directions apply, except where the label on the article has different storage
conditions that are based on stability studies. Where no specific storage conditions are provided in the individual monograph,
but the label of an article states storage conditions based on stability studies, such labeled storage directions apply.
Freezer: A place in which the temperature is maintained between −25° and −10° (−13° and 14 °F).
Refrigerator: A cold place in which the temperature is maintained between 2° and 8° (36° and 46 °F).
Cold: Any temperature not exceeding 8° (46 °F).
Cool: Any temperature between 8° and 15° (46° and 59 °F). [NOTE—An article for which storage in a cool place is directed
may, alternatively, be stored and shipped as refrigerated, unless otherwise specified by the individual monograph.]
Room temperature: The temperature prevailing in a work area.
Controlled room temperature: The temperature maintained thermostatically that encompasses the usual and customary working environment of 20°–25° (68°–77 °F). The following conditions also apply.
The mean kinetic temperature is not to exceed 25°. Excursions between 15° and 30° (59° and 86 °F) that are experienced in pharmacies, hospitals, and warehouses, and during shipping are allowed. Provided the mean kinetic temperature does not exceed
25°, transient spikes up to 40° are permitted as long as they do not exceed 24 h. Spikes above 40° may be permitted only if
the manufacturer so instructs.
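Mean kinetic temperature is not defined in this excerpt. As an illustration only, it is commonly computed with the Haynes formula, taking ΔH/R as 10,000 K (a conventional value from general stability guidance, not stated in this chapter):

```python
import math

def mean_kinetic_temperature(temps_c, dh_over_r=10000.0):
    """Mean kinetic temperature (deg C) of a temperature series (deg C)
    via the Haynes formula, MKT = (dH/R) / (-ln(mean(exp(-dH/(R*T))))),
    with temperatures converted to kelvins."""
    temps_k = [t + 273.15 for t in temps_c]
    mean_exp = sum(math.exp(-dh_over_r / tk) for tk in temps_k) / len(temps_k)
    return dh_over_r / (-math.log(mean_exp)) - 273.15

# Excursions between 15 and 30 deg still comply if MKT stays <= 25 deg
readings = [20, 25, 30, 15, 22, 28]
mkt = mean_kinetic_temperature(readings)
print(round(mkt, 1), "ok" if mkt <= 25 else "exceeds 25")
```

Note that the exponential weighting makes MKT slightly higher than the arithmetic mean, so warm excursions count more heavily than cool ones.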
Articles may be labeled for storage at “controlled room temperature” or at “up to 25°,” or other wording based on the same
mean kinetic temperature.
An article for which storage at Controlled room temperature is directed may, alternatively, be stored and shipped in a cool
place or refrigerated, unless otherwise specified in the individual monograph or on the label.
Warm: Any temperature between 30° and 40° (86° and 104 °F).
Excessive heat: Any temperature above 40° (104 °F).
Dry place: The term “dry place” denotes a place that does not exceed 40% average relative humidity at 20° (68 °F) or the
equivalent water vapor pressure at other temperatures. The determination may be made by direct measurement at the place
or may be based on reported climatic conditions. Determination is based on NLT 12 equally spaced measurements that en-
compass either a season, a year, or, where recorded data demonstrate, the storage period of the article. There may be values
of up to 45% relative humidity provided that the average value does not exceed 40% relative humidity. Storage in a container
validated to protect the article from moisture vapor, including storage in bulk, is considered a dry place.
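The averaging criterion above amounts to a simple check over the recorded measurements; a sketch for illustration only (the function name is an assumption):

```python
def qualifies_as_dry_place(rh_readings):
    """Check the 'dry place' criterion: NLT 12 equally spaced
    relative-humidity measurements, individual values allowed up to
    45% RH, average value not exceeding 40% RH."""
    if len(rh_readings) < 12:
        return False                      # too few measurements
    if any(rh > 45 for rh in rh_readings):
        return False                      # individual excursion too high
    return sum(rh_readings) / len(rh_readings) <= 40

# Twelve monthly readings averaging about 39.3% RH
print(qualifies_as_dry_place([38, 42, 35, 44, 39, 40, 36, 41, 37, 43, 38, 39]))  # True
```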
Protection from freezing: Where, in addition to the risk of breakage of the container, freezing subjects an article to loss of
strength or potency, or to destructive alteration of its characteristics, the container label bears an appropriate instruction to
protect the article from freezing.
Protection from light: Where light subjects an article to loss of strength or potency, or to destructive alteration of its charac-
teristics, the container label bears an appropriate instruction to protect the article from light.
á660ñ CONTAINERS—GLASS
DESCRIPTION
Glass containers for pharmaceutical use are intended to come into direct contact with pharmaceutical products. Glass used
for pharmaceutical containers is either borosilicate (neutral) glass or soda-lime-silica glass. Borosilicate glass contains significant
amounts of boric oxide, aluminum oxide, and alkali and/or alkaline earth oxides in the glass network. Borosilicate glass has a
high hydrolytic resistance and a high thermal shock resistance due to the chemical composition of the glass itself; it is classified
as Type I glass. Soda-lime-silica glass is a silica glass containing alkaline metal oxides, mainly sodium oxide, and alkaline earth
oxides, mainly calcium oxide, in the glass network. Soda-lime-silica glass has a moderate hydrolytic resistance due to the
chemical composition of the glass itself; it is classified as Type III glass. Suitable treatment of the inner surface of Type III soda-
lime-silica glass containers will raise the hydrolytic resistance from a moderate to a high level, changing the classification of the
glass to Type II.
The following recommendations can be made as to the suitability of the glass type for containers for pharmaceutical prod-
ucts, based on the tests for hydrolytic resistance. Type I glass containers are suitable for most products for parenteral and non-
parenteral uses. Type II glass containers are suitable for most acidic and neutral aqueous products for parenteral and nonpar-
enteral uses. Type II glass containers may be used for alkaline parenteral products where stability data demonstrate their suita-
bility. Type III glass containers usually are not used for parenteral products or for powders for parenteral use, except where
suitable stability test data indicate that Type III glass is satisfactory.
The inner surface of glass containers may be treated to improve hydrolytic resistance. The outer surface of glass containers
may be treated to reduce friction or for protection against abrasion or breakage. The outer surface treatment is such that it
does not contaminate the inner surface of the container.
Information on chemical composition of glass types, the formation of glass containers, and factors that influence inner sur-
face durability of glass containers is provided in Evaluation of the Inner Surface Durability of Glass Containers á1660ñ. This chapter
also contains recommended approaches to evaluate the potential of a drug product to cause the formation of glass particles
and delamination.
Glass may be colored to provide protection from light by the addition of small amounts of metal oxides and is tested as
described in Spectral Transmission for Colored Glass Containers. A clear and colorless container that is made light resistant by
means of an opaque enclosure (see Packaging and Storage Requirements á659ñ, Light-Resistant) is exempt from the requirements
for spectral transmission.
SPECIFIC TESTS
The Glass Grains Test combined with the Surface Glass Test for hydrolytic resistance determines the glass type. The hydrolytic
resistance is determined by the quantity of alkali released from the glass under the conditions specified. This quantity of alkali
is extremely small in the case of the more resistant glasses, thus calling for particular attention to all details of the tests and the
use of apparatus of high quality and precision. Conducting these tests in conjunction with a glass standard reference material
(SRM) on a routine basis will help to ensure the accuracy of the method. Reference materials are available for both borosilicate
glass (SRM 623) and soda-lime-silica glass (SRM 622) from the National Institute of Standards and Technology. The tests
should be conducted in an area relatively free from fumes and excessive dust. Test selection is shown in Table 1 and Table 2.
Table 1. Determination of Glass Types
Container Type    Test                 Reason
I, II, III        Glass Grains Test    Distinguishes Type I borosilicate glass from Type II and Type III soda-lime-silica glass
The inner surface of glass containers is the contact surface for pharmaceutical preparations, and the quality of this surface is
determined by the Surface Glass Test for hydrolytic resistance. The Surface Etching Test may be used to determine whether high
hydrolytic resistance is due to chemical composition or to surface treatment. Alternatively, the comparison of data from the Glass Grains Test and the Surface Glass Test may be used, as shown in Table 2.
Table 2. Determination of Inner Surface Hydrolytic Resistance
Container Type    Test                                         Reason
I, II, III        Surface Glass Test                           Determines hydrolytic resistance of the inner surface; distinguishes between Type I and Type II containers with high hydrolytic resistance and Type III containers with moderate hydrolytic resistance
I, II             Surface Etching Test, or comparison of       Where it is necessary, determines whether high hydrolytic resistance is due to inner surface treatment or to the chemical composition of the glass containers
                  Glass Grains Test and Surface Glass Test data
Glass containers must comply with their respective specifications for identity and surface hydrolytic resistance to be classified
as Type I, II, or III glass. Type I or Type II containers for aqueous parenteral products are tested for extractable arsenic.
Hydrolytic Resistance
APPARATUS
Autoclave: For these tests, use an autoclave capable of maintaining a temperature of 121 ± 1°, equipped with a thermome-
ter, or a calibrated thermocouple device, allowing a temperature measurement independent of the autoclave system; a suita-
ble recorder; a pressure gauge; a vent cock; and a tray of sufficient capacity to accommodate the number of containers nee-
ded to carry out the test above the water level. Clean the autoclave and other apparatus thoroughly with Purified Water before
use.
Mortar and pestle: Use a hardened-steel mortar and pestle, made according to the specifications in Figure 1.
Other apparatus: Also required are a set of three square-mesh stainless steel sieves mounted on frames consisting of US
Sieve Nos. 25, 40, and 50 (see Particle Size Distribution Estimation by Analytical Sieving á786ñ, Table 1. Sizes of Standard Sieve
Series in Range of Interest); a mechanical sieve-shaker or a sieving machine that may be used to sieve the grains; a tempered,
magnetic steel hammer; a permanent magnet; weighing bottles; stoppers; metal foil (e.g., aluminum, stainless steel); a hot air
oven, capable of maintaining 140 ± 5°; a balance, capable of weighing up to 500 g with an accuracy of 0.005 g; a desiccator;
and an ultrasonic bath.
REAGENTS
Carbon dioxide-free water: This is Purified Water that has been boiled vigorously for 5 min or more and allowed to cool
while protected from absorption of carbon dioxide from the atmosphere, or Purified Water that has a resistivity of not less than
18 Mohm-cm.
Methyl red solution: Dissolve 50 mg of methyl red in 1.86 mL of 0.1 M sodium hydroxide and 50 mL of ethanol (96%), and
dilute with Purified Water to 100 mL. To test for sensitivity, add 100 mL of carbon dioxide-free water and 0.05 mL of 0.02 M
hydrochloric acid to 0.1 mL of the methyl red solution. The resulting solution should be red. NMT 0.1 mL of 0.02 M sodium
hydroxide is required to change the color to yellow. A color change from red to yellow corresponds to a change in pH from
pH 4.4 (red) to pH 6.0 (yellow).
The Glass Grains Test may be performed either on the canes used for the manufacture of tubing glass containers or on the
containers.
SAMPLE PREPARATION
Rinse the containers to be tested with Purified Water, and dry in the oven. Wrap at least three of the glass articles in clean
paper, and crush to produce two samples of about 100 g each in pieces NMT 30 mm across. Place in the mortar 30–40 g of
the pieces between 10 and 30 mm across taken from one of the samples, insert the pestle, and strike it heavily with the ham-
mer once only. Alternatively, transfer samples into a ball mill-breaker, add the balls, and crush the glass. Transfer the contents
of the mortar or ball mill to the coarsest sieve (No. 25) of the set. Repeat the operation until all fragments have been transfer-
red to the sieve. Shake the set of sieves for a short time by hand, and remove the glass that remains on sieves No. 25 and No.
40. Submit these portions to further fracture, repeating the operation until about 10 g of glass remains on sieve No. 25. Reject
this portion and the portion that passes through sieve No. 50. Reassemble the set of sieves, and shake for 5 min. Transfer to a
weighing bottle the glass grains that passed through sieve No. 40 and are retained on sieve No. 50. Repeat the crushing and
sieving procedure with the second glass sample until two samples of grains are obtained, each of which weighs more than
10 g.
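For illustration only, the sieving step partitions grains into four fates by sieve opening. The nominal US Standard openings below (No. 25 = 710 µm, No. 40 = 425 µm, No. 50 = 300 µm) are an assumption taken from the ASTM E11 sieve series; the chapter defers to á786ñ for the authoritative values, and the function name is hypothetical.

```python
# Nominal US Standard sieve openings in micrometers (assumed from ASTM E11)
SIEVES_UM = {25: 710, 40: 425, 50: 300}

def classify_grain(size_um):
    """Return the fate of a grain of the given size in the
    Glass Grains Test sieving step."""
    if size_um >= SIEVES_UM[25]:
        return "retained on No. 25 (refracture)"
    if size_um >= SIEVES_UM[40]:
        return "retained on No. 40 (refracture)"
    if size_um >= SIEVES_UM[50]:
        return "retained on No. 50 (test fraction)"
    return "passes No. 50 (reject)"

print(classify_grain(350))  # -> retained on No. 50 (test fraction)
```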
Spread each sample on a piece of clean glazed paper, and remove any iron particles by passing the magnet over them.
Transfer each sample into a beaker for cleaning. Add 30 mL of acetone to the grains in each beaker, and scour the grains,
using suitable means such as a rubber-tipped or plastic-coated glass rod. After scouring the grains, allow to settle, and decant
as much acetone as possible. Add another 30 mL of acetone, swirl, decant, and add a new portion of acetone. Fill the bath of
the ultrasonic vessel with water at room temperature, then place the beaker in the rack, and immerse it until the level of the
acetone is at the level of the water; apply the ultrasound for 1 min. Swirl the beaker, allow to settle, and decant the acetone as
completely as possible; then repeat the ultrasonic cleaning operation. If a fine turbidity persists, repeat the ultrasonic cleaning
and acetone washing until the solution remains clear. Swirl, and decant the acetone. Dry the grains, first by putting the beaker
on a warm plate, then by heating at 140° for 20 min in a drying oven. Transfer the dried grains from each beaker into separate
weighing bottles, insert the stoppers, and cool in a desiccator.
METHOD
Filling and heating: Weigh 10.00 g of the cleaned and dried grains into two separate conical flasks. Pipet 50 mL of carbon
dioxide-free Purified Water into each of the conical flasks (test solutions). Pipet 50 mL of carbon dioxide-free Purified Water
into a third conical flask that will serve as a blank. Distribute the grains evenly over the flat bases of the flasks by shaking gen-
tly. Close the flasks with neutral glass dishes or aluminum foil rinsed with Purified Water or with inverted beakers so that the
inner surfaces of the beakers fit snugly down onto the top rims of the flasks. Place all three flasks in the autoclave containing
the water at ambient temperature, and ensure that they are held above the level of the water in the vessel. Carry out the fol-
lowing operations:
1. Insert the end of a calibrated thermocouple into a filled container through a hole of approximately the diameter of the
thermocouple, and connect it to an external measuring device. If the container is too small to insert a thermocouple,
place the thermocouple in a suitable, similar container. Alternatively, use the internal thermometer of the autoclave.
2. Close the autoclave door or lid securely but leave the vent-cock open.
3. Start automatic recording of the temperature versus time, and heat the autoclave at a regular rate such that steam issues
vigorously from the vent-cock after 20–30 min, and maintain a vigorous evolution of steam for a further 10 min. For auto-
claves using a steam generator, it is not necessary to maintain the temperature for 10 min at 100°.
4. Close the vent-cock, and raise the temperature from 100° to 121° at a rate of 1°/min within 20–22 min.
5. Maintain the temperature at 121 ± 1° for 30 ± 1 min from the time when the holding temperature is reached.
6. Cool down to 100° at a rate of 0.5°/min, venting to prevent formation of a vacuum, within 40–44 min.
7. Do not open the autoclave until it has cooled to 95°.
8. Remove the hot samples from the autoclave using appropriate safety precautions, and cool the samples cautiously down
to room temperature within 30 min, avoiding thermal shock.
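The temperature program in steps 3–6 can be checked arithmetically: the ramp windows quoted in the text follow directly from the stated rates. A minimal sketch (the function name is illustrative, not part of the chapter):

```python
# Nominal phase durations for the autoclave program in the Glass Grains Test
# (steps 4-6): ramp time = temperature change / rate. The 20-22 min and
# 40-44 min windows given in the text bracket these nominal values.

def ramp_minutes(start_c: float, end_c: float, rate_c_per_min: float) -> float:
    """Time to ramp between two temperatures at a constant rate, in minutes."""
    return abs(end_c - start_c) / rate_c_per_min

heating = ramp_minutes(100, 121, 1.0)   # 21.0 min, inside the 20-22 min window
holding = 30                            # min at 121 +/- 1 degrees (step 5)
cooling = ramp_minutes(121, 100, 0.5)   # 42.0 min, inside the 40-44 min window
print(heating, holding, cooling)
```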
Titration: To each of the three flasks add 0.05 mL of Methyl red solution. Titrate the blank solution immediately with 0.02 M
hydrochloric acid, then titrate the test solutions until the color matches that obtained with the blank solution. Subtract the
titration volume for the blank solution from that for the test solutions. Calculate the mean value of the results in mL of 0.02 M
hydrochloric acid per g of the sample. Repeat the test if the highest and lowest observed values differ by more than the per-
missible range given in Table 3.
Table 3. Permissible Range for Values Obtained
Mean of the Values Obtained for the Consumption of Hydrochloric Acid Solution per g of Glass Grains (mL/g) | Permissible Range of the Values Obtained
NMT 0.10 | 25% of the mean
0.10–0.20 | 20% of the mean
NLT 0.20 | 10% of the mean
NOTE—Where necessary to obtain a sharp endpoint, decant the clear solution into a separate 250-mL flask. Rinse the grains
by swirling with three 15-mL portions of carbon dioxide-free water, and add the washings to the main solution. Add 0.05 mL
of the Methyl red solution. Titrate, and calculate as before. In this case also add 45 mL of carbon dioxide-free Purified Water and
0.05 mL of Methyl red solution to the blank solution.
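The repeatability requirement against Table 3 can be sketched as below; the treatment of results falling exactly on the 0.10 and 0.20 mL/g boundaries is an assumption, since the table rows share those endpoints:

```python
# A minimal sketch of the Table 3 repeatability check for the Glass Grains
# Test. Results are in mL of 0.02 M HCl per g of sample; boundary handling
# at exactly 0.10 and 0.20 mL/g is an assumption (the table rows overlap).

def permissible_range(mean_ml_per_g: float) -> float:
    if mean_ml_per_g <= 0.10:
        return 0.25 * mean_ml_per_g
    if mean_ml_per_g <= 0.20:
        return 0.20 * mean_ml_per_g
    return 0.10 * mean_ml_per_g

def repeat_needed(results_ml_per_g: list[float]) -> bool:
    """True if the highest-lowest spread exceeds the permissible range."""
    mean = sum(results_ml_per_g) / len(results_ml_per_g)
    return (max(results_ml_per_g) - min(results_ml_per_g)) > permissible_range(mean)

# Two grain samples with a spread of 0.02 mL/g against a mean of 0.15 mL/g
# (permissible range 20% of the mean = 0.03 mL/g), so the result stands:
print(repeat_needed([0.14, 0.16]))  # False
```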
LIMITS
The filling volume is the volume of Purified Water to be added to the container for the purpose of the test. For vials, bottles,
cartridges, and syringes, the filling volume is 90% of the brimful capacity. For ampuls, it is the volume up to the height of the
shoulder.
Vials and bottles: Select six dry vials or bottles from the sample lot, or three if their capacity exceeds 100 mL, and remove
any dirt or debris. Weigh the empty containers with an accuracy of 0.1 g. Place the containers on a horizontal surface, and fill
them with Purified Water to about the rim edge, avoiding overflow and the introduction of air bubbles. Adjust the liquid levels
to the brimful line. Weigh the filled containers to obtain the mass of the water, expressed to two decimal places for contain-
ers having a nominal volume of 30 mL or less and to one decimal place for containers having a nominal volume greater than
30 mL. Calculate the mean value of the brimful capacity in mL, and multiply it by 0.9. This volume, expressed to one decimal
place, is the filling volume for the particular container lot.
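The brimful-capacity calculation above can be sketched as follows, taking 1 g of water as 1 mL (an approximation the chapter leaves implicit; the function name and sample masses are illustrative):

```python
# Filling volume for vials and bottles: brimful capacity from the mass of
# water in each container (1 g taken as 1 mL, an assumption), mean x 0.9,
# reported to one decimal place.

def filling_volume(empty_g: list[float], filled_g: list[float]) -> float:
    capacities = [f - e for e, f in zip(empty_g, filled_g)]  # brimful, in mL
    mean_brimful = sum(capacities) / len(capacities)
    return round(0.9 * mean_brimful, 1)

# Six hypothetical vials with brimful capacities near 12.5 mL:
empty  = [10.11, 10.05, 10.20, 10.08, 10.15, 10.02]
filled = [22.61, 22.55, 22.73, 22.58, 22.63, 22.52]
print(filling_volume(empty, filled))  # 11.3
```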
Cartridges and syringes: Select six dry syringes or cartridges, and seal the small opening (mouth of cartridges; Luer cone or
staked needle of syringes), using an inert material. Determine the mean brimful capacity and filling volume according to Vials
and Bottles.
Ampuls: Place at least six dry ampuls on a flat, horizontal surface, and fill them with Purified Water from a buret until the
water reaches point A, where the body of the ampul starts to decrease to the shoulder of the ampul (see Figure 2). Read the
capacities, expressed to two decimal places, and calculate the mean value. This volume, expressed to one decimal place, is the
filling volume for the particular ampul lot. The filling volume may also be determined by weighing.
TEST
The determination is carried out on unused containers. The volumes of the test solution necessary for the final determination
are shown in Table 5.
Table 5. Volume of Test Solution and Number of Titrations
Filling Volume (mL) | Volume of Test Liquid for One Titration (mL) | Number of Titrations
NMT 3 | 25.0 | 1
3–30 | 50.0 | 2
30–100 | 100.0 | 2
NLT 100 | 100.0 | 3
METHOD
Cleaning: Remove any debris or dust. Shortly before the test, rinse each container carefully at least twice with Purified Water,
refill, and allow to stand. Immediately before testing, empty the containers; rinse once with Purified Water, then with carbon
dioxide-free water; and allow to drain. Complete the cleaning procedure from the first rinsing within 20–30 min. Closed
ampuls may be warmed in a water bath or in an air oven at about 40° for approximately 2 min before opening, to avoid
excess container pressure on opening. Do not rinse before testing.
Filling and heating: The containers are filled with carbon dioxide-free water up to the filling volume. Containers in the form
of cartridges or prefillable syringes are closed in a suitable manner with material that does not interfere with the test. Each
container, including ampuls, shall be loosely capped with an inert material such as a dish of neutral glass or aluminum foil
previously rinsed with Purified Water. Place the containers on the tray of the autoclave. Place the tray in an autoclave contain-
ing a quantity of water such that the tray remains clear of the water. Close the autoclave, and carry out autoclaving procedure
steps 1–8 as described in the Glass Grains Test, except that the temperature is maintained at 121 ± 1° for 60 ± 1 min. If a water
bath is used for cooling samples, take care that the water does not make contact with the loose foil caps to avoid contamina-
tion of the extraction solution. The extraction solutions are analyzed by titration according to the method described below.
Titration: Carry out the titration within 1 h of the removal of the containers from the autoclave. Combine the liquids ob-
tained from the containers, and mix. Introduce the prescribed volume (see Table 5) into a conical flask. Transfer the same vol-
ume of carbon dioxide-free water, to be used as a blank, into a second similar flask. Add to each flask 0.05 mL of Methyl red
solution for each 25 mL of liquid. Titrate the blank with 0.01 M hydrochloric acid. Titrate the test solution with the same acid
until the color of the resulting solution is the same as that obtained for the blank. Subtract the value found for the blank titra-
tion from that found for the test solution, and express the results in mL of 0.01 M hydrochloric acid per 100 mL of test solu-
tion. Express titration values of less than 1.0 mL to two decimal places; express titration values of greater than or equal to 1.0
mL to one decimal place.
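The blank correction, scaling to 100 mL, and decimal-place rule described above can be sketched as follows (function name and example volumes are illustrative):

```python
# Surface Glass Test result: blank-corrected titration volume of 0.01 M HCl,
# scaled to 100 mL of test solution, with the chapter's rounding rule
# (two decimal places below 1.0 mL, one decimal place at or above 1.0 mL).

def result_per_100ml(test_ml: float, blank_ml: float, aliquot_ml: float) -> float:
    net = test_ml - blank_ml                 # mL of 0.01 M HCl consumed
    per_100 = net * 100.0 / aliquot_ml
    return round(per_100, 2 if per_100 < 1.0 else 1)

# A 50.0-mL aliquot (Table 5, 3-30 mL filling volume): 0.61 mL for the
# test solution against a 0.05-mL blank titration.
print(result_per_100ml(0.61, 0.05, 50.0))  # 1.1
```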
LIMITS
The results, or the average of the results if more than one titration is performed, are not greater than the values stated in
Table 6.
Table 6. Limit Values for the Surface Glass Test
Maximum Volume of 0.01 M HCl per 100 mL of Test Solution (mL):
Filling Volume (mL) | Types I and II | Type III
NMT 1 | 2.0 | 20.0
1–2 | 1.8 | 17.6
2–3 | 1.6 | 16.1
3–5 | 1.3 | 13.2
5–10 | 1.0 | 10.2
10–20 | 0.80 | 8.1
20–50 | 0.60 | 6.1
50–100 | 0.50 | 4.8
100–200 | 0.40 | 3.8
200–500 | 0.30 | 2.9
NLT 500 | 0.20 | 2.2
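The Table 6 comparison reduces to a lookup by filling volume and glass type; a sketch follows, in which assigning boundary values to the lower row is an assumption, since the table's ranges share endpoints:

```python
# Table 6 lookup: maximum mL of 0.01 M HCl per 100 mL of test solution.
# Rows are (upper filling volume in mL, Types I and II limit, Type III limit);
# boundary values go to the lower row, an assumption.

TABLE_6 = [
    (1, 2.0, 20.0), (2, 1.8, 17.6), (3, 1.6, 16.1), (5, 1.3, 13.2),
    (10, 1.0, 10.2), (20, 0.80, 8.1), (50, 0.60, 6.1), (100, 0.50, 4.8),
    (200, 0.40, 3.8), (500, 0.30, 2.9), (float("inf"), 0.20, 2.2),
]

def hcl_limit(filling_volume_ml: float, glass_type: str) -> float:
    col = 1 if glass_type in ("I", "II") else 2
    for row in TABLE_6:
        if filling_volume_ml <= row[0]:
            return row[col]
    raise ValueError("unreachable: last row covers all volumes")

print(hcl_limit(11.3, "I"))    # 0.8
print(hcl_limit(11.3, "III"))  # 8.1
```

A titration result (or the average of the results) passes when it does not exceed the limit returned for the container's filling volume and type.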
The Surface Etching Test is used in addition to the Surface Glass Test when it is necessary to determine whether a container
has been surface treated and/or to distinguish between Type I and Type II glass containers. Alternatively, the Glass Grains Test
and Surface Glass Test may be used. The Surface Etching Test may be carried out either on unused samples or on samples used
in the Surface Glass Test.
METHOD
Vials and bottles: The volumes of test solution required are shown in Table 5. Rinse the containers twice with Purified Water,
fill to the brimful point with a mixture of one volume of hydrofluoric acid and nine volumes of hydrochloric acid, and allow to
stand for 10 min. Empty the containers, and rinse carefully five times with Purified Water. Immediately before the test, rinse
once again with Purified Water. Submit these containers to the same autoclaving and determination procedure as described in
the Surface Glass Test. If the results are considerably higher than those obtained from the original surfaces (by a factor of about
5–10), the samples have been surface treated. [Caution—Hydrofluoric acid is extremely aggressive. Even small quantities can
cause life threatening injuries.]
Ampuls, cartridges, and syringes: Apply the test method as described in Vials and Bottles. If the ampuls, cartridges, and sy-
ringes are not surface treated, the values obtained are slightly lower than those obtained in the previous tests. [NOTE—Ampuls,
cartridges, and syringes made from Type I glass tubing are not normally subjected to internal surface treatment.]
The results obtained from the Surface Etching Test are compared to those obtained from the Surface Glass Test. For Type I
glass containers, the values obtained are close to those found in the Surface Glass Test. For Type II glass containers, the values
obtained greatly exceed those found in the Surface Glass Test; and they are similar to, but not greater than, those obtained for
Type III glass containers of the same filling volume.
IMPURITIES
Arsenic 〈211〉
Use as the Test Preparation 35 mL of the water from one Type I or Type II glass container, or, in the case of smaller contain-
ers, 35 mL of the combined contents of several Type I or Type II glass containers, prepared as directed in the Surface Glass Test.
The limit is 0.1 µg/g.
FUNCTIONALITY
APPARATUS
A UV-Vis spectrophotometer, equipped with either a photodiode detector or a photomultiplier tube coupled with an inte-
grating sphere
SAMPLE PREPARATION
Break the glass container or cut it with a circular saw fitted with a wet abrasive wheel, such as a carborundum or a bonded
diamond wheel. Select sections representative of the wall thickness, and trim them as suitable for mounting in a spectropho-
tometer. After cutting, wash and dry each specimen, taking care to avoid scratching the surfaces. If the specimen is too small
to cover the opening in the specimen holder, mask the uncovered portion of the opening with opaque paper or tape, provi-
ded that the length of the specimen is greater than that of the slit. Before placing in the holder, wash, dry, and wipe the speci-
men with lens tissue. Mount the specimen with the aid of wax, or by other convenient means, taking care to avoid leaving
fingerprints or other marks.
METHOD
Place the specimen in the spectrophotometer with its cylindrical axis parallel to the slit and in such a way that the light
beam is perpendicular to the surface of the section and the losses due to reflection are at a minimum. Measure the transmis-
sion of the specimen with reference to air in the spectral region of 290–450 nm, continuously or at intervals of 20 nm.
LIMITS
The observed spectral transmission for colored glass containers for products for nonparenteral use does not exceed 10% at
any wavelength in the range of 290–450 nm, irrespective of the type and capacity of the glass container. The observed spec-
tral transmission in colored glass containers for parenteral products does not exceed the limits given in Table 7.
Table 7. Limits of Spectral Transmission for Colored Glass Containers for Parenteral Products
Maximum Percentage of Spectral Transmission at Any Wavelength between 290 nm and 450 nm:
Nominal Volume (mL) | Flame-Sealed Containers | Containers with Closures
NMT 1 | 50 | 25
1–2 | 45 | 20
2–5 | 40 | 15
5–10 | 35 | 13
10–20 | 30 | 12
NLT 20 | 25 | 10
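The Table 7 criterion can be sketched as below: every measured transmission in the 290–450 nm region must stay at or below the limit for the container's nominal volume and closure type (names and the example scan are illustrative):

```python
# Table 7 check for colored glass containers for parenteral products.
# Rows are (upper nominal volume in mL, flame-sealed limit %, closure limit %);
# boundary volumes go to the lower row, an assumption.

TABLE_7 = [
    (1, 50, 25), (2, 45, 20), (5, 40, 15), (10, 35, 13),
    (20, 30, 12), (float("inf"), 25, 10),
]

def passes_table_7(transmissions_pct: list[float], nominal_ml: float,
                   flame_sealed: bool) -> bool:
    for row in TABLE_7:
        if nominal_ml <= row[0]:
            limit = row[1] if flame_sealed else row[2]
            return max(transmissions_pct) <= limit
    return False  # unreachable: last row covers all volumes

# A hypothetical 2-mL flame-sealed ampul scanned at 20-nm intervals:
scan = [3.1, 4.8, 7.9, 12.4, 18.0, 22.6, 25.3, 28.9]
print(passes_table_7(scan, 2, flame_sealed=True))  # True
```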
〈661〉 CONTAINERS—PLASTICS
INTRODUCTION
It is the purpose of this chapter to provide standards for plastic materials and components used to package medical articles
(pharmaceuticals, biologics, dietary supplements, and devices). Definitions that apply to this chapter are provided in 〈659〉
Packaging and Storage Requirements. Standards and tests for the functional properties of containers and their components are
provided in general chapter Containers—Performance Testing 〈671〉.
In addition to the standards provided herein, the ingredients added to the polymers, and those used in the fabrication of the
containers, must conform to the requirements in the applicable sections of the Code of Federal Regulations, Title 21, Indirect
Food Additives, or have been evaluated by the FDA and determined to be acceptable substances for the listed use.
Plastic articles are identified and characterized by IR spectroscopy and differential scanning calorimetry. Standards are provi-
ded in this chapter for the identification and characterization of the different types of plastic, and the test procedures are provi-
ded at the end of the chapter. The degree of testing is based on whether or not the container has direct contact with the drug
product, and the risk is based on the route of administration.
Plastics are composed of a mixture of homologous polymers, having a range of molecular weights. Plastics may contain oth-
er substances such as residues from the polymerization process, plasticizers, stabilizers, antioxidants, pigments, and lubricants.
These materials meet the requirements for food contact as provided in the Code of Federal Regulations, Title 21. Factors such as
plastic composition, processing and cleaning procedures, surface treatment, contacting media, inks, adhesives, absorption and
permeability of preservatives, and conditions of storage may also affect the suitability of a plastic for a specific use. Extraction
tests are designed to characterize the extracted components and identify possible migrants. The degree or extent of testing for
extractables of the component is dependent on the intended use and the degree of risk to adversely impact the efficacy of the
compendial article (drug, biologic, dietary supplement, or device). Resin-specific extraction tests are provided in this chapter
for polyethylene, polypropylene, polyethylene terephthalate, and polyethylene terephthalate G. Test all other plastics as direc-
ted for Physicochemical Tests in the section Test Methods. Conduct the Buffering Capacity test only when the containers are in-
tended to hold a liquid product.
Plastic components used for high-risk products, such as those intended for inhalation, parenteral, or ophthalmic use, are
tested using the Biological Tests in the section Test Methods.
Plastic containers intended for packaging products prepared for parenteral use meet the requirements for Biological Tests and
Physicochemical Tests in the section Test Methods. Standards are also provided for polyethylene containers used to package dry
oral dosage forms that are not meant for constitution into solution.
POLYETHYLENE CONTAINERS
Scope
The standards and tests provided in this section characterize containers and components, produced from either low-density
polyethylene or high-density polyethylene of either homopolymer or copolymer resins that are interchangeably suitable for
packaging dry oral dosage forms not meant for constitution into solution. All polyethylene components are subject to testing
by IR spectroscopy and differential scanning calorimetry. Where stability studies have been performed to establish the expira-
tion date of a particular dosage form in the appropriate polyethylene container, then any other polyethylene container meet-
ing these requirements may be similarly used to package such a dosage form, provided that the appropriate stability programs
are expanded to include the alternative container, in order to ensure that the identity, strength, quality, and purity of the dos-
age form are maintained throughout the expiration period.
Background
High-density and low-density polyethylene are long-chain polymers synthesized under controlled conditions of heat and
pressure, with the aid of catalysts from not less than 85.0% ethylene and not less than 95.0% total olefins. Other olefin ingre-
dients that are most frequently used are butene, hexene, and propylene. High-density polyethylene and low-density polyethy-
lene both have an IR absorption spectrum that is distinctive for polyethylene, and each possesses characteristic thermal proper-
ties. High-density polyethylene has a density between 0.941 and 0.965 g per cm3. Low-density polyethylene has a density be-
tween 0.850 and 0.940 g per cm3. Other properties that may affect the suitability of polyethylene include modulus of elastici-
ty, melt index, environmental stress crack resistance, and degree of crystallinity after molding.
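The density ranges quoted above can be expressed as a simple classification (a sketch; the function name and boundary handling are illustrative):

```python
# Classify a polyethylene resin by the density ranges quoted in the text:
# high-density 0.941-0.965 g/cm3, low-density 0.850-0.940 g/cm3. Values
# outside both ranges fall outside the scope of this section.

def polyethylene_class(density_g_cm3: float) -> str:
    if 0.941 <= density_g_cm3 <= 0.965:
        return "high-density"
    if 0.850 <= density_g_cm3 <= 0.940:
        return "low-density"
    return "outside the stated polyethylene ranges"

print(polyethylene_class(0.952))  # high-density
print(polyethylene_class(0.918))  # low-density
```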
High-Density Polyethylene
Infrared Spectroscopy—Proceed as directed for Multiple Internal Reflectance in the section Test Methods. The corrected
spectrum of the specimen exhibits major absorption bands only at the same wavelengths as the spectrum of USP High-Density
Polyethylene RS.
Differential Scanning Calorimetry—Proceed as directed for Thermal Analysis in the section Test Methods. The thermogram
of the specimen is similar to the thermogram of USP High-Density Polyethylene RS, similarly determined, and the temperature
of the endotherm (melt) in the thermogram of the specimen does not differ from that of the USP Reference Standard by more
than 6.0°.
Heavy Metals and Nonvolatile Residue—Prepare extracts of specimens for these tests as directed for Physicochemical Tests
under Test Methods, except that for each 20.0 mL of Extracting Medium the portion shall be 60 cm2, regardless of thickness.
HEAVY METALS—Containers meet the requirements for Heavy Metals in the section Physicochemical Tests under Test Methods.
NONVOLATILE RESIDUE—Proceed as directed for Nonvolatile Residue under Physicochemical Tests, except that the Blank shall be
the same solvent used in each of the following test conditions: the difference between the amounts obtained from the Sample
Preparation and the Blank does not exceed 12.0 mg when water maintained at a temperature of 70° is used as the Extracting
Medium; does not exceed 75.0 mg when alcohol maintained at a temperature of 70° is used as the Extracting Medium; and
does not exceed 100.0 mg when hexanes maintained at a temperature of 50° is used as the Extracting Medium.
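The nonvolatile-residue acceptance criteria above reduce to a per-medium lookup and a comparison; a sketch (the dictionary keys are illustrative labels, not chapter terminology):

```python
# Nonvolatile-residue check for high-density polyethylene extracts: the
# sample-minus-blank residue must not exceed the limit for the Extracting
# Medium used, per the limits stated in the text.

HDPE_NVR_LIMIT_MG = {          # Extracting Medium -> maximum difference, mg
    "water (70 deg)": 12.0,
    "alcohol (70 deg)": 75.0,
    "hexanes (50 deg)": 100.0,
}

def nvr_passes(sample_mg: float, blank_mg: float, medium: str) -> bool:
    return (sample_mg - blank_mg) <= HDPE_NVR_LIMIT_MG[medium]

# Hypothetical residues: 18.4 mg from the Sample Preparation, 7.1 mg from
# the Blank, water extraction (difference 11.3 mg against a 12.0-mg limit):
print(nvr_passes(18.4, 7.1, "water (70 deg)"))  # True
```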
Components Used in Contact with Oral Liquids—Proceed as directed for Buffering Capacity in the section Physicochemical
Tests under Test Methods.
Low-Density Polyethylene
Infrared Spectroscopy—Proceed as directed for Multiple Internal Reflectance under Test Methods. The corrected spectrum of
the specimen exhibits major absorption bands only at the same wavelengths as the spectrum of USP Low-Density Polyethylene
RS.
Differential Scanning Calorimetry—Proceed as directed for Thermal Analysis under Test Methods. The thermogram of the
specimen is similar to the thermogram of USP Low-Density Polyethylene RS, similarly determined, and the temperature of the
endotherm (melt) in the thermogram of the specimen does not differ from that of the USP Reference Standard by more than
8.0°.
Heavy Metals and Nonvolatile Residue—Prepare extracts of specimens for these tests as directed for Sample Preparation in
the section Physicochemical Tests under Test Methods, except that for each 20.0 mL of Extracting Medium the portion shall be
60 cm2, regardless of thickness.
HEAVY METALS—Containers meet the requirements for Heavy Metals in the section Physicochemical Tests under Test Methods.
NONVOLATILE RESIDUE—Proceed as directed for Nonvolatile Residue in the section Physicochemical Tests under Test Methods,
except that the Blank shall be the same solvent used in each of the following test conditions: the difference between the
amounts obtained from the Sample Preparation and the Blank does not exceed 12.0 mg when water maintained at a tempera-
ture of 70° is used as the Extracting Medium; does not exceed 75.0 mg when alcohol maintained at a temperature of 70° is
used as the Extracting Medium; and does not exceed 350.0 mg when hexanes maintained at a temperature of 50° is used as
the Extracting Medium.
Components Used in Contact with Oral Liquids—Proceed as directed for Buffering Capacity in the section Physicochemical
Tests under Test Methods.
POLYPROPYLENE CONTAINERS
Scope
The standards and tests provided in this section characterize polypropylene containers, produced from either homopolymers
or copolymers, that are interchangeably suitable for packaging dry solid and liquid oral dosage forms. Where suitable stability
studies have been performed to establish the expiration date of a particular dosage form in the appropriate polypropylene
container, then any other polypropylene container meeting these requirements may be similarly used to package such a dos-
age form, provided that the appropriate stability programs are expanded to include the alternative container, in order to en-
sure that the identity, strength, quality, and purity of the dosage form are maintained throughout the expiration period.
Background
Propylene polymers are long-chain polymers synthesized from propylene or propylene and other olefins under controlled
conditions of heat and pressure, with the aid of catalysts. Examples of other olefins most commonly used include ethylene and
butene. The propylene polymers, the ingredients used to manufacture the propylene polymers, and the ingredients used in
the fabrication of the containers conform to the applicable sections of the Code of Federal Regulations, Title 21.
Factors such as plastic composition, processing and cleaning procedures, contacting media, inks, adhesives, absorption, ad-
sorption and permeability of preservatives, and conditions of storage may also affect the suitability of a plastic for a specific
use. The suitability of a specific polypropylene must be established by appropriate testing.
Polypropylene has a distinctive IR spectrum and possesses characteristic thermal properties. It has a density between 0.880
and 0.913 g per cm3. The permeation properties of molded polypropylene containers may be altered when reground polymer
is incorporated, depending on the proportion of reground material in the final product. Other properties that may affect the
suitability of polypropylene used in containers for packaging drugs are the following: oxygen and moisture permeability, mod-
ulus of elasticity, melt flow index, environmental stress crack resistance, and degree of crystallinity after molding. The require-
ments in this section are to be met when dry solid and liquid oral dosage forms are to be packaged in a container defined by
this section.
Infrared Spectroscopy—Proceed as directed for Multiple Internal Reflectance under Test Methods. The corrected spectrum of
the specimen exhibits major absorption bands only at the same wavelengths as the spectrum of the respective USP Homopoly-
mer Polypropylene RS or copolymer polypropylene standard, similarly determined.
Differential Scanning Calorimetry—Proceed as directed for Thermal Analysis under Test Methods. The temperature of the
endotherm (melt) in the thermogram does not differ from that of the USP Reference Standard for homopolymers by more
than 6.0°. The temperature of the endotherm obtained from the thermogram of the copolymer polypropylene specimen does
not differ from that of the copolymer polypropylene standard by more than 12.0°.
Heavy Metals and Nonvolatile Residue—Prepare extracts of specimens for these tests as directed for Sample Preparation in
the section Physicochemical Tests under Test Methods, except that for each 20 mL of Extracting Medium the portion shall be 60
cm2, regardless of thickness.
HEAVY METALS—Containers meet the requirements for Heavy Metals in the section Physicochemical Tests under Test Methods.
NONVOLATILE RESIDUE—Proceed as directed for Nonvolatile Residue in the section Physicochemical Tests under Test Methods,
except that the Blank shall be the same solvent used in each of the following test conditions: the difference between the
amounts obtained from the Sample Preparation and the Blank does not exceed 10.0 mg when water maintained at a tempera-
ture of 70° is used as the Extracting Medium; does not exceed 60.0 mg when alcohol maintained at a temperature of 70° is
used as the Extracting Medium; and does not exceed 225.0 mg when hexanes maintained at a temperature of 50° is used as
the Extracting Medium. Containers meet these requirements for Nonvolatile Residue for all of the above extracting media.
[NOTE—Hexanes and alcohol are flammable. When evaporating these solvents, use a current of air with the water bath; when
drying the residue, use an explosion-proof oven.]
Components Used in Contact with Oral Liquids—Proceed as directed for Buffering Capacity in the section Physicochemical
Tests under Test Methods.
Scope
The standards and tests provided in this section characterize polyethylene terephthalate (PET) and polyethylene terephtha-
late G (PETG) bottles that are interchangeably suitable for packaging liquid oral dosage forms. Where stability studies have
been performed to establish the expiration date of a particular liquid oral dosage form in a bottle meeting the requirements
set forth herein for either PET or PETG bottles, any other PET or PETG bottle meeting these requirements may be similarly used
to package such a dosage form, provided that the appropriate stability programs are expanded to include the alternative bot-
tle in order to ensure that the identity, strength, quality, and purity of the dosage form are maintained throughout the expira-
tion period. The suitability of a specific PET or PETG bottle for use in the dispensing of a particular pharmaceutical liquid oral
dosage form must be established by appropriate testing.
Background
PET resins are long-chain crystalline polymers prepared by the condensation of ethylene glycol with dimethyl terephthalate
or terephthalic acid. PET copolymer resins are prepared in a similar way, except that they may also contain a small amount of
either isophthalic acid (not more than 3 mole percent) or 1,4-cyclohexanedimethanol (not more than 5 mole percent). Poly-
merization is conducted under controlled conditions of heat and vacuum, with the aid of catalysts and stabilizers.
PET copolymer resins have physical and spectral properties similar to PET and for practical purposes are treated as PET. The
tests and specifications provided in this section to characterize PET resins and bottles apply also to PET copolymer resins and to
bottles fabricated from them.
PET and PET copolymer resins generally exhibit a large degree of order in their molecular structure. As a result, they exhibit
characteristic composition-dependent thermal behavior, including a glass transition temperature of about 76° and a melting
temperature of about 250°. These resins have a distinctive IR absorption spectrum that allows them to be distinguished from
other plastic materials (e.g., polycarbonate, polystyrene, polyethylene, and PETG resins). PET and PET copolymer resins have a
density between 1.3 and 1.4 g per cm3 and a minimum intrinsic viscosity of 0.7 dL per g, which corresponds to a number
average molecular weight of about 23,000 Da.
PETG resins are high molecular weight polymers prepared by the condensation of ethylene glycol with dimethyl terephtha-
late or terephthalic acid and 15 to 34 mole percent of 1,4-cyclohexanedimethanol. PETG resins are clear, amorphous poly-
mers, having a glass transition temperature of about 81° and no crystalline melting point, as determined by differential scan-
ning calorimetry. PETG resins have a distinctive IR absorption spectrum that allows them to be distinguished from other plastic
materials, including PET. PETG resins have a density of approximately 1.27 g per cm3 and a minimum intrinsic viscosity of 0.65
dL per g, which corresponds to a number average molecular weight of about 16,000 Da.
PET and PETG resins, and other ingredients used in the fabrication of these bottles, conform to the requirements in the ap-
plicable sections of the Code of Federal Regulations, Title 21, regarding use in contact with food and alcoholic beverages. PET
and PETG resins do not contain any plasticizers, processing aids, or antioxidants. Colorants, if used in the manufacture of PET
and PETG bottles, do not migrate into the contained liquid.
Infrared Spectroscopy—Proceed as directed under Multiple Internal Reflectance in the section Test Methods. The corrected
spectrum of the specimen exhibits major absorption bands only at the same wavelengths as the spectrum of USP Polyethylene
Terephthalate RS, or USP Polyethylene Terephthalate G RS, similarly determined.
Differential Scanning Calorimetry—Proceed as directed under Thermal Analysis in the section Test Methods. For polyethy-
lene terephthalate, the thermogram of the specimen is similar to the thermogram of USP Polyethylene Terephthalate RS, simi-
larly determined: the melting point (Tm) of the specimen does not differ from that of the USP Reference Standard by more
than 9.0°, and the glass transition temperature (Tg) of the specimen does not differ from that of the USP Reference Standard
by more than 4.0°. For polyethylene terephthalate G, the thermogram of the specimen is similar to the thermogram of USP
Polyethylene Terephthalate G RS, similarly determined: the glass transition temperature (Tg) of the specimen does not differ
from that of the USP Reference Standard by more than 6.0°.
Colorant Extraction—Select three test bottles. Cut a relatively flat portion from the side wall of one bottle, and trim it as
necessary to fit the sample holder of the spectrophotometer. Obtain the visible spectrum of the side wall by scanning the por-
tion of the visible spectrum from 350 to 700 nm. Determine, to the nearest 2 nm, the wavelength of maximum absorbance.
Fill the remaining two test bottles, using 50% alcohol for PET bottles and 25% alcohol for PETG bottles. Fit the bottles with
impervious seals, such as aluminum foil, and apply closures. Fill a glass bottle having the same capacity as that of the test bot-
tles with the corresponding solvent, fit the bottle with an impervious seal, such as aluminum foil, and apply a closure. Incubate
the test bottles and the glass bottle at 49° for 10 days. Remove the bottles, and allow them to equilibrate to room tempera-
ture. Concomitantly determine the absorbances of the test solutions in 5-cm cells at the wavelength of maximum absorbance
(see Spectrophotometry and Light-Scattering 〈851〉), using the corresponding solvent from the glass bottle as the blank. The
absorbance values so obtained are less than 0.01 for both test solutions.
Heavy Metals, Total Terephthaloyl Moieties, and Ethylene Glycol—
EXTRACTING MEDIA—
Purified Water—(see monograph).
50 Percent Alcohol—Dilute 125 mL of alcohol with water to 238 mL, and mix.
25 Percent Alcohol—Dilute 125 mL of 50 Percent Alcohol with water to 250 mL, and mix.
n-Heptane.
GENERAL PROCEDURE—[NOTE—Use an Extracting Medium of 50 Percent Alcohol for PET bottles and 25 Percent Alcohol for PETG
bottles.] For each Extracting Medium, fill a sufficient number of test bottles to 90% of their nominal capacity to obtain not less
than 30 mL. Fill a corresponding number of glass bottles with Purified Water, a corresponding number of glass bottles with 50
Percent Alcohol or 25 Percent Alcohol, and a corresponding number of glass bottles with n-Heptane for use as Extracting Media
blanks. Fit the bottles with impervious seals, such as aluminum foil, and apply closures. Incubate the test bottles and the glass
bottles at 49° for 10 days. Remove the test bottles with the Extracting Media samples and the glass bottles with the Extracting
Media blanks, and store them at room temperature. Do not transfer the Extracting Media samples to alternative storage vessels.
HEAVY METALS—Pipet 20 mL of the Purified Water extract of the test bottles, filtered if necessary, into one of two matched 50-
mL color-comparison tubes, and retain the remaining Purified Water extract in the test bottles for use in the test for Ethylene
Glycol. Adjust the extract with 1 N acetic acid or 6 N ammonium hydroxide to a pH between 3.0 and 4.0, using short-range
pH paper as an external indicator. Dilute with water to about 35 mL, and mix.
Into the second color-comparison tube, pipet 2 mL of freshly prepared (on day of use) Standard Lead Solution (see Heavy
Metals á231ñ), and add 20 mL of Purified Water. Adjust with 1 N acetic acid or 6 N ammonium hydroxide to a pH between 3.0
and 4.0, using short-range pH paper as an external indicator. Dilute with water to about 35 mL, and mix.
To each tube add 1.2 mL of thioacetamide–glycerin base TS and 2 mL of pH 3.5 Acetate Buffer (see Heavy Metals á231ñ),
dilute with water to 50 mL, and mix: any color produced within 10 minutes in the tube containing the Purified Water extract of
the test bottles does not exceed that in the tube containing the Standard Lead Solution, both tubes being viewed downward
over a white surface (1 ppm in extract).
TOTAL TEREPHTHALOYL MOIETIES—Determine the absorbance of the 50 Percent Alcohol or 25 Percent Alcohol extract in a 1-cm
cell at the wavelength of maximum absorbance at about 244 nm (see Spectrophotometry and Light–Scattering á851ñ), using
as the blank the corresponding Extracting Medium blank: the absorbance of the extract does not exceed 0.150, corresponding
to not more than 1 ppm of total terephthaloyl moieties.
Determine the absorbance of the n-Heptane extract in a 1-cm cell at the wavelength of maximum absorbance at about 240
nm (see Spectrophotometry and Light-Scattering á851ñ), using as the blank the n-Heptane Extracting Medium: the absorbance of
the extract does not exceed 0.150, corresponding to not more than 1 ppm of total terephthaloyl moieties.
ETHYLENE GLYCOL—
Periodic Acid Solution—Dissolve 125 mg of periodic acid in 10 mL of water.
Dilute Sulfuric Acid—To 50 mL of water add slowly and with constant stirring 50 mL of sulfuric acid, and allow to cool to
room temperature.
Sodium Bisulfite Solution—Dissolve 0.1 g of sodium bisulfite in 10 mL of water. Use this solution within 7 days.
Disodium Chromotropate Solution—Dissolve 100 mg of disodium chromotropate in 100 mL of sulfuric acid. Protect this solu-
tion from light, and use within 7 days.
Standard Solution—Dissolve an accurately weighed quantity of ethylene glycol in water, and dilute quantitatively, and step-
wise if necessary, to obtain a solution having a known concentration of about 1 mg per mL.
Test Solution—Use the Purified Water extract.
Procedure—Transfer 1.0 mL of the Standard Solution to a 10-mL volumetric flask. Transfer 1.0 mL of the Test Solution to a
second 10-mL volumetric flask. Transfer 1.0 mL of the Purified Water Extracting Medium to a third 10-mL volumetric flask. To
each of the three flasks, add 100 µL of Periodic Acid Solution, swirl to mix, and allow to stand for 60 minutes. Add 1.0 mL of
Sodium Bisulfite Solution to each flask, and mix. Add 100 µL of Disodium Chromotropate Solution to each flask, and mix. [NOTE—
All solutions should be analyzed within 1 hour after addition of the Disodium Chromotropate Solution.] Cautiously add 6 mL of
sulfuric acid to each flask, mix, and allow the solutions to cool to room temperature. [Caution–Dilution of sulfuric acid produces
substantial heat and can cause the solution to boil. Perform this addition carefully. Sulfur dioxide gas will be evolved. Use of a fume
hood is recommended.] Dilute each solution with Dilute Sulfuric Acid to volume, and mix. Concomitantly determine the absor-
bances of the solutions from the Standard Solution and the Test Solution in 1-cm cells at the wavelength of maximum absorb-
ance at about 575 nm (see Spectrophotometry and Light-Scattering á851ñ), using as the blank the solution from the Purified Wa-
ter Extracting Medium: the absorbance of the solution from the Test Solution does not exceed that of the solution from the
Standard Solution, corresponding to not more than 1 ppm of ethylene glycol.
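The limit test works by direct absorbance comparison rather than an explicit calculation, but assuming Beer's law linearity the reading can also be expressed as an estimated concentration. A minimal sketch in Python; the helper name and the absorbance values are illustrative assumptions, not compendial text:

```python
# Illustrative sketch (not compendial text): assuming absorbance is linear in
# concentration (Beer's law), the ethylene glycol level in the Purified Water
# extract can be estimated from the ratio of the two absorbances read at
# about 575 nm against the same blank.

def estimate_ppm(a_test, a_standard, standard_ppm=1.0):
    """Estimate ethylene glycol (ppm) in the extract from the absorbance ratio."""
    if a_standard <= 0:
        raise ValueError("standard absorbance must be positive")
    return standard_ppm * a_test / a_standard

# Hypothetical readings:
ppm = estimate_ppm(a_test=0.240, a_standard=0.480)
print(f"estimated ethylene glycol: {ppm:.2f} ppm; within limit: {ppm <= 1.0}")
```

Because the Standard Solution corresponds to 1 ppm of ethylene glycol, the criterion is met whenever the estimate does not exceed 1.0 ppm.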
TEST METHODS
Apparatus—Use an IR spectrophotometer capable of correcting for the blank spectrum and equipped with a multiple inter-
nal reflectance accessory and a KRS-5 internal reflection plate.1 A KRS-5 crystal 2-mm thick having an angle of incidence of 45°
provides a sufficient number of reflections.
Specimen Preparation—Cut two flat sections representative of the average wall thickness of the container, and trim them
as necessary to obtain segments that are convenient for mounting in the multiple internal reflectance accessory. Taking care to
avoid scratching the surfaces, wipe the specimens with dry paper or, if necessary, clean them with a soft cloth dampened with
methanol, and permit them to dry. Securely mount the specimens on both sides of the KRS-5 internal reflection plate, ensur-
ing adequate surface contact. Prior to mounting the specimens on the plate, they may be compressed to thin uniform films by
exposing them to temperatures of about 177° under high pressures (15,000 psi or more).
General Procedure—Place the mounted specimen sections within the multiple internal reflectance accessory, and place the
assembly in the specimen beam of the IR spectrophotometer. Adjust the specimen position and mirrors within the accessory to
permit maximum light transmission of the unattenuated reference beam. (For a double-beam instrument, upon completing
the adjustments in the accessory, attenuate the reference beam to permit full-scale deflection during the scanning of the speci-
men.) Determine the IR spectrum from 3500 to 600 cm–1 for polyethylene and polypropylene and from 4000 to 400 cm–1 for
PET and PETG.
Thermal Analysis
General Procedure—Cut a section weighing about 12 mg, and place it in the test specimen pan. [NOTE—Intimate contact
between the pan and the thermocouple is essential for reproducible results.] Determine the thermogram under nitrogen, using
the heating and cooling conditions as specified for the resin type and using equipment capable of performing the determina-
tions as specified under Thermal Analysis á891ñ.
For Polyethylene—Determine the thermogram under nitrogen at temperatures between 40° and 200° at a heating rate
between 2° and 10° per minute followed by cooling at a rate between 2° and 10° per minute to 40°.
For Polypropylene—Determine the thermogram under nitrogen at temperatures ranging from ambient to 30° above the
melting point. Maintain the temperature for 10 minutes, then cool to 50° below the peak crystallization temperature at a rate
of 10° to 20° per minute.
For Polyethylene Terephthalate—Heat the specimen from room temperature to 280° at a heating rate of about 20° per
minute. Hold the specimen at 280° for 1 minute. Quickly cool the specimen to room temperature, and reheat it to 280° at a
heating rate of about 5° per minute.
1 The multiple internal reflectance accessory and KRS-5 plate are available from several sources, including Beckman Instruments, Inc., 2500 Harbor Blvd., Fullerton,
CA 92634, and from Perkin Elmer Corp., Main Ave., Norwalk, CT 06856.
Official text. Reprinted from First Supplement to USP38-NF33.
á661ñ Containers—Plastics / Physical Tests
For Polyethylene Terephthalate G—Heat the specimen from room temperature to 120° at a heating rate of about 20° per
minute. Hold the specimen at 120° for 1 minute. Quickly cool the specimen to room temperature, and reheat it to 120° at a
heating rate of about 10° per minute.
Biological Tests
The in vitro biological tests are performed according to the procedures set forth under Biological Reactivity Test, In Vitro á87ñ.
Components that meet the requirements of the in vitro tests are not required to undergo further testing. No plastic class des-
ignation is assigned to these materials. Materials that do not meet the requirements of the in vitro tests are not suitable for
containers for drug products.
If a plastic class designation is needed for plastics and other polymers that meet the requirements under Biological Reactivity
Test, In Vitro á87ñ, perform the appropriate in vivo test specified for Classification of Plastics under Biological Reactivity Test, In
Vivo á88ñ.
Physicochemical Tests
The following tests, designed to determine physical and chemical properties of plastics and their extracts, are based on the
extraction of the plastic material, and it is essential that the designated amount of the plastic be used. Also, the specified sur-
face area must be available for extraction at the designated temperature.
Testing Parameters—
Extracting Medium—Unless otherwise directed in a specific test below, use Purified Water (see monograph) as the Extracting
Medium, maintained at a temperature of 70° during the extraction of the Sample Preparation.
Blank—Use Purified Water where a blank is specified in the tests that follow.
Apparatus—Use a water bath and the Extraction Containers as described under Biological Reactivity Tests, In Vivo á88ñ. Pro-
ceed as directed in the first paragraph of Preparation of Apparatus under Biological Reactivity Tests, In Vivo á88ñ. [NOTE—The
containers and equipment need not be sterile.]
Sample Preparation—From a homogeneous plastic specimen, use a portion, for each 20.0 mL of Extracting Medium, equiva-
lent to 120 cm2 total surface area (both sides combined), and subdivide into strips approximately 3 mm in width and as near
to 5 cm in length as is practical. Transfer the subdivided sample to a glass-stoppered, 250-mL graduated cylinder of Type I
glass, and add about 150 mL of Purified Water. Agitate for about 30 seconds, drain off and discard the liquid, and repeat with a
second washing.
Sample Preparation Extract—Transfer the prepared Sample Preparation to a suitable extraction flask, and add the required
amount of Extracting Medium. Extract by heating in a water bath at the temperature specified for the Extracting Medium for 24
hours. Cool, but not below 20°. Pipet 20 mL of the prepared extract into a suitable container. [NOTE—Use this portion in the
test for Buffering Capacity.] Immediately decant the remaining extract into a suitably cleansed container, and seal.
Nonvolatile Residue—Transfer, in suitable portions, 50.0 mL of the Sample Preparation Extract to a suitable, tared crucible
(preferably a fused-silica crucible that has been acid-cleaned), and evaporate the volatile matter on a steam bath. Similarly
evaporate 50.0 mL of the Blank in a second crucible. [NOTE—If an oily residue is expected, inspect the crucible repeatedly dur-
ing the evaporation and drying period, and reduce the amount of heat if the oil tends to creep along the walls of the crucible.]
Dry at 105° for 1 hour: the difference between the amounts obtained from the Sample Preparation Extract and the Blank does
not exceed 15 mg.
Residue on Ignition á281ñ—[NOTE—It is not necessary to perform this test when the Nonvolatile Residue test result does not
exceed 5 mg.] Proceed with the residues obtained from the Sample Preparation Extract and from the Blank in the test for Non-
volatile Residue above, using, if necessary, additional sulfuric acid but adding the same amount of sulfuric acid to each crucible:
the difference between the amounts of residue on ignition obtained from the Sample Preparation Extract and the Blank does
not exceed 5 mg.
Heavy Metals—Pipet 20 mL of the Sample Preparation Extract, filtered if necessary, into one of two matched 50-mL color-
comparison tubes. Adjust with 1 N acetic acid or 6 N ammonium hydroxide to a pH between 3.0 and 4.0, using short-range
pH paper as an external indicator, dilute with water to about 35 mL, and mix.
Into the second color-comparison tube pipet 2 mL of Standard Lead Solution (see Heavy Metals á231ñ), and add 20 mL of the
Blank. Adjust with 1 N acetic acid or 6 N ammonium hydroxide to a pH between 3.0 and 4.0, using short-range pH paper as
an external indicator, dilute with water to about 35 mL, and mix. To each tube add 1.2 mL of thioacetamide–glycerin base TS
and 2 mL of pH 3.5 Acetate Buffer (see Heavy Metals á231ñ), dilute with water to 50 mL, and mix: any brown color produced
within 10 minutes in the tube containing the Sample Preparation Extract does not exceed that in the tube containing the
Standard Lead Solution, both tubes being viewed downward over a white surface (1 ppm in extract).
Buffering Capacity—Titrate the previously collected 20-mL portion of the Sample Preparation Extract potentiometrically to
a pH of 7.0, using either 0.010 N hydrochloric acid or 0.010 N sodium hydroxide, as required. Treat a 20.0-mL portion of the
Blank similarly: if the same titrant was required for both the Sample Preparation Extract and the Blank, the difference between
the two volumes is not greater than 10.0 mL; and if acid was required for either the Sample Preparation Extract or the Blank and
alkali for the other, the total of the two volumes required is not greater than 10.0 mL.
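The acceptance criterion has two branches, depending on whether the extract and the Blank required the same titrant to reach pH 7.0. A short sketch of that logic, with hypothetical volumes (not compendial text):

```python
# Sketch of the Buffering Capacity acceptance rule. Volumes are mL of
# 0.010 N titrant; "acid" or "base" names which titrant was required.

def buffering_capacity_passes(extract_ml, extract_titrant, blank_ml, blank_titrant):
    if extract_titrant == blank_titrant:
        # Same titrant for both: the difference of volumes must be NMT 10.0 mL.
        return abs(extract_ml - blank_ml) <= 10.0
    # Acid for one and alkali for the other: the total must be NMT 10.0 mL.
    return (extract_ml + blank_ml) <= 10.0

print(buffering_capacity_passes(6.2, "acid", 1.5, "acid"))  # difference 4.7 mL
print(buffering_capacity_passes(7.0, "acid", 4.5, "base"))  # total 11.5 mL
```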
Auxiliary packaging components are articles that are used to support or enhance container–closure systems. These articles
include, but are not limited to, pharmaceutical coil for containers. The components covered in this chapter must meet the
applicable requirements provided and the additional applicable requirements provided in other specified chapters.
PHARMACEUTICAL COIL
Pharmaceutical coil is used as a filling material in multiple-unit containers for solid oral dosage forms to prevent breakage of
tablets or capsules during shipment. The filling material should be discarded once the bottle is opened.
Solutions
Iodinated Zinc Chloride Solution—Dissolve 20 g of zinc chloride and 6.5 g of potassium iodide in 10.5 mL of Purified Wa-
ter. Add 0.5 g of iodine, and shake for 15 minutes. Filter if necessary. Protect from light.
Zinc Chloride–Formic Acid Solution—Dissolve 20 g of zinc chloride in 80 g of an 850 g/L solution of anhydrous formic
acid.
1% DuPont Fiber Identification Stain No. 4 Solution1—Dissolve 3.8 g of the powdered stain in 378.5 mL of deionized water.
Purified cotton is the hair of the seed of cultivated varieties of Gossypium hirsutum Linné, or of other species of Gossypium
(Fam. Malvaceae). It is deprived of fatty matter and bleached, and does not contain more than traces of leaf residue, pericarp,
seed coat, or other impurities. Cotton pharmaceutical coil is used in bottles of solid oral dosage forms to prevent breakage.
Identification—
A: When examined under a microscope, each fiber is seen to consist of a single cell, up to about 4 cm long and 40 µm
wide, in the form of a flattened tube with thick and rounded walls that are often twisted.
B: When treated with Iodinated Zinc Chloride Solution, the fibers become violet.
C: To 0.1 g of fibers add 10 mL of Zinc Chloride–Formic Acid Solution, heat to 40°, and allow to stand for 2 hours, shaking
occasionally: the fibers do not dissolve.
D: Weigh about 5 g of fibers, wet with water, and squeeze out the excess. Add the fibers to 100 mL of boiling 1% DuPont
Fiber Identification Stain No. 4 Solution, and gently boil for at least 1 minute. Remove the fibers, rinse well in cold water, and
squeeze out the excess moisture: the fibers become green.
Acidity or Alkalinity—Immerse about 10 g of fibers in 100 mL of recently boiled and cooled Purified Water, and allow to
macerate for 2 hours. Decant 25-mL portions of the water, with the aid of a glass rod, into each of two dishes. To one portion
add 3 drops of phenolphthalein TS, and to the other portion add 1 drop of methyl orange TS. Neither portion appears pink
when viewed against a white background.
Fluorescence—Examine a layer about 5 mm in thickness under UV light at 365 nm. It displays only a slight brownish-violet
fluorescence and a few yellow particles. It shows no intense blue fluorescence, apart from that which may be shown by a few
isolated fibers.
Residual Hydrogen Peroxide Concentration—Place 1 g of fibers in a beaker containing 30 mL of Purified Water, and stir
for 3 minutes with a stirring rod. Pour contents into another clean container (do not squeeze sample), or alternatively, remove
the fibers from the solution with clean tweezers. Remove a peroxide analytical test strip2 from its container, and immerse the
test end into the sample liquid for 2 seconds. Shake off the excess liquid, immediately insert the test strip into a suitable
reflectometry instrument, and record the reading as the residual hydrogen peroxide concentration in mg/kg (ppm).
For an alternate method, place 20 g in a beaker, add 400 mL of Purified Water, stir, add 20 mL of 20% sulfuric acid, and stir
contents. Titrate with 0.100 N potassium permanganate solution to a faint pink color that persists for 30 seconds. Record the
volume of titrant consumed, and calculate the concentration in ppm.
NMT 50 ppm is found using either method.
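The chapter leaves the permanganate calculation to the analyst. One common way to carry it out, assuming the usual 2 MnO4− : 5 H2O2 stoichiometry in acid (H2O2 equivalent weight 34.01/2 ≈ 17.0 mg/meq), is sketched below; the titrant volume is illustrative:

```python
# Hedged sketch of the titration calculation, which the chapter does not spell
# out. Assumes the usual permanganate/peroxide redox stoichiometry
# (2 MnO4- + 5 H2O2 + 6 H+ -> 2 Mn2+ + 5 O2 + 8 H2O), giving an H2O2
# equivalent weight of 34.01/2 mg per milliequivalent.

H2O2_EQ_WT_MG_PER_MEQ = 34.01 / 2

def h2o2_ppm(titrant_ml, normality=0.100, sample_g=20.0):
    """Residual H2O2 in mg/kg (ppm) of fiber from a KMnO4 titration."""
    meq = titrant_ml * normality            # milliequivalents of H2O2 oxidized
    mg_h2o2 = meq * H2O2_EQ_WT_MG_PER_MEQ   # mg of H2O2 in the sample
    return mg_h2o2 / (sample_g / 1000.0)    # mg per kg of fiber

# Illustrative: 0.50 mL of 0.100 N KMnO4 consumed for a 20-g sample
print(f"{h2o2_ppm(0.50):.1f} ppm")
```

Under these assumptions the 50 ppm limit corresponds to roughly 0.6 mL of 0.100 N permanganate for a 20-g sample.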
Loss on Drying á731ñ—Dry 5.00 g of fibers in an oven at 105° to constant weight: it loses NMT 8.0% of its weight.
Residue on Ignition á281ñ—Place 5 g of fibers in a porcelain or platinum dish, and moisten with 2 N sulfuric acid. Gently
heat the cotton until it is charred, then ignite more strongly until the carbon is completely consumed: NMT 0.20% of residue
remains.
1DuPont Fiber Identification Stain No. 4 is available from Pylam Products Co., 2175 East Cedar Street, Tempe, AZ 85281: www.pylamdyes.com.
2A suitable analysis system consisting of Reflectoquant® peroxide test strips and a RQflex® reflectometry instrument may be obtained from EMD Chemicals Inc.,
480 S. Democrat Road, Gibbstown, NJ 08027: www.emdchemicals.com.
á670ñ Auxiliary Packaging Components / Physical Tests
Water-Soluble Substances—Place 10.00 g of fibers in a beaker containing 1000 mL of Purified Water, and boil gently for
30 minutes, adding water as required to maintain the volume. Pour the water through a funnel into another vessel, and press
out the excess water from the cotton with a glass rod. Wash the cotton in the funnel with two 250-mL portions of boiling
water, pressing the cotton after each washing. Filter the combined extract and washings, and wash the filter thoroughly with
hot water. Evaporate the combined extract and washings to a small volume, transfer to a tared porcelain or platinum dish,
evaporate to dryness, and dry the residue at 105° to constant weight. The residue weighs NMT 0.35%.
Fatty Matter—Pack 10.00 g of fibers in a Soxhlet extractor provided with a tared receiver, and extract with ethyl ether for
4 hours at a rate such that the ether siphons over not less than four times per hour. The ethyl ether solution in the flask shows
no trace of blue, green, or brownish color. Evaporate the extract to dryness, and dry at 105° for 1 hour. The weight of the
residue does not exceed 0.7%.
Dyes—Pack about 10 g in a narrow percolator, and extract slowly with alcohol until the percolate measures 50 mL. When
observed downward through a column 20 cm in depth, the percolate may show a yellowish color, but not a blue or a green
tint.
Other Foreign Matter—Pinches contain no oil stains or metallic particles by visual inspection.
Rayon pharmaceutical coil is a fibrous form of bleached, regenerated cellulose, to be used as a filler in bottles of solid oral
dosage forms to prevent breakage. It consists exclusively of rayon fibers, except that a few isolated foreign fibers may be present.
[NOTE—Rayon pharmaceutical coil has been found to be a potential source of dissolution problems for gelatin capsules or
gelatin-coated tablets resulting from gelatin cross-linking.]
Identification—
A: When treated with Iodinated Zinc Chloride Solution, the fibers become violet.
B: Add 10 mL of Zinc Chloride–Formic Acid Solution to 0.1 g of fibers, heat to 40°, and allow to stand for 2 hours, shaking
occasionally: the fibers dissolve completely, except for matte rayon fibers, in which titanium dioxide particles remain.
C: Weigh about 5 g of fibers, wet with water, and squeeze out the excess. Add the fibers to 100 mL of boiling 1% DuPont
Fiber Identification Stain No. 4 Solution, and gently boil for at least 1 minute. Remove the fibers, rinse well in cold water, and
squeeze out the excess moisture: the fibers become blue-green.
Acidity or Alkalinity, Fluorescence, Fatty Matter, Dyes, and Other Foreign Matter—Proceed as directed under Cotton
Pharmaceutical Coil, except to use rayon pharmaceutical coil. Sample weight for fatty matter is 5 g and weight of residue does
not exceed 0.5%.
Loss on Drying á731ñ—Dry 5.00 g of fibers in an oven at 105° to constant weight: it loses NMT 11.0% of its weight.
Residue on Ignition á281ñ: NMT 1.50%, determined on a 5.00-g test specimen.
Acid-Insoluble Ash—To the residue obtained in the test for Residue on Ignition, add 25 mL of 3 N hydrochloric acid, and
boil for 5 minutes. Collect the insoluble matter on a tared filtering crucible, wash with hot water, ignite, and weigh: the resi-
due weighs NMT 1.25%.
Water-Soluble Substances—Proceed as directed under Cotton Pharmaceutical Coil, except to use rayon pharmaceutical
coil. The residue weighs NMT 1.0%.
Polyester pharmaceutical coil is a white odorless material, to be used as a filler in bottles of solid oral dosage forms to pre-
vent breakage.
Identification—
A: Proceed as directed under Infrared Spectroscopy in the Test Methods section. Determine the IR spectrum from 4000 to
650 cm−1 (2.5 to 15 µm). The spectrum obtained from the specimen exhibits major absorption bands only at the same
wavelengths as the spectrum of USP Polyethylene Terephthalate RS.
B: Weigh about 5 g of fibers, wet with water, and squeeze out the excess. Add the fibers to 100 mL of boiling 1% DuPont
Fiber Identification Stain No. 4 Solution, and gently boil for at least 1 minute. Remove the fibers, rinse well in cold water, and
squeeze out the excess moisture: the fibers become pale orange.
Acidity or Alkalinity—Proceed as directed under Cotton Pharmaceutical Coil, except to use polyester pharmaceutical coil.
Loss on Drying á731ñ—Dry 5.00 g of fibers in an oven at 105° to constant weight: it loses NMT 1.0% of its weight.
Residue on Ignition á281ñ: NMT 0.5%, determined on a 5.00-g test specimen.
Finish on Fibers—The finish on fibers used for processing should comply with FDA food contact regulations.
Test Methods
INFRARED SPECTROSCOPY3
Apparatus: FTIR or a double-beam spectrometer capable of scanning from 4000 to 650 cm−1 (2.5 to 15 µm).
Specimen Preparation—
Method 1 (Potassium Bromide Disk)—Use scissors to cut polyester fibers (1 to 3 mg) into short lengths (less than 1 mm
long), mix with 200 mg of powdered potassium bromide, and grind in a ball mill for 1 to 2 minutes. Transfer the mixture to a
potassium bromide disk die, and form a disk.
Method 2 (Melt Film)—Produce a film by pressing the polyester fibers, sandwiched between TFE-fluorocarbon sheets, between
heated plates.
INTRODUCTION
It is the purpose of this chapter to provide standards for the functional properties of packaging systems used for solid oral
dosage forms (SODF) and liquid oral dosage forms (LODF) for pharmaceuticals and dietary supplements. Definitions that apply
to this chapter are provided in Packaging and Storage Requirements á659ñ. The tests that follow are provided to determine the
moisture vapor transmission rate (water vapor permeation rate) and spectral transmission of plastic containers.
Test methods are provided to measure moisture vapor transmission rates that may be useful for pharmaceutical and dietary
supplement manufacturers to determine the level of barrier protection provided by packaging systems for SODF. Additional
methods are provided to classify packaging systems for SODF and LODF that are repackaged by organizations or dispensed
on prescription by pharmacists in single-unit and multiple-unit containers (Table 1). There may
be additional packaging systems where the test methods in this section could be applied; however, any deviation should be
described. If other methods are used to measure moisture vapor transmission rate, these methods should be described in suffi-
cient detail to justify their use.
Table 1. Moisture Vapor Transmission Test Methods for Packaging Systems

Solid Oral Dosage Forms—
Barrier Protection Determination—Application: multiple-unit containers with seal intact or in the broached state, and single-unit or unit-dose containers. Users: manufacturers.
Classification for Multiple-Unit Containers—Application: multiple-unit containers with closure applied and seal intact or in the broached state. Users: manufacturers, packagers, repackagers, pharmacies.
Classification for Single-Unit Containers and Unit-Dose Containers—Application: single-unit and unit-dose containers in the sealed state. Users: manufacturers, packagers, repackagers, pharmacies.

Liquid Oral Dosage Forms—
Classification for Multiple-Unit Containers and Unit-Dose Containers—Application: multiple- and single-unit containers. Users: manufacturers, packagers, repackagers, pharmacies.
Definitions
Blister—Formed, lidded, and sealed plastic or foil dome that contains the capsule or tablet (usually a single-unit or unit-
dose).
Low-barrier blister—Blisters made from low-barrier materials formed and sealed so that the moisture vapor transmission
rate when tested at 40°/75% relative humidity (RH) is greater than 1.0 mg/cavity-day.
High-barrier blister—Blisters made from high-barrier material, formed and sealed so that the moisture vapor transmis-
sion rate when tested at 40°/75% RH is less than 1.0 mg/cavity-day.
Ultra-high barrier blister—Blisters made from ultra-high-barrier material, formed and sealed so that the moisture vapor
transmission rate when tested at 40°/75% RH is less than 0.01 mg/cavity-day.
Blister card—A contiguous group of blisters formed and sealed with lid in place. The number of blisters per card com-
monly ranges from one to ten but may be more. The blister card may sometimes be referred to as a packaging system.
Cavity—Formed, lidded, and sealed plastic or foil dome (see Blister).
3 Additional information on fiber identification methods may be found in “Standard Test Methods for Identification of Fibers in Textiles”. Current version of ASTM
Method D276, published by ASTM International, 100 Barr Harbor Drive, P.O. Box C700, West Conshohocken, PA 19428-2959. www.astm.org.
á671ñ Containers—Performance Testing / Physical Tests
Moisture vapor transmission rate—The steady state moisture vapor transmission in unit time through a packaging sys-
tem, under specific conditions of temperature and humidity. These test methods use gravimetric measurement to deter-
mine the rate of weight gain as a result of water vapor transmission into the packaging system and subsequent uptake by
a desiccant enclosed within the packaging system.
Test specimen (or specimen)—For multiple-unit containers, the bottle is the test specimen; and for single-unit or unit-
dose containers, the blister card containing multiple blister cavities is the test specimen. For blisters, more than one card
(or specimen) may be grouped into a test unit for conducting the test.
Test unit—For multiple-unit containers, the bottle is the test unit as well as being the test specimen and for single-unit or
unit-dose containers, the test unit is a group of test specimens (blister cards) processed together for temperature and hu-
midity exposure and for weighing at each time point. The purpose of the test unit for single-unit or unit-dose containers
is to gain the advantage of additive weight gain resulting from more blister cavities than are present on a single card. The
test unit, when applied to bottles, is used to maintain congruence of naming among the three test methods.
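The three blister barrier classes defined earlier differ only in their moisture vapor transmission threshold at 40°/75% RH. As an illustration (not compendial text), the thresholds can be written as a small classifier; note that the definitions assign no class at exactly 1.0 mg/cavity-day, so this sketch places that boundary with low barrier, mirroring the "greater than 1.0" wording:

```python
# Sketch of the blister classification by measured moisture vapor transmission
# rate (mg/cavity-day at 40 deg C / 75% RH), per the definitions above.
# Boundary handling at exactly 1.0 is an assumption, not stated in the text.

def classify_blister(rate_mg_per_cavity_day):
    if rate_mg_per_cavity_day < 0.01:
        return "ultra-high barrier"
    if rate_mg_per_cavity_day < 1.0:
        return "high barrier"
    return "low barrier"

print(classify_blister(0.005))  # ultra-high barrier
print(classify_blister(0.3))    # high barrier
print(classify_blister(2.4))    # low barrier
```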
Barrier Protection Determination for Packaging Systems for Solid Oral Dosage Forms
This section describes moisture vapor transmission test methods for multiple-unit containers (Method 1), high barrier single-
unit and unit-dose containers (Method 2), and low barrier single-unit and unit-dose containers (Method 3) used by pharma-
ceutical manufacturers to package SODF. The purpose of this test method is to obtain reliable and specific moisture vapor
transmission rates that can be used to discriminate among barrier performance of packaging systems used for regulated arti-
cles; the method is based upon ASTM method D7709.1
This method contains the following attributes:
a. Reports a specific moisture vapor transmission value for a container rather than a classification
b. Provides sufficient sensitivity and precision to allow differentiation among moisture barrier performance for containers
c. Uses test conditions that are the same as those used for accelerated stability testing of the primary packaging of
regulated articles (typically 40°/75% RH).
EQUIPMENT
DESICCANT
Method 1: The desiccant is anhydrous calcium chloride in granular form. Other desiccants, such as a molecular sieve or silica
gel, may be suitable. If anhydrous calcium chloride is used, pre-dry at 215 ± 5° for 7¼ ± ¼ h to ensure that any hexahydrate
present is fully converted to the anhydrate. Cool the desiccant in a desiccator for at least 2 h before use.
Methods 2 and 3: The desiccant is silica gel molded in a form to fit the size and shape of the blister cavity used. Other desic-
cants may be suitable, for example, a molecular sieve. If silica gel is used, pre-dry in a circulating hot air oven at one of two
conditions: 155 ± 5° for 3¼ ± ¼ h or 150 ± 5° for 4¼ ± ¼ h. If a molecular sieve is used, pre-dry in a muffle furnace at
595 ± 25°. Dry the 4A and 3A sieves for 3¼ ± ¼ h; dry the 13X sieve for 5¼ ± ¼ h. Cool the desiccant in a desiccator for at
least 2 h before use. [NOTE—It has been shown2 that anhydrous calcium chloride may contain calcium chloride hexahydrate,
which loses water only when the temperature reaches 200°.]
PROCEDURE
Method 1: Use 15 multiple-unit containers and 15 closures representative of the system to be tested. Prepare the test speci-
mens by filling each multiple-unit container two-thirds full with desiccant; then, for screw-type closures, apply the closure using the
torque that is within the range of tightness specified in Table 2. For other closure types, apply according to the intended
method. Ensure that a proper seal has been made with the intended membrane to the land area of the bottle finish. Identify
each multiple-unit container with indelible ink. Do not use a label. If there is a need to increase the precision of the method,
the user can test the system without the closure as long as an impervious seal remains on the container.
1 ASTM D7709. Standard Test Methods for Measuring Water Vapor Transmission Rate (WVTR) of Pharmaceutical Bottles and Blisters, published by ASTM International, 100 Barr Harbor Drive, P.O. Box C700, West Conshohocken, PA 19428-2959.
2 Chen, Yisheng, and Yanxia Li. Determination of water vapor transmission rate (WVTR) of HDPE bottles for pharmaceutical products. International Journal of Pharmaceutics.
If desired, weigh each multiple-unit container at ambient temperature and RH. Record this weight for time zero, but do not
use it in the calculation of permeation. Place all containers in the test chamber (40°/75% RH) within 1 h of weighing. Weigh all
multiple-unit containers at time intervals of 7 days ± 1 h. Weigh the multiple-unit containers at 7, 14, 21, 28, and 35 days to
get steady-state data points. (The time interval from time 0 to day 7 is the period of equilibration.) Prior to weighing at each
time interval, equilibrate the containers for about 30 min at the weighing temperature and RH. Limit the time out of the
chamber to less than 2 h. Record the weights in an appropriate manner for later computation of the regression line.
Table 2. Torque Applicable to Screw-Type Containers
Closure Diametera (mm) | Suggested Tightness Range with Manually Applied Torqueb (inch-pounds) | Suggested Tightness Range with Manually Applied Torqueb (Newton-meters)
8 5 0.56
10 6 0.68
13 8 0.90
15 5–9 0.56–1.02
18 7–10 0.79–1.13
20 8–12 0.90–1.36
22 9–14 1.02–1.58
24 10–18 1.13–2.03
28 12–21 1.36–2.37
30 13–23 1.47–2.60
33 15–25 1.69–2.82
38 17–26 1.92–2.94
43 17–27 1.92–3.05
48 19–30 2.15–3.39
53 21–36 2.37–4.07
58 23–40 2.60–4.52
63 25–43 2.82–4.86
66 26–45 2.94–5.08
70 28–50 3.16–5.65
83 32–65 3.62–7.35
86 40–65 4.52–7.35
89 40–70 4.52–7.91
100 45–70 5.09–7.91
110 45–70 5.09–7.91
120 55–95 6.22–10.74
132 60–95 6.78–10.74
a The torque designated for the next larger closure diameter is to be applied in testing containers having a closure diameter intermediate to the diameters listed.
b A suitable apparatus is available from SecurePak, PO Box 905, Maumee, Ohio 43552-0905; www.secure-pak.com. MRA Model with indicators on both the
removal and application sides available in the following ranges: 1) 0–25 inch lbs., read in 1-inch lb. increments, 2) 0–50 inch lbs., read in 2-inch lb. increments,
and 3) 0–100 inch lbs., read in 5-inch lb. increments. For further detail regarding instructions, reference may be made to “Standard Test Method for Application
and Removal Torque of Threaded or Lug-Style Closures” ASTM Method D3198, published by the ASTM International, 100 Barr Harbor Drive, P.O. Box C700,
West Conshohocken, PA 19428-2959.
Method 2: Use 10 test units for this method. Provide a minimum of 10 blister cavities for each test unit. If the card contains
fewer than 10 cavities, bundle the cards to form a single test unit of at least 10 cavities. This is required to provide sufficient
weight gain at each time interval. Fill with pre-dried desiccant, and seal the blisters on equipment that is capable of correctly
filling and sealing the blister. The desiccant should fill the cavity, and the total weight of desiccant shall be sufficient to avoid
saturation of the desiccant before completion of the test. Fill the blisters in a low-humidity
atmosphere (as low as possible, but not greater than 50% RH). Do not expose the desiccants to room humidity for more than
30 min before sealing. Identify each test specimen with indelible ink; do not use a label. A test unit consists of one or more
test specimens; bundle test specimens into test units.
Weigh each test unit at ambient temperature and RH. Record this weight for time zero. Place all test units in the test cham-
ber (40°/75% RH) within 1 h of weighing. Weigh all test units at time intervals of 7 days ±1 h. Weigh the test units at 7, 14,
21, 28, and 35 days to get 5 steady-state data points. (The time interval from time 0 to day 7 is the period of equilibration.)
Prior to weighing at each time interval, equilibrate the containers for 30 ± 5 min at the controlled weighing temperature and
RH. Limit the time out of the chamber to less than 2 h. Record the weights in an appropriate manner for later computation of
the regression line.
Official text. Reprinted from First Supplement to USP38-NF33 (á671ñ Containers—Performance Testing / Physical Tests).
Ultra-high barrier blisters may not show the full measure of precision and sensitivity this method can provide. For ultra-high
barrier blisters, test units should have more than 10 cavities, but NMT 30 cavities. Examples are foil-foil blisters or very small
blisters formed from other materials. An alternative approach is to double or triple the length of weighing intervals to achieve
at least a 6-mg weight gain per time interval by the test specimen.
Method 3: Prepare the test units as directed for Method 2. Place all test units in the test chamber (40°/75% RH) within 1 h of
weighing. Weigh the test units at the end of 2 days (48 ± 1 h). At this time, the difference in weight (the weight gain) is divi-
ded by the number of blisters and days (2) in each test unit and this is taken as the moisture vapor transmission rate in mg/
blister/day. The number of blisters tested depends on the barrier characteristics of the material, the size of the blister, and the
sensitivity of the balance used in the test. For this method, the requirement of five consecutive weighings is waived because
the desiccant quickly becomes saturated when packed in a low-barrier package and stored at 40°/75% RH. [NOTE—For single-
unit and unit-dose low barrier containers, the weight gain after the second day displays a curvilinear profile typical of ap-
proaching saturation of the desiccant. To obtain five weighings within 2 days is not viable and is likely to increase variability.
Methods 2 and 3 may require that the blister cards be bundled in multiples to achieve periodic weight gains of sufficient mag-
nitude to use the balance sensitivity. When bundled, these cards or test specimens are called test units. The weight gain in
each weighing period shall be at least 20 times the sensitivity of the balance, and the balance sensitivity is 3 times the balance
precision. In other words, the minimal weight gain per time interval should be at least 60 times the balance precision.]
CALCULATIONS
For Methods 1 and 2, perform the regression analysis for each test unit. Typically, the initial data point (at day 0) is not inclu-
ded in fitting the regression line. The slope of the regression line is the moisture vapor transmission rate of each test unit. For
Method 1, the slope is the moisture vapor transmission rate for the corresponding multiple-unit container. For Method 2, the
moisture vapor transmission rate of each blister cavity is calculated by dividing the slope by the number of cavities in each test
unit.
For Method 3, calculate the weight gain in mg/day from day 0 to day 2 using the 10 test units. The moisture vapor transmis-
sion rate of each blister is calculated by dividing the weight gain by 2 (for 2 days) and the number of blisters in each test unit.
Regression Equation:
W = I + MT
Calculations:
Slope (M) = Σ(Wi − W̄)(Ti − T̄) / Σ(Ti − T̄)²
Intercept (I) = W̄ − M × T̄
where
M = regression line slope
N = number of data points (each point consists of a weight and a time)
W = measured weight
W̄ = overall weight mean
T = time point
T̄ = overall time point mean
I = regression line intercept (the point where the regression line intersects the vertical axis)
Σ(Wi − W̄)(Ti − T̄) = sum of cross-products (for each of the N data points, subtract the overall weight mean from the weight
value and the overall time mean from the time value, and multiply the two differences to obtain a cross-product; then sum all N
cross-products)
Σ(Ti − T̄)² = sum of squared deviations (for each of the N data points, subtract the overall time mean from the time
value and square the difference; then sum all N squared differences)
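As an illustrative aid (not part of the official text), the slope computation for Methods 1 and 2 can be sketched in Python with hypothetical weighings:

```python
def mvtr_slope(times, weights):
    """Least-squares slope (mg/day) of test-unit weight versus time.
    times: time points in days (the day-0 point is excluded per the chapter);
    weights: corresponding test-unit weights in mg."""
    n = len(times)
    t_mean = sum(times) / n                  # overall time mean
    w_mean = sum(weights) / n                # overall weight mean
    s_tw = sum((t - t_mean) * (w - w_mean)   # sum of cross-products
               for t, w in zip(times, weights))
    s_tt = sum((t - t_mean) ** 2 for t in times)  # sum of squared deviations
    return s_tw / s_tt                       # slope M = MVTR in mg/day

# Hypothetical steady-state weighings at days 7-35
days = [7, 14, 21, 28, 35]
wts = [100000.0, 100014.1, 100027.8, 100042.2, 100056.0]
m = mvtr_slope(days, wts)  # approximately 2.0 mg/day for these data
```

For Method 2, the per-cavity rate would then be this slope divided by the number of cavities in the test unit.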
RESULTS
Method 1: Report the moisture vapor transmission rate as the average value, in mg/day per container, and the standard de-
viation of the 15 slopes. Properly describe the container closure system tested.
Method 2: Report the moisture vapor transmission rate as the average value, in mg/day per cavity, and the standard devia-
tion of the 10 test unit slopes. Properly describe the container closure system tested.
Method 3: Report the moisture vapor transmission rate as the average value from day 0 to day 2, in mg/day per blister, and
the standard deviation of the 10 test unit slopes. Properly describe the container closure system tested.
Packaging System Classification for Multiple-Unit Containers for Solid Oral Dosage Forms
The following procedure and classification scheme is provided to evaluate the moisture vapor transmission characteristics of
multiple-unit containers. The information gathered should be used to make an informed judgment regarding the suitability of
the packaging system for SODF.
DESICCANT
Place a quantity of 4- to 8-mesh, anhydrous calcium chloride3 in a shallow container, taking care to exclude any fine pow-
der, dry at 110° for 1 h, and cool in a desiccator.
PROCEDURE
Select 12 containers of a uniform size and type, clean the sealing surfaces with a lint-free cloth, and close and open each
container 30 times. Apply the closure firmly and uniformly each time the container is closed. Close screw-capped containers
with a torque that is within the range of tightness specified in Table 2. Add Desiccant to 10 of the containers, designated test
containers, filling each to within 13 mm of the closure if the container volume is 20 mL or more, or filling each to two-thirds of
capacity if the container volume is less than 20 mL. If the interior of the container is more than 63 mm in depth, an inert filler
or spacer may be placed in the bottom to minimize the total weight of the container and Desiccant; the layer of Desiccant in
such a container shall be not less than 5 cm in depth. Close each container immediately after adding Desiccant, applying the
torque designated in Table 2 when closing screw-capped containers. To each of the remaining two containers, designated
controls, add a sufficient number of glass beads to attain a weight approximately equal to that of each of the test containers,
and close, applying the torque designated in Table 2 when closing screw-capped containers. Record the weight of the individ-
ual containers to the nearest 0.1 mg if the container volume is less than 20 mL; to the nearest mg if the container volume is 20
mL or more but less than 200 mL; or to the nearest centigram (10 mg) if the container volume is 200 mL or more; and store at
75 ± 3% RH and a temperature of 23 ± 2°. [NOTE—A saturated system of 35 g of sodium chloride with each 100 mL of water
placed in the bottom of a desiccator maintains the specified humidity. Other methods may be employed to maintain these
conditions.] After 336 ± 1 h (14 days), record the weight of the individual containers in the same manner. Completely fill five
empty containers of the same size and type as the containers under test with water or a noncompressible, free-flowing solid,
such as well-tamped fine glass beads, to the level indicated by the closure surface when in place. Transfer the contents of each
to a graduated cylinder, and determine the average container volume, V, in mL. Calculate the rate of moisture vapor transmis-
sion, in mg/day/L:
(1000/14V)[(TF − TI) − (CF − CI)]
in which (TF − TI) is the difference, in mg, between the final and initial weights of each test container, and (CF − CI) is the
difference, in mg, between the average final and average initial weights of the two controls.
3 Suitable 4- to 8-mesh, anhydrous calcium chloride is available commercially as Item JT1313-1 from VWR International (www.vwr.com; telephone
1-800-952-5000).
Fit the containers with impervious seals obtained by heat-sealing the bottles with an aluminum foil–polyethylene laminate or
other suitable seal.4 Test as directed in the Procedure section.
Classification: High-density polyethylene containers meet the requirements if the moisture vapor transmission exceeds 10
mg/day/L in NMT 1 of the 10 test containers and exceeds 25 mg/day/L in none of them. Low-density polyethylene containers
meet the requirements if the moisture vapor transmission exceeds 20 mg/day/L in NMT 1 of the 10 test containers and ex-
ceeds 30 mg/day/L in none of them.
Polypropylene containers meet the requirements if the moisture vapor transmission exceeds 15 mg/day/L in NMT 1 of the
10 test containers and exceeds 25 mg/day/L in none of them.
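As an illustrative aid (not part of the official text), the rate calculation and the classification rule above can be sketched in Python; the container volume and weights are hypothetical:

```python
def mvt_rate(v_ml, tf, ti, cf, ci):
    """Moisture vapor transmission, in mg/day/L, over the 14-day test.
    v_ml: average container volume (mL); tf, ti: final and initial weights of
    one test container (mg); cf, ci: average final and initial control weights (mg)."""
    return (1000.0 / (14.0 * v_ml)) * ((tf - ti) - (cf - ci))

def meets_requirements(rates, limit_single, limit_all):
    """NMT 1 of the rates may exceed limit_single, and none may exceed limit_all."""
    return sum(r > limit_single for r in rates) <= 1 and all(r <= limit_all for r in rates)

# Hypothetical 100-mL HDPE containers; apparent weight gains (mg) over 14 days
gains = [10.0, 11.2, 9.8, 12.5, 10.4, 9.9, 11.0, 10.8, 16.0, 10.2]
rates = [mvt_rate(100.0, 50120.0 + g, 50120.0, 20000.5, 20000.0) for g in gains]
hdpe_ok = meets_requirements(rates, 10, 25)  # HDPE limits from the chapter
```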
Packaging System Classification for Single-Unit Containers and Unit-Dose Containers for Solid
Oral Dosage Forms
The following procedure and classification scheme are provided to evaluate the moisture vapor transmission characteristics
of single-unit containers and unit-dose containers. The information gathered should be used to make an informed judgment
regarding the suitability of the packaging system for SODF.
DESICCANT
Dry suitable desiccant pellets5 at 110° for 1 h prior to use, and cool in a desiccator. Use pellets weighing approximately 400
mg each and having a diameter of approximately 8 mm. [NOTE—If necessary because of limited unit-dose container size, pel-
lets weighing less than 400 mg each and having a diameter of less than 8 mm may be used.]
PROCEDURE
Method 1: Seal NLT 10 unit-dose containers with one pellet in each, and seal 10 additional, empty unit-dose containers to
provide the controls, using finger cots or padded forceps to handle the sealed containers. Number the containers, and record
the individual weights6 to the nearest mg. Weigh the controls as a unit, and divide the total weight by the number of controls
to obtain the average. Store all of the containers at 75 ± 3% RH and at a temperature of 23 ± 2°. [NOTE—A saturated system of
35 g of sodium chloride with each 100 mL of water placed in the bottom of a desiccator maintains the specified humidity.
Other methods may be employed to maintain these conditions.] After a 24-h interval, and at each multiple thereof (see Classi-
fication), remove the containers from the chamber, and allow them to equilibrate for 15 to 60 min in the weighing area.
Again, record the weight of the individual containers and the combined controls in the same manner. [NOTE—If any indicating
pellets turn pink during this procedure, or if the pellet weight increase exceeds 10%, terminate the test, and regard only earlier
determinations as valid.] Return the containers to the humidity chamber. Calculate, to two significant figures, the rate of mois-
ture vapor transmission, in mg/day, of each container taken:
(1/N)[(WF − WI) − (CF − CI)]
N = number of days expired in the test period (beginning after the initial 24-h equilibration period)
WF = final weight of each test container (mg)
WI = initial weight of each test container (mg)
CF = average final weight of the controls (mg)
CI = average initial weight of the controls (mg)
[NOTE—Where the moisture vapor transmission rates measured are less than 5 mg/day, and where the controls are observed
to reach equilibrium within 7 days, the individual moisture vapor transmission rates may be determined more accurately by
using the 7-day test container and control container weights as WI and CI, respectively, in the calculation. In this case, a suita-
ble test interval for Class A (see Classification) would be NLT 28 days following the initial 7-day equilibration period (a total of
35 days).]
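As an illustrative aid (not part of the official text), the Method 1 calculation can be sketched in Python with hypothetical weights:

```python
def unit_dose_mvtr(n_days, wf, wi, cf, ci):
    """MVTR, in mg/day, of one unit-dose container (Method 1), reported to two
    significant figures. n_days: days expired after the initial 24-h
    equilibration; wf, wi: final and initial container weights (mg);
    cf, ci: average final and initial control weights (mg)."""
    rate = (1.0 / n_days) * ((wf - wi) - (cf - ci))
    return float(f"{rate:.2g}")  # two significant figures, per the chapter

# Hypothetical container weighed 7 days after the equilibration period
r = unit_dose_mvtr(7, 1258.5, 1234.0, 900.3, 900.0)  # 3.5 mg/day
```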
Method 2: Use this procedure for packs (e.g., punch-out cards) that incorporate a number of separately sealed unit-dose
containers or blisters. Seal a sufficient number of packs, such that NLT four packs and a total of NLT 10 unit-dose containers or
blisters filled with one pellet in each unit are tested. Seal a corresponding number of empty packs, each pack containing the
same number of unit-dose containers or blisters as used in the test packs, to provide the controls. Store all of the containers at
4 A suitable laminate for sealing has, as the container layer, polyethylene of NLT 0.025 mm (0.001 in) and a second layer of aluminum foil of NLT 0.018 mm
(0.0007 in), with additional layers of suitable backing materials. A suitable seal can also be obtained by using glass plates and a sealing wax consisting of 60% of
refined amorphous wax and 40% of refined crystalline paraffin wax.
5 Suitable moisture-indicating desiccant pellets are available commercially from sources such as Unit Dose Supply, P.O. Box 104, Ringoes, NJ 08551-0104.
6 […] (see Prescription Balances and Volumetric Apparatus á1176ñ). The use of an analytical balance on which weights can be recorded to 4 or 5 decimal places may permit more precise characterization between containers and/or shorter test periods.
75 ± 3% RH and at a temperature of 23 ± 2°. [NOTE—A saturated system of 35 g of sodium chloride with each 100 mL of water
placed in the bottom of a desiccator maintains the specified humidity. Other methods may be employed to maintain these
conditions.] After 24 h, and at each multiple thereof (see Classification), remove the packs from the chamber, and allow them
to equilibrate for about 45 min. Record the weights of the individual packs, and return them to the chamber. Weigh the con-
trol packs as a unit, and divide the total weight by the number of control packs to obtain the average empty pack weight.
[NOTE—If any indicating pellets turn pink during the procedure, or if the average pellet weight increase in any pack exceeds
10%, terminate the test, and regard only earlier determinations as valid.] Calculate, to two significant figures, the average rate
of moisture vapor transmission, in mg/day, for each unit-dose container or blister in each pack taken:
[1/(N × X)][(WF − WI) − (CF − CI)]
N = number of days expired in the test period (beginning after the initial 24-h equilibration period)
X = number of separately sealed units per pack
WF = final weight of each test pack (mg)
WI = initial weight of each test pack (mg)
CF = average final weight of the control packs (mg)
CI = average initial weight of the control packs (mg)
Using the Desiccant stated for Method 1 and Method 2, the test and control containers or packs are weighed after every 24 h.
Suitable test intervals for the final weighings, WF and CF, are as follows: 24 h for Class D; 48 h for Class C; 7 days for
Class B; and NLT 28 days for Class A.
Classification: The individual unit-dose containers as tested in Method 1 are designated as follows: Class A if not more than 1
of 10 containers tested exceeds 0.5 mg/day in moisture vapor transmission rate and none exceeds 1 mg/day; Class B if NMT 1
of 10 containers tested exceeds 5 mg/day and none exceeds 10 mg/day; Class C if NMT 1 of 10 containers tested exceeds 20
mg/day and none exceeds 40 mg/day; and Class D if the containers tested meet none of the moisture vapor transmission rate
requirements.
The packs as tested in Method 2 are designated as follows: Class A if no pack tested exceeds 0.5 mg/day in average blister
moisture vapor transmission rate; Class B if no pack tested exceeds 5 mg/day in average blister moisture vapor transmission
rate; Class C if no pack tested exceeds 20 mg/day in average blister moisture vapor transmission rate; and Class D if the packs
tested meet none of the above average blister moisture vapor transmission rate requirements.
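As an illustrative aid (not part of the official text), the Method 1 classification rule can be sketched in Python:

```python
def classify_method1(rates):
    """Class designation for 10 unit-dose containers tested by Method 1.
    Each class allows NMT 1 of the 10 rates above the first limit and none
    above the second; containers meeting no limits are Class D."""
    limits = [("A", 0.5, 1.0), ("B", 5.0, 10.0), ("C", 20.0, 40.0)]
    for label, limit_single, limit_all in limits:
        exceeding = sum(r > limit_single for r in rates)
        if exceeding <= 1 and all(r <= limit_all for r in rates):
            return label
    return "D"
```

For example, nine containers at 0.3 mg/day plus one at 0.8 mg/day satisfy Class A (one exceeds 0.5 mg/day, none exceeds 1 mg/day).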
Packaging System Classification for Multiple-Unit Containers and Unit-Dose Containers for Liquid
Oral Dosage Forms
The following procedure and classification scheme are provided to evaluate the moisture vapor transmission characteristics
of multiple-unit containers. The information gathered should be used to make an informed judgment regarding the suitability
of the packaging system for LODF. [NOTE—Determine the weights of individual container–closure systems (bottle, inner seal, if
used, and closure) both as tare weights and fill weights, to the nearest 0.1 mg if the bottle capacity is less than 200 mL; to the
nearest mg if the bottle capacity is 200 mL or more but less than 1000 mL; or to the nearest centigram (10 mg) if the bottle
capacity is 1000 mL or more.]
Procedure
Select 12 bottles of a uniform size and type, and clean the sealing surfaces with a lint-free cloth. Fit each bottle with a seal,
closure liner (if applicable), and closure. Number each container–closure system, and record the tare weight.
Remove the closures and, using a pipet, fill 10 bottles with water to the fill capacity. Fill two containers with glass beads, to
the same weight as the filled test containers. If using screw closures, apply a torque that is within the range specified in Table
2, and store the sealed containers at a temperature of 25 ± 2° and a relative humidity of 40 ± 2%. After 336 ± 1 h (14 days),
record the weight of the individual containers, and calculate the water weight loss rate, in percent per year, for each bottle
taken:
{[(W1i − WT) − (W14i − WT)] − (WC1 − WC14)} × (365/14) × [100/(W1i − WT)]
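As an illustrative aid (not part of the official text), the calculation can be sketched in Python, reading the formula as the control-corrected 14-day water loss, annualized and expressed as a percentage of the initial fill weight (W1i − WT); all weights below are hypothetical:

```python
def water_loss_percent_per_year(w1i, w14i, wt, wc1, wc14):
    """Water weight loss rate, in percent per year, for one bottle.
    w1i, w14i: initial and 14-day weights of the filled test bottle (mg);
    wt: tare weight of the bottle (mg); wc1, wc14: initial and 14-day weights
    of the glass-bead controls (mg)."""
    loss_14d = (w1i - wt) - (w14i - wt) - (wc1 - wc14)  # corrected 14-day loss
    return loss_14d * (365.0 / 14.0) * (100.0 / (w1i - wt))

# Hypothetical 100-g fill: 55 mg apparent loss, controls drifted by -1 mg
rate = water_loss_percent_per_year(150000.0, 149945.0, 50000.0, 80000.0, 80001.0)
# approximately 1.46% per year
```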
SPECTRAL TRANSMISSION
Apparatus7
Use a spectrophotometer of suitable sensitivity and accuracy, adapted for measuring the amount of light transmitted by
plastic materials used for pharmaceutical containers. In addition, the spectrophotometer should be capable of measuring and
recording light transmitted in diffused as well as parallel rays.
Procedure
Select sections to represent the average wall thickness. Cut circular sections from two or more areas of the container, and
trim them as necessary to give segments of a size convenient for mounting in the spectrophotometer. After cutting, wash and
dry each specimen, taking care to avoid scratching the surfaces. If the specimen is too small to cover the opening in the speci-
men holder, mask the uncovered portion of the opening with opaque paper or masking tape, provided that the length of the
specimen is greater than that of the slit in the spectrophotometer. Immediately before mounting in the specimen holder, wipe
the specimen with lens tissue. Mount the specimen with the aid of a tacky wax, or by other convenient means, taking care to
avoid leaving fingerprints or other marks on the surfaces through which light must pass. Place the section in the spectropho-
tometer with its cylindrical axis parallel to the plane of the slit and approximately centered with respect to the slit. When prop-
erly placed, the light beam is normal to the surface of the section, and reflection losses are at a minimum.
Continuously measure the transmittance of the section with reference to air in the spectral region of interest with a record-
ing instrument or at intervals of about 20 nm with a manual instrument, in the region of 290–450 nm.
Limits
The observed spectral transmission does not exceed the limits given in Table 3 for containers intended for parenteral use.
The observed spectral transmission for plastic containers for products intended for oral or topical administration does not ex-
ceed 10% at any wavelength in the range 290–450 nm.
Table 3. Limits for Plastic Classes I to VI
Nominal Size (in mL) | Maximum Percentage of Spectral Transmission at Any Wavelength between 290 and 450 nm
1 50
2 45
5 40
10 35
20 30
50 15
[NOTE—Any container of a size intermediate to those listed above exhibits a spectral transmission not greater than that of the
next larger size container listed in Table 3. For containers larger than 50 mL, the limits for 50 mL apply.]
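As an illustrative aid (not part of the official text), the Table 3 lookup, including the next-larger-size rule from the note, can be sketched in Python:

```python
import bisect

# Table 3: nominal size (mL) -> maximum % spectral transmission, 290-450 nm
LIMITS = [(1, 50), (2, 45), (5, 40), (10, 35), (20, 30), (50, 15)]

def transmission_limit(nominal_ml):
    """Limit for a parenteral container of the given nominal size; a size
    intermediate to those listed takes the limit of the next larger listed
    size, and sizes above 50 mL use the 50-mL limit."""
    sizes = [s for s, _ in LIMITS]
    i = bisect.bisect_left(sizes, nominal_ml)
    if i == len(sizes):           # larger than 50 mL
        return LIMITS[-1][1]
    return LIMITS[i][1]

# transmission_limit(3) -> 40 (next larger listed size is 5 mL)
```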
á695ñ CRYSTALLINITY
This test is provided to determine compliance with the crystallinity requirement where stated in the individual monograph
for a drug substance.
Procedure—A detailed test procedure is described under Optical Microscopy á776ñ.
7 For further details regarding apparatus and procedures, reference may be made to the following publications of ASTM International, 100 Barr Harbor Drive, West
Conshohocken, PA 19428-2959: “Standard Test Method of Test for Haze and Luminous Transmittance of Transparent Plastics”, ASTM Method D1003-11e1;
“Standard Practice for Computing the Colors of Objects by Using the CIE System”, ASTM Method E308-08.
á698ñ DELIVERABLE VOLUME
PURPOSE
The following tests are designed to provide assurance that oral liquids will, when transferred from the original container,
deliver the volume of dosage form that is declared on the label.
SCOPE
These tests are applicable to products that are dispensed by pouring from the container. The tests apply whether the prod-
ucts are supplied as liquid preparations or are constituted from solids upon the addition of a designated volume of a specific
diluent. They are not required for an article packaged in single-unit containers when the monograph
includes the test for Uniformity of Dosage Units á905ñ.
DENSITY DETERMINATION
Because of the tendency of oral liquids to entrain air when shaken or transferred, a more accurate method for determining
the delivered volume is to first determine the delivered mass, and then, using the density of the material, to convert the mass
to delivered volume. In order to do that, a determination of the density of the material is required. The following is one meth-
od to determine density:
1. Tare a 100-mL volumetric flask containing 50.0 mL of water.
2. Add approximately 25 g of well-shaken product, and gently swirl the contents to mix.
3. Reweigh the flask.
4. From a buret, add an accurately measured amount of water to bring the flask contents to volume while gently swirling
the contents of the flask. Record the volume taken from the buret.
5. Calculate the density of the sample:
W/V
in which W is the weight, in g, of the material taken; and V is 50.0 mL minus the volume, in mL, of water necessary to adjust
the contents of the flask to volume. Other methods to determine the density may be employed depending on the formulation
(e.g., substantially nonaqueous formulations).
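As an illustrative aid (not part of the official text), the density calculation from the flask procedure above can be sketched in Python with hypothetical readings:

```python
def sample_density(w_sample_g, buret_volume_ml):
    """Density (g/mL) from the tared 100-mL flask method. w_sample_g: grams of
    well-shaken product added to the flask containing 50.0 mL of water;
    buret_volume_ml: water added from the buret to bring the contents to
    volume. V = 50.0 mL minus the buret volume is the sample volume."""
    v_sample = 50.0 - buret_volume_ml
    return w_sample_g / v_sample

# Hypothetical run: 25.0 g of product required 30.0 mL from the buret
d = sample_density(25.0, 30.0)  # 1.25 g/mL
```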
TEST PREPARATIONS
For the determination of deliverable volume, select NLT 30 containers, and proceed as follows for the dosage form designa-
ted.
Oral Solutions and Oral Suspensions—Shake the contents of 10 containers individually.
Powders That Are Labeled to State the Volume of Oral Liquid That Results When the Powder Is Constituted with the
Volume of Diluent Stated in the Labeling—Constitute 10 containers with the volume of diluent stated in the labeling, accu-
rately measured, and shake individually.
PROCEDURE
ACCEPTANCE CRITERIA
Figure 1. Decision scheme for multiple-unit containers. (AV = average volume; LV = labeled volume)
For Single-Unit Containers (see Figure 2)—The average volume of liquid obtained from the 10 containers is NLT 100%,
and the volume of each of the 10 containers lies within the range of 95%–110% of the volume declared in the labeling. If (A)
the average volume is less than 100% of that declared in the labeling, but the volume of no container is outside the range of
95%–110%, or if (B) the average volume is NLT 100% and the volume of NMT 1 container is outside the range of 95%–110%,
but within the range of 90%–115%, perform the test on 20 additional containers. The average volume of liquid obtained from
the 30 containers is NLT 100% of the volume declared in the labeling; and the volume obtained from NMT 1 of the 30 con-
tainers is outside the range of 95%–110%, but within the range of 90%–115% of the volume declared on the labeling.
Figure 2. Decision scheme for single-unit containers. (AV = average volume; LV = labeled volume)
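As an illustrative aid (not part of the official text), the first-stage decision for single-unit containers can be sketched in Python, with volumes expressed as percentages of the labeled volume:

```python
def single_unit_decision(volumes_pct):
    """First-stage outcome for 10 single-unit containers.
    Returns 'pass', 'fail', or 'test 20 more' per conditions (A) and (B)."""
    avg = sum(volumes_pct) / len(volumes_pct)
    outside = [v for v in volumes_pct if not (95 <= v <= 110)]
    if avg >= 100 and not outside:
        return "pass"
    within_wide = all(90 <= v <= 115 for v in outside)
    if avg < 100 and not outside:                          # condition (A)
        return "test 20 more"
    if avg >= 100 and len(outside) == 1 and within_wide:   # condition (B)
        return "test 20 more"
    return "fail"
```

The second stage (30 containers total) would then apply the combined acceptance criteria given in the text.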
á699ñ DENSITY OF SOLIDS
Density refers to the average spatial distribution of mass in a material. The density of solids typically is expressed in g per
cm3, in contrast to fluids, where the density is commonly expressed in g per mL at a stated reference temperature.
The density of a solid particle can assume different values depending on the method used to measure the volume of the
particle. It is useful to distinguish among three different possibilities.
The true density of a substance is the average mass per unit volume, exclusive of all voids that are not a fundamental part of
the molecular packing arrangement. It is a property of a particular material, and hence should be independent of the method
of determination. The true density of a perfect crystal can be determined from the size and composition of the unit cell.
The pycnometric density, as measured by gas pycnometry, is a convenient density measurement for pharmaceutical powders.
In a gas pycnometer, the volume occupied by a known mass of powder is determined by measuring the volume of gas dis-
placed by the powder. The quotient of the mass and volume is the pycnometric density. The pycnometric density equals the
true density unless the material contains impenetrable voids, or sealed pores, that are inaccessible to the gas used in the pycn-
ometer.
The granular density includes contributions to particle volume from open pores smaller than some limiting size. The size limit
depends on the method of measurement. A common measurement technique is mercury porosimetry, where the limiting pore
size depends upon the maximum intrusion pressure. Because of the additional contribution from pore volume, the granular
density will never be greater than the true density. A related concept is the aerodynamic density, which is the density of the
particle with a volume defined by the aerodynamic envelope of the particle in a flowing stream. Both the closed and open
pores contribute to this volume, but the open pores fill with the permeating fluid. The aerodynamic density, therefore, de-
pends on the density of the test fluid if the particle is porous.
For brevity, the pycnometric density and the true density are both referred to as density. If needed, these quantities may be
distinguished based on the method of measurement.
The density of a material depends on the molecular packing. For gases and liquids, the density will depend only on tempera-
ture and pressure. For solids, the density will also vary with the crystal structure and degree of crystallinity. If the solids are
amorphous, the density may further depend upon the history of preparation and treatment. Therefore, unlike fluids, the densi-
ties of two chemically equivalent solids may be different, and this difference reflects a difference in solid-state structure. The
density of constituent particles is an important physical characteristic of pharmaceutical powders.
Beyond these definitions of particle density, the bulk density of a powder includes the contribution of interparticulate void
volume. Hence, the bulk density depends on both the density of powder particles and the packing of powder particles.
Gas pycnometry is a convenient and suitable method for the measurement of the density of powder particles. A simple sche-
matic of one type of gas pycnometer is shown in Figure 1.
The sample, with mass w and volume Vs, is placed inside a sealed test cell with an empty cell volume of Vc. The system refer-
ence pressure, Pr, is determined at the manometer while the valve that connects the reference volume with the test cell is
open. The valve is closed to separate the reference volume, Vr, from the test cell. The test cell is pressurized with the measure-
ment gas to an initial pressure, Pi. Then the valve is opened to connect the reference volume, Vr, with the test cell, and the
pressure drops to the final pressure, Pf. If the measurement gas behaves ideally under the conditions of measurement, the sam-
ple volume, Vs, is given by the following expression:
Vs = Vc − Vr/[(Pi/Pf) − 1]     (Equation 1)
in which Pi and Pf are measured relative to the reference pressure, Pr.
Details of the instrumental design may differ, but all gas pycnometers rely on the measurement of pressure changes as a refer-
ence volume is added to, or deleted from, the test cell.
The measured density is a volume-weighted average of the densities of individual powder particles. The density will be in
error if the test gas sorbs onto the powder or if volatile contaminants are evolved from the powder during the measurement.
Sorption is prevented by an appropriate choice of test gas. Helium is the common choice. Volatile contaminants in the powder
are removed by degassing the powder under a constant purge of helium prior to the measurement. Occasionally, powders
may have to be degassed under vacuum. Two consecutive readings should yield sample volumes that are equal within 0.2% if
volatile contaminants are not interfering with the measurements. Because volatiles may be evolved during the measurement,
the weight of the sample should be taken after the pycnometric measurement of volume.
Method
Ensure that the reference volume and the calibration volume have been determined for the gas pycnometer by an appropri-
ate calibration procedure. The test gas is helium, unless another gas is specified in the individual monograph. The temperature
of the gas pycnometer should be between 15° and 30° and should not vary by more than 2° during the course of the meas-
urement. Load the test cell with the substance under examination that has been prepared according to the individual mono-
graph. Where á699Dñ is indicated, dry the substance under examination as directed for Loss on drying in the monograph unless
other drying conditions are specified in the monograph Density of solids test. Where á699Uñ is indicated, the substance under
examination is used without drying. Use a quantity of powder recommended in the operating manual for the pycnometer.
Seal the test cell in the pycnometer, and purge the pycnometer system with the test gas according to the procedure given in
the manufacturer's operating instructions. If the sample must be degassed under vacuum, follow the recommendations in the
individual monographs and the instructions in the operating manual for the pycnometer.
The measurement sequence above describes the procedure for the gas pycnometer shown in Figure 1. If the pycnometer
differs in operation or in construction from the one shown in Figure 1, follow the operating procedure given in the manual for
the pycnometer.
Repeat the measurement sequence for the same powder sample until consecutive measurements of the sample volume, Vs,
agree to within 0.2%. Unload the test cell and measure the final powder weight, w. Calculate the pycnometric density, ρ, of
the sample according to Equation 2: ρ = w/Vs.
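As a worked illustration of the two equations above, the following sketch computes the sample volume and pycnometric density and applies the 0.2% repeatability check. The function names and numeric values are hypothetical, not part of the chapter; pressures are gauge values relative to the system reference pressure, Pr.

```python
def sample_volume(v_cell, v_ref, p_initial, p_final):
    # Equation 1: Vs = Vc - Vr / ((Pi / Pf) - 1)
    return v_cell - v_ref / ((p_initial / p_final) - 1)

def pycnometric_density(weight, v_sample):
    # Equation 2: rho = w / Vs
    return weight / v_sample

def consecutive_volumes_agree(v1, v2, tolerance=0.002):
    # Consecutive readings should agree within 0.2% before the
    # density is calculated.
    return abs(v1 - v2) / v2 <= tolerance

# Illustrative numbers only (volumes in cm3, pressures in gauge units, mass in g):
vs = sample_volume(v_cell=10.0, v_ref=2.0, p_initial=2.0, p_final=1.6)  # 2.0 cm3
rho = pycnometric_density(weight=3.0, v_sample=vs)                      # 1.5 g/cm3
```

Note that the weight w is taken after the volume measurement, as directed above, because volatiles may be evolved during the run.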
á701ñ DISINTEGRATION
This general chapter is harmonized with the corresponding texts of the European Pharmacopoeia and/or the Japanese Phar-
macopoeia. The texts of these pharmacopeias are therefore interchangeable, and the methods of the European Pharmacopoeia
and/or the Japanese Pharmacopoeia may be used for demonstration of compliance instead of the present general chapter.
These pharmacopeias have undertaken not to make any unilateral change to this harmonized chapter.
Portions of the present general chapter text that are national USP text, and therefore not part of the harmonized text, are
marked with symbols (♦♦) to specify this fact.
This test is provided to determine whether tablets or capsules disintegrate within the prescribed time when placed in a liquid
medium at the experimental conditions presented below. ♦Compliance with the limits on Disintegration stated in the individual
monographs is required except where the label states that the tablets or capsules are intended for use as troches, or are to be
chewed, or are designed as extended-release dosage forms or delayed-release dosage forms. Determine the type of units un-
der test from the labeling and from observation, and apply the appropriate procedure to 6 or more dosage units.♦
For the purposes of this test, disintegration does not imply complete solution of the unit or even of its active constituent.
Complete disintegration is defined as that state in which any residue of the unit, except fragments of insoluble coating or cap-
sule shell, remaining on the screen of the test apparatus or adhering to the lower surface of the disk, if used, is a soft mass
having no palpably firm core.
APPARATUS
The apparatus consists of a basket-rack assembly, a 1000-mL, low-form beaker, 138 to 160 mm in height and having an
inside diameter of 97 to 115 mm for the immersion fluid, a thermostatic arrangement for heating the fluid between 35° and
39°, and a device for raising and lowering the basket in the immersion fluid at a constant frequency rate between 29 and 32
cycles per minute through a distance of not less than 53 mm and not more than 57 mm. The volume of the fluid in the vessel
is such that at the highest point of the upward stroke the wire mesh remains at least 15 mm below the surface of the fluid and
descends to not less than 25 mm from the bottom of the vessel on the downward stroke. At no time should the top of the
basket-rack assembly become submerged. The time required for the upward stroke is equal to the time required for the down-
ward stroke, and the change in stroke direction is a smooth transition, rather than an abrupt reversal of motion. The basket-
rack assembly moves vertically along its axis. There is no appreciable horizontal motion or movement of the axis from the ver-
tical.
Basket-Rack Assembly—The basket-rack assembly consists of six open-ended transparent tubes, each 77.5 ± 2.5 mm long
and having an inside diameter of 20.7 to 23 mm and a wall 1.0 to 2.8 mm thick; the tubes are held in a vertical position by
two plates, each 88 to 92 mm in diameter and 5 to 8.5 mm in thickness, with six holes, each 22 to 26 mm in diameter, equi-
distant from the center of the plate and equally spaced from one another. Attached to the under surface of the lower plate is a
woven stainless steel wire cloth, which has a plain square weave with 1.8- to 2.2-mm apertures and with a wire diameter of
0.57 to 0.66 mm. The parts of the apparatus are assembled and rigidly held by means of three bolts passing through the two
plates. A suitable means is provided to suspend the basket-rack assembly from the raising and lowering device using a point on
its axis.
The design of the basket-rack assembly may be varied somewhat, provided the specifications for the glass tubes and the
screen mesh size are maintained. The basket-rack assembly conforms to the dimensions found in Figure 1.
Disks—The use of disks is permitted only where specified or allowed ♦in the monograph. If specified in the individual mono-
graph,♦ each tube is provided with a cylindrical disk 9.5 ± 0.15 mm thick and 20.7 ± 0.15 mm in diameter. The disk is made of
a suitable transparent plastic material having a specific gravity of between 1.18 and 1.20. Five parallel 2 ± 0.1-mm holes ex-
tend between the ends of the cylinder. One of the holes is centered on the cylindrical axis. The other holes are centered
6 ± 0.2 mm from the axis on imaginary lines perpendicular to the axis and parallel to each other. Four identical trapezoidal-
shaped planes are cut into the wall of the cylinder, nearly perpendicular to the ends of the cylinder. The trapezoidal shape is
symmetrical; its parallel sides coincide with the ends of the cylinder and are parallel to an imaginary line connecting the cen-
ters of two adjacent holes 6 mm from the cylindrical axis. The parallel side of the trapezoid on the bottom of the cylinder has a
length of 1.6 ± 0.1 mm, and its bottom edges lie at a depth of 1.5 to 1.8 mm from the cylinder's circumference. The parallel
side of the trapezoid on the top of the cylinder has a length of 9.4 ± 0.2 mm, and its center lies at a depth of 2.6 ± 0.1 mm
from the cylinder's circumference. All surfaces of the disk are smooth. If the use of disks is specified ♦in the individual mono-
graph♦, add a disk to each tube, and operate the apparatus as directed under Procedure. The disks conform to dimensions
found in Figure 1.¹
PROCEDURE
♦Uncoated Tablets—Place 1 dosage unit in each of the six tubes of the basket and, if prescribed, add a disk. Operate the♦
apparatus, using ♦water or♦ the specified medium as the immersion fluid, maintained at 37 ± 2°. At the end of the time limit
specified ♦in the monograph,♦ lift the basket from the fluid, and observe the tablets: all of the tablets have disintegrated com-
pletely. If 1 or 2 tablets fail to disintegrate completely, repeat the test on 12 additional tablets. The requirement is met if not
fewer than 16 of the total of 18 tablets tested are disintegrated.
¹ The use of automatic detection employing modified disks is permitted where the use of disks is specified or allowed. Such disks must comply with the requirements for density and dimension given in this chapter.
♦Plain-Coated Tablets—Apply the test for Uncoated Tablets, operating the apparatus for the time specified in the individual
monograph.
Delayed-Release (Enteric-Coated) Tablets—Place 1 tablet in each of the six tubes of the basket and, if the tablet has a
soluble external sugar coating, immerse the basket in water at room temperature for 5 minutes. Then operate the apparatus
using simulated gastric fluid TS maintained at 37 ± 2° as the immersion fluid. After 1 hour of operation in simulated gastric
fluid TS, lift the basket from the fluid, and observe the tablets: the tablets show no evidence of disintegration, cracking, or
softening. Operate the apparatus, using simulated intestinal fluid TS maintained at 37 ± 2° as the immersion fluid, for the time
specified in the monograph. Lift the basket from the fluid, and observe the tablets: all of the tablets disintegrate completely. If
1 or 2 tablets fail to disintegrate completely, repeat the test on 12 additional tablets: not fewer than 16 of the total of 18
tablets tested disintegrate completely.
Buccal Tablets—Apply the test for Uncoated Tablets. After 4 hours, lift the basket from the fluid, and observe the tablets: all
of the tablets have disintegrated. If 1 or 2 tablets fail to disintegrate completely, repeat the test on 12 additional tablets: not
fewer than 16 of the total of 18 tablets tested disintegrate completely.
Sublingual Tablets—Apply the test for Uncoated Tablets. At the end of the time limit specified in the individual mono-
graph: all of the tablets have disintegrated. If 1 or 2 tablets fail to disintegrate completely, repeat the test on 12 additional
tablets: not fewer than 16 of the total of 18 tablets tested disintegrate completely.
Hard Gelatin Capsules—Apply the test for Uncoated Tablets. Attach a removable wire cloth, which has a plain square
weave with 1.8- to 2.2-mm mesh apertures and with a wire diameter of 0.60 to 0.655 mm, as described under Basket-Rack
Assembly, to the surface of the upper plate of the basket-rack assembly. Observe the capsules within the time limit specified in
the individual monograph: all of the capsules have disintegrated except for fragments from the capsule shell. If 1 or 2 capsules
fail to disintegrate completely, repeat the test on 12 additional capsules: not fewer than 16 of the total of 18 capsules tested
disintegrate completely.
Soft Gelatin Capsules—Proceed as directed under Hard Gelatin Capsules.♦
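The staged acceptance rule repeated throughout the Procedure section (test 6 units; if 1 or 2 fail, test 12 additional units; pass when not fewer than 16 of the total of 18 disintegrate) can be sketched as follows. The helper name is hypothetical, provided only to make the decision logic explicit.

```python
def disintegration_passes(failures_of_6, failures_of_12=None):
    # Stage 1: all 6 units disintegrate completely -> pass.
    if failures_of_6 == 0:
        return True
    # Stage 2: 1 or 2 failures trigger a retest of 12 additional units;
    # "not fewer than 16 of 18" means no more than 2 failures in total.
    if failures_of_6 in (1, 2):
        if failures_of_12 is None:
            return False  # retest required but not yet performed
        return failures_of_6 + failures_of_12 <= 2
    # 3 or more failures among the first 6 cannot be recovered.
    return False
```

For example, 2 failures in the first stage still pass only if all 12 retest units disintegrate completely.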
á705ñ QUALITY ATTRIBUTES OF TABLETS LABELED AS HAVING A FUNCTIONAL SCORE
PURPOSE
This chapter provides quality attributes for products with approved labeling indicating that the tablets can be split to pro-
duce multiple portions that have an accurate fractional dose (labeled as functionally scored). The label claim of the split por-
tions should be a simple fractional part of the claim for the intact tablet based on the number of scores and the size of the split
portion (e.g., one-half, one-third, or one-quarter). At the time of splitting, the intact tablets should conform to the monograph
specification. With the exception of dose, each split portion from tablets labeled as having a functional score is expected to
conform to the quality attributes of the whole tablets. The split portions resulting from subdividing a functionally scored tablet
should conform to the tests for Splitting Tablets with Functional Scoring and Dissolution or Disintegration given in this chapter.
SCOPE
This chapter applies to tablets labeled as having a functional score and to the split portions that represent any labeled frac-
tion of the whole functionally scored tablet dose. Tablets should be split as part of the test procedure and the storage condi-
tions for the split portions should be defined in the test procedure. For Dissolution or Disintegration testing, analysts should use
only split portions from tablets determined to be acceptable by the Splitting Tablets with Functional Scoring test.
SPLITTING TABLETS WITH FUNCTIONAL SCORING
Test Procedure
An acceptable tablet breaks into the designed number of segments, and each split portion has NLT 75% and NMT 125% of
the expected weight of the split tablet portion. [NOTE—Set aside split tablet portions derived from acceptable tablets for subse-
quent testing for dissolution or disintegration.]
Acceptance criteria: NLT 28 of the 30 tablets are acceptable.
DISSOLUTION
Use split portions from tablets that are acceptable according to the Splitting Tablets with Functional Scoring test.
Immediate-Release Tablets
Dissolution for immediate-release tablets is performed at the S2 stage (see Dissolution á711ñ). Test 12 split tablet portions ac-
cording to the specified Medium, Apparatus, Times, and Analysis. The average of the 12 results is NLT Q, and no result is less
than Q – 15%.
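The acceptance criterion above (average of the 12 results NLT Q, no individual result less than Q − 15%) can be expressed as a simple check. The function name is hypothetical, and "Q − 15%" is read as percentage points of the labeled amount:

```python
def meets_split_portion_criterion(results, q):
    # results: % of labeled amount dissolved for each of the 12 split portions
    # q: the monograph Q value, in % of labeled amount
    if len(results) != 12:
        raise ValueError("test 12 split tablet portions")
    average_ok = sum(results) / 12 >= q        # average of 12 is NLT Q
    minimum_ok = min(results) >= q - 15        # no result less than Q - 15%
    return average_ok and minimum_ok
```

This corresponds to testing directly at the S2 stage, as the text directs, rather than proceeding through the staged S1/S2/S3 scheme of ⟨711⟩.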
Extended-Release Tablets
Perform dissolution testing of split tablet portions from extended-release tablets by one of the two alternative procedures.
The procedure to be used is specified in the monograph.
Procedure 1 (Procedure for Extended-Release Dosage Forms, Dissolution á711ñ): Individually test 12 split tablet portions and 12
intact tablets.
Medium, Apparatus, and Analysis: As given in the monograph following the appropriate test number found on the label-
ing. Dissolution profile test time points are determined as follows. From the appropriate dissolution test in the monograph, use
the time points given. At a minimum, use three time points with no more than one time point where the results exceed 85%
dissolved.
Calculate the similarity factor (f2) for the intact-tablet results and the split-tablet portion results:
f2 = 50 × log{[1 + (1/n)Σ(Rt − Tt)²]^(−0.5) × 100}
Rt = cumulative percentage of the labeled drug dissolved at each of the selected n time points of the intact tablets
Tt = cumulative percentage of the labeled drug dissolved at each of the selected n time points of the split tablet portions
Acceptance criteria: The calculated f2 is NLT 50 (acceptable range: 50–100).
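The similarity factor can be computed directly from the two dissolution profiles. This sketch assumes the conventional f2 formula, f2 = 50 × log10{100 × [1 + (1/n)Σ(Rt − Tt)²]^(−1/2)}; the function name is illustrative.

```python
import math

def similarity_factor_f2(intact, split):
    # intact: Rt values; split: Tt values; both are cumulative % of the
    # labeled drug dissolved at the same n selected time points.
    n = len(intact)
    mean_sq_diff = sum((r - t) ** 2 for r, t in zip(intact, split)) / n
    return 50 * math.log10(100 / math.sqrt(1 + mean_sq_diff))

# Identical profiles give the maximum value, f2 = 100; a calculated
# f2 of NLT 50 meets the acceptance criterion.
```

An average absolute difference of about 10% dissolved at every time point corresponds roughly to the f2 = 50 boundary, which is why the criterion is read as the split portions being "similar" to the intact tablets.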
Procedure 2 (Procedure for Extended-Release Dosage Forms, Dissolution á711ñ): Use a split-tablet portion as the dosage unit.
Individually test 12 dosage units.
Medium, Apparatus, Times, and Analysis: As given in the monograph following the appropriate test number found on the
labeling.
Acceptance criteria: The percentages of the labeled amount released at the times specified conform to the L2 level criteria
of Acceptance Table 2 in á711ñ.
DISINTEGRATION
Disintegration testing is necessary only when used as a surrogate for dissolution testing as specified in the monograph. Fol-
low the procedure using split portions from tablets that are acceptable according to the Splitting Tablets with Functional Scoring
test as the dosage unit (see Disintegration á701ñ).
á711ñ DISSOLUTION
This general chapter is harmonized with the corresponding texts of the European Pharmacopoeia and/or the Japanese Phar-
macopoeia. These pharmacopeias have undertaken not to make any unilateral change to this harmonized chapter.
Portions of the present general chapter text that are national USP text, and therefore not part of the harmonized text, are
marked with symbols (♦♦) to specify this fact.
This test is provided to determine compliance with the dissolution requirements ♦where stated in the individual monograph♦
for dosage forms administered orally. In this general chapter, a dosage unit is defined as 1 tablet or 1 capsule or the amount
specified. ♦Of the types of apparatus described herein, use the one specified in the individual monograph. Where the label
states that an article is enteric-coated, and where a dissolution or disintegration test that does not specifically state that it is to
be applied to delayed-release articles is included in the individual monograph, the procedure and interpretation given for De-
layed-Release Dosage Forms is applied unless otherwise specified in the individual monograph. For hard or soft gelatin capsules
and gelatin-coated tablets that do not conform to the Dissolution specification, repeat the test as follows. Where water or a
medium with a pH of less than 6.8 is specified as the Medium in the individual monograph, the same Medium specified may be
used with the addition of purified pepsin that results in an activity of 750,000 Units or less per 1000 mL. For media with a pH
of 6.8 or greater, pancreatin can be added to produce not more than 1750 USP Units of protease activity per 1000 mL.
USP Reference Standards á11ñ—USP Prednisone Tablets RS.♦
APPARATUS
Apparatus 1 (Basket Apparatus)
The assembly consists of the following: a vessel, which may be covered, made of glass or other inert, transparent material1; a
motor; a metallic drive shaft; and a cylindrical basket. The vessel is partially immersed in a suitable water bath of any conven-
ient size or heated by a suitable device such as a heating jacket. The water bath or heating device permits holding the temper-
ature inside the vessel at 37 ± 0.5° during the test and keeping the bath fluid in constant, smooth motion. No part of the as-
sembly, including the environment in which the assembly is placed, contributes significant motion, agitation, or vibration be-
yond that due to the smoothly rotating stirring element. An apparatus that permits observation of the specimen and stirring
element during the test is preferable. The vessel is cylindrical, with a hemispherical bottom and ♦with one of the following di-
mensions and capacities: for a nominal♦ capacity of 1 L, the height is 160 to 210 mm and its inside diameter is 98 to 106
mm; ♦for a nominal capacity of 2 L, the height is 280 to 300 mm and its inside diameter is 98 to 106 mm; and for a nominal
capacity of 4 L, the height is 280 to 300 mm and its inside diameter is 145 to 155 mm♦. Its sides are flanged at the top. A
fitted cover may be used to retard evaporation.2 The shaft is positioned so that its axis is not more than 2 mm at any point
from the vertical axis of the vessel and rotates smoothly and without significant wobble that could affect the results. A speed-
regulating device is used that allows the shaft rotation speed to be selected and maintained at the specified rate ♦given in the
individual monograph♦ within ±4%.
Shaft and basket components of the stirring element are fabricated of stainless steel, type 316, or other inert material, to the
specifications shown in Figure 1. A basket having a gold coating of about 0.0001 inch (2.5 µm) thick may be used. A dosage
unit is placed in a dry basket at the beginning of each test. The distance between the inside bottom of the vessel and the
bottom of the basket is maintained at 25 ± 2 mm during the test.
1 The materials should not sorb, react, or interfere with the specimen being tested.
2 If a cover is used, it provides sufficient openings to allow ready insertion of the thermometer and withdrawal of specimens.
Official text. Reprinted from First Supplement to USP38-NF33.
Apparatus 2 (Paddle Apparatus)
Use the assembly from Apparatus 1, except that a paddle formed from a blade and a shaft is used as the stirring element.
The shaft is positioned so that its axis is not more than 2 mm from the vertical axis of the vessel at any point and rotates
smoothly without significant wobble that could affect the results. The vertical center line of the blade passes through the axis
of the shaft so that the bottom of the blade is flush with the bottom of the shaft. The paddle conforms to the specifications
shown in Figure 2. The distance of 25 ± 2 mm between the bottom of the blade and the inside bottom of the vessel is main-
tained during the test. The metallic or suitably inert, rigid blade and shaft comprise a single entity. A suitable two-part detach-
able design may be used provided the assembly remains firmly engaged during the test. The paddle blade and shaft may be
coated with a suitable coating so as to make them inert. The dosage unit is allowed to sink to the bottom of the vessel before
rotation of the blade is started. A small, loose piece of nonreactive material, such as not more than a few turns of wire helix,
may be attached to dosage units that would otherwise float. An alternative sinker device is shown in Figure 2a. Other validated
sinker devices may be used.
Apparatus 3 (Reciprocating Cylinder)
The assembly consists of a set of cylindrical, flat-bottomed glass vessels; a set of glass reciprocating cylinders; inert fittings
(stainless steel type 316 or other suitable material), and screens that are made of suitable nonsorbing and nonreactive material
and that are designed to fit the tops and bottoms of the reciprocating cylinders; and a motor and drive assembly to recipro-
cate the cylinders vertically inside the vessels and, if desired, index the reciprocating cylinders horizontally to a different row of
vessels. The vessels are partially immersed in a suitable water bath of any convenient size that permits holding the temperature
at 37 ± 0.5° during the test. No part of the assembly, including the environment in which the assembly is placed, contributes
significant motion, agitation, or vibration beyond that due to the smooth, vertically reciprocating cylinder. A device is used
that allows the reciprocation rate to be selected and maintained at the specified dip rate ♦given in the individual monograph♦
within ±5%. An apparatus that permits observation of the specimens and reciprocating cylinders is preferable. The vessels are
provided with an evaporation cap that remains in place for the duration of the test. The components conform to the dimen-
sions shown in Figure 3 unless otherwise specified ♦in the individual monograph♦.
Apparatus 4 (Flow-Through Cell)
The assembly consists of a reservoir and a pump for the Dissolution Medium; a flow-through cell; and a water bath that main-
tains the Dissolution Medium at 37 ± 0.5°. Use the specified cell size ♦as given in the individual monograph♦.
The pump forces the Dissolution Medium upwards through the flow-through cell. The pump has a delivery range between
240 and 960 mL per hour, with standard flow rates of 4, 8, and 16 mL per minute. It must deliver a constant flow (±5% of the
nominal flow rate); the flow profile is sinusoidal with a pulsation of 120 ± 10 pulses per minute. A pump without pulsation may
also be used. Dissolution test procedures using a flow-through cell must be characterized with respect to rate and any pulsa-
tion.
The flow-through cell (see Figures 4 and 5), of transparent and inert material, is mounted vertically with a filter system
(specified in the individual monograph) that prevents escape of undissolved particles from the top of the cell; standard cell
diameters are 12 and 22.6 mm; the bottom cone is usually filled with small glass beads of about 1-mm diameter with one
bead of about 5 mm positioned at the apex to protect the fluid entry tube; and a tablet holder (see Figures 4 and 5) is available
for positioning of special dosage forms, for example, inlay tablets. The cell is immersed in a water bath, and the temperature is
maintained at 37 ± 0.5°.
Figure 4. Apparatus 4, large cell for tablets and capsules (top), tablet holder for the large cell (bottom). (All measurements
are expressed in mm unless noted otherwise.)
Figure 5. Apparatus 4, small cell for tablets and capsules (top), tablet holder for the small cell (bottom). (All measurements
are expressed in mm unless noted otherwise.)
The apparatus uses a clamp mechanism and two O-rings to assemble the cell. The pump is separated from the dissolution
unit in order to shield the latter against any vibrations originating from the pump. The position of the pump should not be on
a level higher than the reservoir flasks. Tube connections are as short as possible. Use suitably inert tubing, such as polytef,
with about 1.6-mm inner diameter and chemically inert flanged-end connections.
APPARATUS SUITABILITY
The determination of suitability of a test assembly to perform dissolution testing must include conformance to the dimen-
sions and tolerances of the apparatus as given above. In addition, critical test parameters that have to be monitored periodical-
ly during use include volume and temperature of the Dissolution Medium, rotation speed (Apparatus 1 and Apparatus 2), dip
rate (Apparatus 3), and flow rate of medium (Apparatus 4).
Determine the acceptable performance of the dissolution test assembly periodically. ♦The suitability for the individual appara-
tus is demonstrated by the Performance Verification Test.
Performance Verification Test, Apparatus 1 and 2—Test USP Prednisone Tablets RS according to the operating condi-
tions specified. The apparatus is suitable if the results obtained are within the acceptable range stated in the technical data
sheet specific to the lot used and the apparatus tested.
Performance Verification Test, Apparatus 3—[To come.]
Performance Verification Test, Apparatus 4—[To come.]♦
PROCEDURE
Apparatus 1 and Apparatus 2
Immediate-Release Dosage Forms—Place the stated volume of the Dissolution Medium (±1%) in the vessel of the specified apparatus ♦given in the individual
monograph♦, assemble the apparatus, equilibrate the Dissolution Medium to 37 ± 0.5°, and remove the thermometer. Place 1
dosage unit in the apparatus, taking care to exclude air bubbles from the surface of the dosage unit, and immediately operate
the apparatus at the specified rate ♦given in the individual monograph♦. Within the time interval specified, or at each of the
times stated, withdraw a specimen from a zone midway between the surface of the Dissolution Medium and the top of the
rotating basket or blade, not less than 1 cm from the vessel wall. [NOTE—Where multiple sampling times are specified, replace
the aliquots withdrawn for analysis with equal volumes of fresh Dissolution Medium at 37° or, where it can be shown that re-
placement of the medium is not necessary, correct for the volume change in the calculation. Keep the vessel covered for the
duration of the test, and verify the temperature of the mixture under test at suitable times.] Perform the analysis ♦as directed in
the individual monograph♦ using a suitable assay method.3 Repeat the test with additional dosage form units.
If automated equipment is used for sampling or the apparatus is otherwise modified, verification that the modified appara-
tus will produce results equivalent to those obtained with the standard apparatus described in this general chapter is necessa-
ry.
Dissolution Medium—A suitable dissolution medium is used. Use the solvent specified ♦in the individual monograph♦. The
volume specified refers to measurements made between 20° and 25°. If the Dissolution Medium is a buffered solution, adjust
the solution so that its pH is within 0.05 unit of the specified pH ♦given in the individual monograph♦. [NOTE—Dissolved gases
can cause bubbles to form, which may change the results of the test. If dissolved gases influence the dissolution results, dis-
solved gases should be removed prior to testing.4]
Time—Where a single time specification is given, the test may be concluded in a shorter period if the requirement for mini-
mum amount dissolved is met. Specimens are to be withdrawn only at the stated times within a tolerance of ±2%.
♦Procedure for a Pooled Sample for Immediate-Release Dosage Forms—Use this procedure where Procedure for a Pooled
Sample is specified in the individual monograph. Proceed as directed for Immediate-Release Dosage Forms under Apparatus 1
and Apparatus 2 in the Procedure section. Combine equal volumes of the filtered solutions of the six or twelve individual speci-
mens withdrawn, and use the pooled sample as the test specimen. Determine the average amount of the active ingredient
dissolved in the pooled sample.♦
Delayed-Release Dosage Forms—Use Method A or Method B and the apparatus specified ♦in the individual monograph♦. All test times stated are to be ob-
served within a tolerance of ±2%, unless otherwise specified.
Method A—
Procedure ♦(unless otherwise directed in the individual monograph)♦—
ACID STAGE—Place 750 mL of 0.1 N hydrochloric acid in the vessel, and assemble the apparatus. Allow the medium to equili-
brate to a temperature of 37 ± 0.5°. Place 1 dosage unit in the apparatus, cover the vessel, and operate the apparatus at the
specified rate ♦given in the monograph♦.
After 2 hours of operation in 0.1 N hydrochloric acid, withdraw an aliquot of the fluid, and proceed immediately as directed
under Buffer Stage.
Perform an analysis of the aliquot using a suitable assay method. ♦The procedure is specified in the individual monograph.♦
BUFFER STAGE—[NOTE—Complete the operations of adding the buffer and adjusting the pH within 5 minutes.]
With the apparatus operating at the rate specified ♦in the monograph♦, add to the fluid in the vessel 250 mL of 0.20 M triba-
sic sodium phosphate that has been equilibrated to 37 ± 0.5°. Adjust, if necessary, with 2 N hydrochloric acid or 2 N sodium
hydroxide to a pH of 6.8 ± 0.05. Continue to operate the apparatus for 45 minutes, or for the specified time ♦given in the
individual monograph♦. At the end of the time period, withdraw an aliquot of the fluid, and perform the analysis using a suita-
3 Test specimens are filtered immediately upon sampling unless filtration is demonstrated to be unnecessary. Use an inert filter that does not cause adsorption of
the active ingredient or contain extractable substances that would interfere with the analysis.
4 One method of deaeration is as follows: Heat the medium, while stirring gently, to about 41°, immediately filter under vacuum using a filter having a porosity of
0.45 µm or less, with vigorous stirring, and continue stirring under vacuum for about 5 minutes. Other validated deaeration techniques for removal of dissolved
gases may be used.
ble assay method. ♦The procedure is specified in the individual monograph. The test may be concluded in a shorter time peri-
od than that specified for the Buffer Stage if the requirement for the minimum amount dissolved is met at an earlier time.♦
Method B—
Procedure ♦(unless otherwise directed in the individual monograph)♦—
ACID STAGE—Place 1000 mL of 0.1 N hydrochloric acid in the vessel, and assemble the apparatus. Allow the medium to
equilibrate to a temperature of 37 ± 0.5°. Place 1 dosage unit in the apparatus, cover the vessel, and operate the apparatus at
the rate specified ♦in the monograph♦. After 2 hours of operation in 0.1 N hydrochloric acid, withdraw an aliquot of the fluid,
and proceed immediately as directed under Buffer Stage.
Perform an analysis of the aliquot using a suitable assay method. ♦The procedure is specified in the individual monograph.♦
BUFFER STAGE—[NOTE—For this stage of the procedure, use buffer that previously has been equilibrated to a temperature of
37 ± 0.5°.] Drain the acid from the vessel, and add to the vessel 1000 mL of pH 6.8 phosphate buffer, prepared by mixing 0.1
N hydrochloric acid with 0.20 M tribasic sodium phosphate (3:1) and adjusting, if necessary, with 2 N hydrochloric acid or 2 N
sodium hydroxide to a pH of 6.8 ± 0.05. [NOTE—This may also be accomplished by removing from the apparatus the vessel
containing the acid and replacing it with another vessel containing the buffer and transferring the dosage unit to the vessel
containing the buffer.]
Continue to operate the apparatus for 45 minutes, or for the specified time ♦given in the individual monograph♦. At the end
of the time period, withdraw an aliquot of the fluid, and perform the analysis using a suitable assay method. ♦The procedure is
specified in the individual monograph. The test may be concluded in a shorter time period than that specified for the Buffer
Stage if the requirement for minimum amount dissolved is met at an earlier time.♦
Apparatus 3
Immediate-Release Dosage Forms—Place the stated volume of the Dissolution Medium in each vessel of the apparatus, assemble the apparatus, equilibrate the
Dissolution Medium to 37 ± 0.5°, and remove the thermometer. Place 1 dosage-form unit in each of the six reciprocating cylin-
ders, taking care to exclude air bubbles from the surface of each dosage unit, and immediately operate the apparatus as speci-
fied ♦in the individual monograph♦. During the upward and downward stroke, the reciprocating cylinder moves through a to-
tal distance of 9.9 to 10.1 cm. Within the time interval specified, or at each of the times stated, raise the reciprocating cylin-
ders and withdraw a portion of the solution under test from a zone midway between the surface of the Dissolution Medium and
the bottom of each vessel. Perform the analysis as directed ♦in the individual monograph♦. If necessary, repeat the test with
additional dosage-form units.
Dissolution Medium—Proceed as directed for Immediate-Release Dosage Forms under Apparatus 1 and Apparatus 2.
Time—Proceed as directed for Immediate-Release Dosage Forms under Apparatus 1 and Apparatus 2.
Proceed as directed for Delayed-Release Dosage Forms, Method B under Apparatus 1 and Apparatus 2 using one row of vessels
for the acid stage media and the following row of vessels for the buffer stage media and using the volume of medium specified
(usually 300 mL).
Time—Proceed as directed for Immediate-Release Dosage Forms under Apparatus 1 and Apparatus 2.
Place the glass beads into the cell specified ♦in the monograph♦. Place 1 dosage unit on top of the beads or, if specified ♦in
the monograph♦, on a wire carrier. Assemble the filter head, and fix the parts together by means of a suitable clamping device.
Introduce by the pump the Dissolution Medium warmed to 37 ± 0.5° through the bottom of the cell to obtain the flow rate
specified ♦in the individual monograph♦ and measured with an accuracy of 5%. Collect the eluate by fractions at each of the
times stated. Perform the analysis as directed ♦in the individual monograph♦. Repeat the test with additional dosage-form units.
Dissolution Medium—Proceed as directed for Immediate-Release Dosage Forms under Apparatus 1 and Apparatus 2.
Time—Proceed as directed for Immediate-Release Dosage Forms under Apparatus 1 and Apparatus 2.
Proceed as directed for Delayed-Release Dosage Forms under Apparatus 1 and Apparatus 2, using the specified media.
Time—Proceed as directed for Delayed-Release Dosage Forms under Apparatus 1 and Apparatus 2.
INTERPRETATION
Unless otherwise specified ♦in the individual monograph♦, the requirements are met if the quantities of active ingredient dis-
solved from the dosage units tested conform to Acceptance Table 1. Continue testing through the three stages unless the re-
sults conform at either S1 or S2. The quantity, Q, is the amount of dissolved active ingredient ♦specified in the individual mono-
graph♦, expressed as a percentage of the labeled content of the dosage unit; the 5%, 15%, and 25% values in Acceptance
Table 1 are percentages of the labeled content so that these values and Q are in the same terms.
Acceptance Table 1
Stage  Number Tested  Acceptance Criteria
S1     6              Each unit is not less than Q + 5%.
S2     6              Average of 12 units (S1 + S2) is equal to or greater than Q, and no unit is less than Q − 15%.
S3     12             Average of 24 units (S1 + S2 + S3) is equal to or greater than Q, not more than 2 units are less than Q − 15%, and no unit is less than Q − 25%.
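For illustration only (this helper is not part of the official text, and its name is hypothetical), the staged logic of Acceptance Table 1 can be sketched in Python, taking the cumulative list of individual percent-dissolved results and the monograph's Q value:

```python
from statistics import mean

def acceptance_table_1(results, q):
    """Return True if the % dissolved results meet Acceptance Table 1.

    results: individual % dissolved values, cumulative across stages
             (6 units at S1, 12 at S1 + S2, 24 at S1 + S2 + S3).
    q: the monograph's Q, as a percentage of labeled content.
    """
    n = len(results)
    if n == 6:    # S1: each unit not less than Q + 5%
        return all(r >= q + 5 for r in results)
    if n == 12:   # S2: average >= Q, and no unit below Q - 15%
        return mean(results) >= q and all(r >= q - 15 for r in results)
    if n == 24:   # S3: average >= Q, at most 2 units below Q - 15%,
                  # and none below Q - 25%
        low = sum(1 for r in results if r < q - 15)
        return (mean(results) >= q and low <= 2
                and all(r >= q - 25 for r in results))
    raise ValueError("expected results for 6, 12, or 24 units")
```

Testing stops at S1 or S2 as soon as the corresponding cumulative criteria are met; otherwise 6 or 12 additional units are tested and the cumulative list is re-evaluated.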
♦Immediate-Release Dosage Forms Pooled Sample—Unless otherwise specified in the individual monograph, the require-
ments are met if the quantities of active ingredient dissolved from the pooled sample conform to the accompanying Accept-
ance Table for a Pooled Sample. Continue testing through the three stages unless the results conform at either S1 or S2. The
quantity, Q, is the amount of dissolved active ingredient specified in the individual monograph, expressed as a percentage of
the labeled content.
Acceptance Table for a Pooled Sample
Stage  Number Tested  Acceptance Criteria
S1     6              Average amount dissolved is not less than Q + 10%.
S2     6              Average amount dissolved (S1 + S2) is equal to or greater than Q + 5%.
S3     12             Average amount dissolved (S1 + S2 + S3) is equal to or greater than Q.
Unless otherwise specified ♦in the individual monograph♦, the requirements are met if the quantities of active ingredient dis-
solved from the dosage units tested conform to Acceptance Table 2. Continue testing through the three levels unless the results
conform at either L1 or L2. Limits on the amounts of active ingredient dissolved are expressed in terms of the percentage of
labeled content. The limits embrace each value of Qi, the amount dissolved at each specified fractional dosing interval. Where
more than one range is specified ♦in the individual monograph♦, the acceptance criteria apply individually to each range.
Acceptance Table 2
Level  Number Tested  Criteria
L1     6              No individual value lies outside each of the stated ranges, and no individual value is less than the stated amount at the final test time.
L2     6              The average value of the 12 units (L1 + L2) lies within each of the stated ranges and is not less than the stated amount at the final test time; none is more than 10% of labeled content outside each of the stated ranges; and none is more than 10% of labeled content below the stated amount at the final test time.
Acid Stage—Unless otherwise specified ♦in the individual monograph♦, the requirements of this portion of the test are met
if the quantities, based on the percentage of the labeled content, of active ingredient dissolved from the units tested conform
to Acceptance Table 3. Continue testing through all levels unless the results of both acid and buffer stages conform at an earlier
level.
Acceptance Table 3
Level  Number Tested  Criteria
A1     6              No individual value exceeds 10% dissolved.
A2     6              Average of the 12 units (A1 + A2) is not more than 10% dissolved, and no individual unit is greater than 25% dissolved.
A3     12             Average of the 24 units (A1 + A2 + A3) is not more than 10% dissolved, and no individual unit is greater than 25% dissolved.
Buffer Stage—Unless otherwise specified ♦in the individual monograph♦, the requirements are met if the quantities of active
ingredient dissolved from the units tested conform to Acceptance Table 4. Continue testing through the three levels unless the
results of both stages conform at an earlier level. The value of Q in Acceptance Table 4 is 75% dissolved unless otherwise speci-
fied ♦in the individual monograph♦. The quantity, Q, ♦specified in the individual monograph♦ is the total amount of active in-
gredient dissolved in both the Acid and Buffer Stages, expressed as a percentage of the labeled content. The 5%, 15%, and
25% values in Acceptance Table 4 are percentages of the labeled content so that these values and Q are in the same terms.
Acceptance Table 4
Level  Number Tested  Criteria
B1     6              Each unit is not less than Q + 5%.
B2     6              Average of 12 units (B1 + B2) is equal to or greater than Q, and no unit is less than Q − 15%.
B3     12             Average of 24 units (B1 + B2 + B3) is equal to or greater than Q, not more than 2 units are less than Q − 15%, and no unit is less than Q − 25%.
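For illustration only (hypothetical helpers, not part of the official text), the paired acid- and buffer-stage criteria of Acceptance Tables 3 and 4 can be sketched as:

```python
from statistics import mean

def acid_stage_ok(pct_dissolved):
    """Acceptance Table 3 (acid stage), given the cumulative list of
    individual % dissolved values (6, 12, or 24 units)."""
    n = len(pct_dissolved)
    if n == 6:          # A1: no individual value exceeds 10% dissolved
        return all(v <= 10 for v in pct_dissolved)
    if n in (12, 24):   # A2 / A3: average <= 10%, no unit above 25%
        return (mean(pct_dissolved) <= 10
                and all(v <= 25 for v in pct_dissolved))
    raise ValueError("expected 6, 12, or 24 units")

def buffer_stage_ok(pct_dissolved, q=75):
    """Acceptance Table 4 (buffer stage); Q defaults to 75% dissolved."""
    n = len(pct_dissolved)
    if n == 6:    # B1: each unit not less than Q + 5%
        return all(v >= q + 5 for v in pct_dissolved)
    if n == 12:   # B2: average >= Q, no unit below Q - 15%
        return mean(pct_dissolved) >= q and all(v >= q - 15 for v in pct_dissolved)
    if n == 24:   # B3: average >= Q, at most 2 units below Q - 15%,
                  # none below Q - 25%
        low = sum(1 for v in pct_dissolved if v < q - 15)
        return (mean(pct_dissolved) >= q and low <= 2
                and all(v >= q - 25 for v in pct_dissolved))
    raise ValueError("expected 6, 12, or 24 units")
```

Both stages must conform before testing may stop at a given level.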
To determine the range of temperatures within which an official liquid distills, or the percentage of the material that distills
between two specified temperatures, use Method I or Method II as directed in the individual monograph. The lower limit of the
range is the temperature indicated by the thermometer when the first drop of condensate leaves the tip of the condenser, and
the upper limit is the Dry Point, i.e., the temperature at which the last drop of liquid evaporates from the lowest point in the
distillation flask, without regard to any liquid remaining on the side of the flask, or the temperature observed when the propor-
tion specified in the individual monograph has been collected.
NOTE—Cool all liquids that distill below 80° to between 10° and 15° before measuring the sample to be distilled.
Method I
Apparatus—Use apparatus similar to that specified for Method II, except that the distilling flask is of 50- to 60-mL capacity,
and the neck of the flask is 10 to 12 cm long and 14 to 16 mm in internal diameter. The perforation in the upper insulating
board, if one is used, should be such that when the flask is set into it, the portion of the flask below the upper surface of the
insulating material has a capacity of 3 to 4 mL.
Official text. Reprinted from First Supplement to USP38-NF33.
DSC Physical Tests / á730ñ Plasma Spectrochemistry 275
Procedure—Proceed as directed for Method II, but place in the flask only 25 mL of the liquid to be tested.
Method II
t = t0 + 0.00012(760 − p)(273 + t0)

in which t is the corrected boiling temperature, on the Celsius scale; t0 is the measured boiling temperature, on the Celsius scale; and p is the barometric pressure at the time of measurement, in mm Hg.
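The variable definitions above describe a barometric correction of the observed boiling temperature. Assuming the standard Sydney Young form, t = t0 + 0.00012(760 − p)(273 + t0) (the coefficient should be confirmed against the official chapter), the correction can be computed as:

```python
def corrected_bp(t_observed, p_mmhg):
    """Barometric correction of an observed boiling temperature.

    Assumes the Sydney Young form t = t0 + 0.00012*(760 - p)*(273 + t0),
    consistent with the variable definitions in the text (t and t0 in
    degrees Celsius, p in mm Hg). Confirm the coefficient against the
    official chapter before use.
    """
    return t_observed + 0.00012 * (760 - p_mmhg) * (273 + t_observed)
```

For example, a liquid observed to boil at 98.0° under 740 mm Hg corrects to roughly 98.9°.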
Plasma-based instrumental techniques that are useful for pharmaceutical analyses fall into two major categories: those based
on the inductively coupled plasma, and those where a plasma is generated at or near the surface of the sample. An inductively
coupled plasma (ICP) is a high-temperature excitation source that desolvates, vaporizes, and atomizes aerosol samples and
ionizes the resulting atoms. The excited analyte ions and atoms can then be detected by observing their emission lines, a method termed inductively coupled plasma–atomic emission spectroscopy (ICP–AES), also known as inductively
coupled plasma–optical emission spectroscopy (ICP–OES); or the excited or ground state ions can be determined by a techni-
que known as inductively coupled plasma–mass spectrometry (ICP–MS). ICP–AES and ICP–MS may be used for either single- or
multi-element analysis, and they provide good general-purpose procedures for either sequential or simultaneous analyses over
an extended linear range with good sensitivity.
An emerging technique in plasma spectrochemistry is laser-induced breakdown spectroscopy (LIBS). In LIBS, a solid, liquid,
or gaseous sample is heated directly by a pulsed laser, or indirectly by a plasma generated by the laser. As a result, the sample
is volatilized at the laser beam contact point, and the volatilized constituents are reduced to atoms, molecular fragments, and
larger clusters in the plasma that forms at or just above the surface of the sample. Emission from the atoms and ions in the
sample is collected, typically using fiber optics or a remote viewing system, and measured using an array detector such as a
charge-coupled device (CCD). LIBS can be used for qualitative analysis or against a working standard curve for quantitative
analysis. Although LIBS is not currently in wide use by the pharmaceutical industry, it might be suited for at-line or on-line
measurements in a production setting as well as in the laboratory. Because of its potential, it should be considered a viable
technique for plasma spectrochemistry in the pharmaceutical laboratory. However, because LIBS is still an emerging technique,
details will not be further discussed in this general chapter.1
SAMPLE PREPARATION
Sample preparation is critical to the success of plasma-based analysis and is the first step in performing any analysis via ICP–
AES or ICP–MS. Plasma-based techniques are heavily dependent on sample transport into the plasma, and because ICP–AES
and ICP–MS share the same sample introduction system, the means by which samples are prepared may be applicable to ei-
ther technique. The most conventional means by which samples are introduced into the plasma is via solution nebulization. If
solution nebulization is employed, solid samples must be dissolved in order to be presented into the plasma for analysis. Sam-
ples may be dissolved in any appropriate solvent. There is a strong preference for the use of aqueous or dilute nitric acid solu-
tions, because there are minimal interferences with these solvents compared to other solvent choices. Hydrogen peroxide, hy-
drochloric acid, sulfuric acid, perchloric acid, combinations of acids, or various concentrations of acids can all be used to dis-
solve the sample for analysis. Dilute hydrofluoric acid may also be used, but great care must be taken to ensure the safety of
the analyst, as well as to protect the quartz sample introduction equipment when using this acid; specifically, the nebulizer,
spray chamber, and inner torch tube should be manufactured from hydrofluoric acid-tolerant materials. Additionally, alterna-
tive means of dissolving the sample can be employed. These include, but are not limited to, the use of dilute bases, straight or
diluted organic solvents, combinations of acids or bases, and combinations of organic solvents.
When samples are introduced into the plasma via solution nebulization, it is important to consider the potential matrix ef-
fects and interferences that might arise from the solvent. The use of an appropriate internal standard and/or matching the
standard matrix with samples should be applied for ICP–AES and ICP–MS analyses in cases where accuracy and precision are
not adequate. In either event, the selection of an appropriate internal standard should consider the analyte in question, ioniza-
tion energy, wavelengths or masses, and the nature of the sample matrix.
Where a sample is found not to be soluble in any acceptable solvent, a variety of digestion techniques can be employed.
These include hot-plate digestion and microwave-assisted digestions, including open-vessel and closed-vessel approaches. The
decision regarding the type of digestion technique to use depends on the nature of the sample being digested, as well as on
the analytes of interest.
Open-vessel digestion is generally not recommended for the analysis of volatile metals, e.g., selenium and mercury. The suit-
ability of a digestion technique, whether open-vessel or closed-vessel, should be supported by spike recovery experiments in
order to verify that, within an acceptable tolerance, volatile metals have not been lost during sample preparation. Use acids,
bases, and hydrogen peroxide of ultra-high purity, especially when ICP–MS is employed. Deionized water must have a resistivity of at least 18 megohm-cm. Check diluents for interferences before they are used in an analysis. Because it is not always possible to obtain organic solvents that are free of metals, use organic solvents of the highest quality possible with regard to metal contaminants.
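As an illustration of the spike-recovery check described above (the function name and any acceptance tolerance are assumptions, not official requirements):

```python
def spike_recovery_pct(spiked_result, unspiked_result, amount_added):
    """Percent recovery of a spiked analyte (e.g., a volatile metal such
    as Se or Hg): 100 * (found in spiked sample - found in unspiked
    sample) / amount added. The acceptable tolerance (for example,
    80%-120%) is method-specific and must be justified."""
    return 100.0 * (spiked_result - unspiked_result) / amount_added
```

A recovery well below 100% after an open-vessel digestion would suggest loss of the volatile analyte during sample preparation.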
It is important to consider the selection of the type, material of construction, pretreatment, and cleaning of analytical lab-
ware used in ICP–AES and ICP–MS analyses. The material must be inert and, depending on the specific application, resistant to
caustics, acids, and/or organic solvents. For some analyses, diligence must be exercised to prevent the adsorption of analytes
onto the surface of a vessel, particularly in ultra-trace analyses. Contamination of the sample solutions by metals and ions present in the container can also lead to inaccurate results.
The use of labware that is not certified to meet Class A tolerances for volumetric flasks is acceptable if the linearity, accuracy,
and precision of the method have been experimentally demonstrated to be suitable for the purpose at hand.
SAMPLE INTRODUCTION
There are two ways to introduce the sample into the nebulizer: by a peristaltic pump and by self-aspiration. The peristaltic
pump is preferred and serves to ensure that the flow rate of sample and standard solution to the nebulizer is the same irrespec-
tive of sample viscosity. In some cases, where a peristaltic pump is not required, self-aspiration can be used.
A wide variety of nebulizer types is available, including pneumatic (concentric and cross-flow), grid, and ultrasonic nebuliz-
ers. Micronebulizers, high-efficiency nebulizers, direct-injection high-efficiency nebulizers, and flow-injection nebulizers are also
available. The selection of the nebulizer for a given analysis should consider the sample matrix, analyte, and desired sensitivity.
Some nebulizers are better suited for use with viscous solutions or those containing a high concentration of dissolved solids,
whereas others are better suited for use with organic solutions.
Note that the self-aspiration of a fluid is due to the Bernoulli, or Venturi, effect. Not all types of nebulizers will support self-
aspiration. The use of a concentric nebulizer, for example, is required for self-aspiration of a solution.
Once a sample leaves the nebulizer as an aerosol, it enters the spray chamber, which is designed to permit only the smallest
droplets of sample solution into the plasma; as a result, typically only 1% to 2% of the sample aerosol reaches the ICP, al-
though some special-purpose nebulizers have been designed that permit virtually all of the sample aerosol to enter the ICP. As
with nebulizers, there is more than one type of spray chamber available for use with ICP–AES or ICP–MS. Examples include the
Scott double-pass spray chamber, as well as cyclonic spray chambers of various configurations. The spray chamber must be compatible with the sample and solvent and must equilibrate and wash out in as short a time as possible. When a spray chamber is selected, the nature of the sample matrix, the nebulizer, the desired sensitivity, and the analyte should all be considered.
1 Yueh F-Y, Singh JP, Zhang H. Laser-induced breakdown spectroscopy, elemental analysis. In: Encyclopedia of Analytical Chemistry: Instrumentation and Applications. New York: Wiley; 2000:2066–2087.
Gas and liquid chromatography systems can be interfaced with ICP–AES and ICP–MS for molecular speciation, ionic specia-
tion, or other modes of separation chemistry, based on elemental emission or mass spectrometry.
Ultimately, the selection of sample introduction hardware should be demonstrated experimentally to provide sufficient spe-
cificity, sensitivity, linearity, accuracy, and precision of the analysis at hand.
In addition to solution nebulization, it is possible to analyze solid samples directly via laser ablation (LA). In such instances,
the sample enters the torch as a solid aerosol. LA–ICP–AES and LA–ICP–MS are better suited for qualitative analyses of pharma-
ceutical compounds because of the difficulty in obtaining appropriate standards. Nonetheless, quantitative analyses can be
performed if it can be demonstrated through appropriate method validation that the available standards are adequate.2
STANDARD PREPARATION
Single- or multi-element standard solutions, whose concentrations are traceable to primary reference standards, such as
those of the National Institute of Standards and Technology (NIST), can be purchased for use in the preparation of working
standard solutions. Alternatively, standard solutions of elements can be accurately prepared from standard materials and their
concentrations, determined independently, as appropriate. Working standard solutions, especially those used for ultra-trace
analyses, may have limited shelf life. As a general rule, working standard solutions should be retained for no more than 24
hours unless stability is demonstrated experimentally. The selection of the standard matrix is of fundamental importance in the
preparation of element standard solutions. Spike recovery experiments should be conducted with specific sample matrices in
order to determine the accuracy of the method. If sample matrix effects cause excessive inaccuracies, standards, blanks, and
sample solutions should be matrix matched, if possible, in order to minimize matrix interferences.
In cases where matrix matching is not possible, an appropriate internal standard or the method of standard additions should
be used for ICP–AES or ICP–MS. Internal standards can also be introduced through a T connector into the sample uptake tub-
ing. In any event, the selection of an appropriate internal standard should consider the analytes in question, their ionization
and excitation energies, their chemical behavior, their wavelengths or masses, and the nature of the sample matrix. Ultimately,
the selection of an internal standard should be demonstrated experimentally to provide sufficient specificity, sensitivity, lineari-
ty, accuracy, and precision of the analysis at hand.
The method of standard additions involves adding a known concentration of the analyte element to the sample at no fewer
than two concentration levels plus an unspiked sample preparation. The instrument response is plotted against the concentra-
tion of the added analyte element, and a linear regression line is drawn through the data points. The absolute value of the x-
intercept multiplied by any dilution factor is the concentration of the analyte in the sample.
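The standard-additions calculation described above, an ordinary least-squares fit followed by taking the absolute value of the x-intercept, can be sketched as follows (the function name is hypothetical):

```python
def standard_additions_conc(added, responses, dilution_factor=1.0):
    """Method of standard additions.

    added: spike concentrations of the analyte element, including 0.0
           for the unspiked sample preparation.
    responses: the matching instrument responses.
    The least-squares line through (added, response) is extrapolated to
    zero response; |x-intercept| times the dilution factor estimates the
    analyte concentration in the original sample.
    """
    n = len(added)
    mx = sum(added) / n
    my = sum(responses) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(added, responses))
    sxx = sum((x - mx) ** 2 for x in added)
    slope = sxy / sxx
    intercept = my - slope * mx
    x_intercept = -intercept / slope
    return abs(x_intercept) * dilution_factor
```

For example, an unspiked response of 3.0 with responses of 5.0 and 7.0 at spikes of 1.0 and 2.0 units extrapolates to an x-intercept of −1.5, i.e., 1.5 units in the diluted sample.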
The presence of dissolved carbon at concentrations of a small percentage in aqueous solutions enhances ionization of seleni-
um and arsenic in an inductively coupled argon plasma. This phenomenon frequently results in a positive bias for ICP–AES and
ICP–MS selenium and arsenic quantification measurements, which can be remedied by using the method of standard addi-
tions or by adding a small percentage of carbon, such as analytically pure glacial acetic acid, to the linearity standards.
ICP
The components that make up the ICP excitation source include the argon gas supply, torch, radio frequency (RF) induction
coil, impedance-matching unit, and RF generator. Argon gas is almost universally used in an ICP. The plasma torch consists of
three concentric tubes designated as the inner, the intermediate, and the outer tube. The intermediate and outer tubes are
almost universally made of quartz. The inner tube can be made of quartz or alumina if the analysis is conducted with solutions
containing hydrofluoric acid. The nebulizer gas flow carries the aerosol of the sample solution into and through the inner tube
of the torch and into the plasma. The intermediate tube carries the intermediate (sometimes referred to as the auxiliary) gas.
The intermediate gas flow helps to lift the plasma off the inner and intermediate tubes to prevent their melting and the depo-
sition of carbon and salts on the inner tube. The outer tube carries the outer (sometimes referred to as the plasma or coolant)
gas, which is used to form and sustain the toroidal plasma. The tangential flow of the coolant gas through the torch constricts
the plasma and prevents the ICP from expanding to fill the outer tube, keeping the torch from melting. An RF induction coil,
also called the load coil, surrounds the torch and produces an oscillating magnetic field, which in turn sets up an oscillating
current in the ions and electrons produced from the argon. The impedance-matching unit serves to efficiently couple the RF
energy from the generator to the load coil. The unit can be of either the active or the passive type. An active matching unit
adjusts the impedance of the RF power by means of a capacitive network, whereas the passive type adjusts the impedance
directly through the generator circuitry. Within the load coil of the RF generator, the energy transfer between the coil and the
argon creates a self-sustaining plasma. Collisions of the ions and electrons liberated from the argon ionize and excite the ana-
lyte atoms in the high-temperature plasma. The plasma operates at temperatures of 6,000 to 10,000 K, so most covalent
bonds and analyte-to-analyte interactions have been eliminated.
2 For additional information on laser ablation, see Russo R, Mao X, Borisov O, Liu H. Laser ablation in atomic spectrometry. In: Encyclopedia of Analytical Chemistry:
Instrumentation and Applications. New York: Wiley; 2000.
ICP–AES
An inductively coupled plasma can use either an optical or a mass spectral detection system. In the former case, ICP–AES,
analyte detection is achieved at an emission wavelength of the analyte in question. Because of differences in technology, a
wide variety of ICP–AES systems are available, each with different capabilities, as well as different advantages and disadvantag-
es. Simultaneous-detection systems are capable of analyzing multiple elements at the same time, thereby shortening analysis
time and improving background detection and correction. Sequential systems move from one wavelength to the next to per-
form analyses, and often provide a larger number of analytical lines from which to choose. Array detectors, including charge-
coupled devices and charge-injection devices, with detectors on a chip, make it possible to combine the advantages of both
simultaneous and sequential systems. These types of detection devices are used in the most powerful spectrometers, providing
rapid analysis and a wide selection of analytical lines.
The ICP can be viewed in either axial or radial (also called lateral) mode. The torch is usually positioned horizontally in axially
viewed plasmas and is viewed end on, whereas it is positioned vertically in radially viewed plasmas and is viewed from the side.
Axial viewing of the plasma can provide higher signal-to-noise ratios (better detection limits and precision); however, it also
incurs greater matrix and spectral interferences. Methods validated on an instrument with a radial configuration will probably
not be completely transferable to an instrument with an axial configuration, and vice versa.
Additionally, dual-view instrument systems are available, making it possible for the analyst to take advantage of either torch
configuration. The selection of the optimal torch configuration will depend on the sample matrix, analyte in question, analyti-
cal wavelength(s) used, cost of instrumentation, required sensitivity, and type of instrumentation available in a given laborato-
ry.
Regardless of torch configuration or detector technology, ICP–AES is a technique that provides a qualitative and/or quantita-
tive measurement of the optical emission from excited atoms or ions at specific wavelengths. These measurements are then
used to determine the analyte concentration in a given sample. Upon excitation, an atom or atomic ion emits an array of different frequencies of light that are characteristic of the distinct energy transitions allowed for that element. The intensity of the
light is generally proportional to the analyte concentration. It is necessary to correct for the background emission from the
plasma. Sample concentration measurements are usually determined from a working curve of known standards over the con-
centration range of interest. It is, however, also possible to perform a single-point calibration under certain circumstances, such
as with limit tests, if the methodology has been validated for sufficient specificity, sensitivity, linearity, accuracy, precision, rug-
gedness, and robustness.
Because there are distinct transitions between atomic energy levels, and because the atoms in an ICP are rather dilute, emis-
sion lines have narrow bandwidths. However, because the emission spectra from the ICP contain many lines, and because
“wings” of these lines overlap to produce a nearly continuous background on top of the continuum that arises from the re-
combination of argon ions with electrons, a high-resolution spectrometer is required in ICP–AES. The decision regarding which
spectral line to measure should include an evaluation of potential spectral interferences. All atoms in a sample are excited si-
multaneously; however, the presence of multiple elements in some samples can lead to spectral overlap. Spectral interference
can also be caused by background emission from the sample or plasma. Modern ICPs usually have background correction
available, and a number of background correction techniques can be applied. Simple background correction typically involves
measuring the background emission intensity at some point away from the main peak and subtracting this value from the total
signal being measured. Mathematical modeling to subtract the interfering signal as a background correction can also be per-
formed with certain types of ICP–AES spectrometers.
The selection of the analytical spectral line is critical to the success of an ICP–AES analysis, regardless of torch configuration
or detector type. Though some wavelengths are preferred, the final choice must be made in the context of the sample matrix,
the type of instrument being used, and the sensitivity required. Analysts might choose to start with the wavelengths recom-
mended by the manufacturer of their particular instrument and select alternative wavelengths based on manufacturer recom-
mendations or published wavelength tables.3,4,5,6,7 Ultimately, the selection of analytical wavelengths should be demonstrated
experimentally to provide sufficient specificity, sensitivity, linearity, accuracy, and precision of the analysis at hand.
Forward power, gas flow rates, viewing height, and torch position can all be optimized to provide the best signal. However,
it must also be kept in mind that these same variables can influence matrix and spectral interferences.
In general, it is desirable to operate the ICP under robust conditions, which can be gauged on the basis of the Mg II/Mg I line pair (280.270 nm/285.213 nm). If that ratio of intensities is above 6.0 in an aqueous solution, the ICP is said to be robust and is less susceptible to matrix interferences; a ratio of about 10.0 is generally sought. Note that the term robust conditions is unrelated to robustness as applied to analytical method validation. Operation of an instrument with an Mg II/Mg I ratio greater than 6.0 is not mandated, but is suggested as a means of optimizing instrument parameters in many circumstances.
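As a minimal illustration (the function name and return convention are assumptions, not part of the official text), the robustness gauge described above amounts to:

```python
def icp_robustness(mg_ii_intensity, mg_i_intensity, threshold=6.0):
    """Gauge plasma robustness from the Mg II 280.270 nm / Mg I
    285.213 nm intensity ratio measured in an aqueous solution.
    A ratio above the threshold (6.0 here; about 10 is generally
    sought) indicates robust conditions."""
    ratio = mg_ii_intensity / mg_i_intensity
    return ratio, ratio > threshold
```

A ratio at or below the threshold suggests re-optimizing forward power, gas flows, or viewing position before proceeding.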
The analysis of the Group I elements can be an exception to this strategy. When atomic ions are formed from elements in this group, they assume a noble gas electron configuration, with correspondingly high excitation energy. Because the first excited state of these ions is extremely high, few are excited, so emission intensity is correspondingly low. This situation can be improved by reducing the fractional ionization, which can in turn be achieved by using lower forward power settings in combination with adjusted viewing height or nebulizer gas flow, or by adding an ionization suppression agent to the samples and standards.
3 Payling R, Larkins P. Optical Emission Lines of the Elements. New York: Wiley; 2000.
4 Harrison GR. Massachusetts Institute of Technology Wavelength Tables [also referred to as MIT Wavelength Tables]. Cambridge, MA: MIT Press; 1969.
5 Winge RK, Fassel VA, Peterson VJ, Floyd MA. Inductively Coupled Plasma Atomic Emission Spectroscopy: An Atlas of Spectral Information. New York: Elsevier; 1985.
6 Boumans PWJM. Spectrochim Acta B. 1981;36B:169.
7 Boumans PWJM. Line Coincidence Tables for Inductively Coupled Plasma Atomic Emission Spectrometry. 2nd ed. Oxford, UK: Pergamon; 1984.
When organic solvents are used, it is often necessary to use a higher forward power setting, higher intermediate and outer gas flows, and a lower nebulizer gas flow than would be employed for aqueous solutions. It may also be necessary to bleed small amounts of oxygen into the torch to prevent carbon buildup.
Calibration
The wavelength accuracy for ICP–AES detection must comply with the manufacturer's applicable operating procedures. Be-
cause of the inherent differences among the types of instruments available, there is no general system suitability procedure
that can be employed. Calibration routines recommended by the instrument manufacturer for a given ICP–AES instrument
should be followed. These might include, but are not limited to, use of a multi-element wavelength calibration with a refer-
ence solution, internal mercury (Hg) wavelength calibration, and peak search. The analyst should perform system checks in
accordance with the manufacturer's recommendations.
Standardization
The instrument must be standardized for quantification at time of use. However, because ICP–AES is a technique generally
considered to be linear over a range of 6 to 8 orders of magnitude, it is not always necessary to continually demonstrate linear-
ity by the use of a standard curve composed of multiple standards. Once a method has been developed and is in routine use,
it is possible to calibrate with a blank and a single standard. One-point standardizations are suitable for conducting limit tests
on production materials and final products if the methodology has been rigorously validated for sufficient specificity, sensitivi-
ty, linearity, accuracy, precision, ruggedness, and robustness. The use of a single-point standardization is also acceptable for
qualitative ICP–AES analyses, where the purpose of the experiment is to confirm the presence or absence of elements without
the requirement of an accurate quantification.
An appropriate blank solution and standards that bracket the expected range of the sample concentrations should be as-
sayed and the detector response plotted as a function of analyte concentration, as in the case where the concentration of a
known component is being determined within a specified tolerance. However, it is not always possible to employ a bracketing standard when an analysis is performed at or near the detection limit; omitting the bracketing standard is acceptable for analyses conducted to demonstrate the absence or removal of elements below a specified limit. The number and concentrations of standard solutions used should be based on the purpose of the quantification, the analyte in question, the desired
sensitivity, and the sample matrix. Regression analysis of the standard plot should be employed to evaluate the linearity of de-
tector response, and individual monographs may set criteria for the residual error of the regression line. Optimally, a correla-
tion coefficient of not less than 0.99, or as indicated in the individual monograph, should be demonstrated for the working
curve. Here, too, however, the nature of the sample matrix, the analyte(s), the desired sensitivity, and the type of instrumenta-
tion available may dictate a correlation coefficient lower than 0.99. The analyst should use caution when proceeding with such
an analysis, and should employ additional working standards.
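The working-curve evaluation described above can be sketched as a simple least-squares fit with a correlation check; the standard concentrations and detector responses below are hypothetical, and only the 0.99 criterion is taken from the text.

```python
# Sketch of a working-curve linearity check and inverse prediction.
# Standards and intensities are hypothetical example data.
import statistics

def working_curve(conc, resp):
    """Least-squares slope, intercept, and correlation coefficient r."""
    mx, my = statistics.fmean(conc), statistics.fmean(resp)
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, resp))
    syy = sum((y - my) ** 2 for y in resp)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

# Hypothetical bracketing standards (µg/mL) and responses (counts/s)
conc = [0.0, 0.5, 1.0, 2.0, 5.0]
resp = [12, 5060, 10110, 20150, 50480]
slope, intercept, r = working_curve(conc, resp)
assert r >= 0.99, "working curve fails the linearity criterion"
# Inverse prediction: sample concentration from its measured response
sample_conc = (30250 - intercept) / slope
```

For a method in routine use, the same fit collapses to a blank plus a single standard, as the text notes.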
To demonstrate the stability of the system's initial standardization, a solution used in the initial standard curve must be reas-
sayed as a check standard at appropriate intervals throughout the analysis of the sample set. The reassayed standard should
agree with its expected value to within ±10%, or as specified in an individual monograph, for single-element analyses when
analytical wavelengths are between 200 and 500 nm, or concentrations are >1 µg per mL. The reassayed standard should agree with its expected value to within ±20%, or as specified in an individual monograph, for multi-element analyses, when analytical wavelengths are <200 nm or >500 nm, or at concentrations of <1 µg per mL. In cases where an individual monograph provides different guidance regarding the reassayed check standard, the requirements of the monograph take precedence.
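These reassay tolerances amount to a simple percentage-difference test; the measured and expected values below are hypothetical.

```python
# Illustrative check-standard evaluation for the tolerances above
# (±10% single-element, ±20% multi-element); values are hypothetical.
def check_standard_ok(measured, expected, tolerance_pct):
    """True if the reassayed check standard agrees with its expected
    value to within the stated percentage."""
    return abs(measured - expected) / expected * 100.0 <= tolerance_pct

single_ok = check_standard_ok(1.04, 1.00, 10)   # 4% drift: passes
multi_ok = check_standard_ok(0.85, 1.00, 20)    # 15% drift: passes
drifted = check_standard_ok(0.87, 1.00, 10)     # 13% drift: fails
```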
Procedure
Follow the procedure as directed in the individual monograph for the instrumental parameters. Because of differences in
manufacturers' equipment configurations, the manufacturer's suggested default conditions may be used and modified as nee-
ded. The specification of definitive parameters in a monograph does not preclude the use of other suitable operating condi-
tions, and adjustments of operating conditions may be necessary. Alternative conditions must be supported by suitable valida-
tion data, and the conditions in the monograph will take precedence for official purposes. Data collected from a single sample
introduction are treated as a single result. This result might be the average of data collected from replicate sequential readings
from a single solution introduction of the appropriate standard or sample solution. Sample concentrations are calculated ver-
sus the working curve generated by plotting the detector response versus the concentration of the analyte in the standard
solutions. This calculation is often performed directly by the instrument.
ICP–MS
When an inductively coupled plasma uses a mass spectral detection system, the technique is referred to as inductively coupled plasma–mass spectrometry (ICP–MS). In this technique, analytes are detected directly at their atomic masses. Because analytes must carry a charge to be detected in ICP–MS, the method relies on the ability of the plasma source to both atomize and ionize sample constituents. As is the case with ICP–AES, a wide variety of ICP–MS instrumentation systems are available.
The systems most commonly in use are quadrupole-based. Time-of-flight ICP–MS is gaining interest and, although not yet in widespread use, may see greater use in the future. High-resolution sector field instruments are also available.
Regardless of instrument design or configuration, ICP–MS provides both a qualitative and a quantitative measurement of the
components of the sample. Ions are generated from the analyte atoms by the plasma. The analyte ions are then extracted
from the atmospheric-pressure plasma through a sampling cone into a lower-pressure zone, ordinarily held at a pressure near
1 Torr. In this extraction process, the sampled plasma gases, including the analyte species, form a supersonic beam, which
dictates many of the properties of the resulting analyte ions. A skimmer cone, located behind the sampling cone, “skims” the
supersonic beam of ions as they emerge from the sampling cone. Behind the skimmer cone is a lower-pressure zone, often
held near a milliTorr. Lastly, the skimmed ions pass a third-stage orifice to enter a zone held near a microTorr, where they
encounter ion optics and are passed into the mass spectrometer. The mass spectrometer separates the ions according to their
mass-to-charge (m/z) ratios. The ICP–MS has a mass range up to 240 atomic mass units (amu). Depending on the equipment
configuration, analyte adducts can form with diluents, with argon, or with their decomposition products. Also formed are ox-
ides and multiply-charged analyte ions, which can increase the complexity of the resulting mass spectra. Interferences can be
minimized by appropriate optimization of operational parameters, including gas flows (central, intermediate, and outer gas
flow rates), sample-solution flow, RF power, extraction-lens voltage, etc., or by the use of collision or reaction cells, or cool
plasma operation, if available on a given instrument. Unless a laboratory is generating or examining isotopes that do not natu-
rally occur, a list of naturally occurring isotopes will provide the analyst with acceptable isotopes for analytical purposes. Iso-
topic patterns also serve as an aid to element identification and confirmation. Additionally, tables of commonly found interfer-
ences and polyatomic isobaric interferences and correction factors can be used.
ICP–MS generally offers considerably lower (better) detection limits than ICP–AES, largely because of the extremely low
background that it generates. This ability is a major advantage of ICP–MS for determination of very low analyte concentrations
or when elimination of matrix interferences is required. In the latter case, some interferences can be avoided simply by addi-
tional dilution of the sample solution. In some applications, analytes can be detected below the parts per trillion (ppt) level
using ICP–MS. As a general rule, ICP–MS as a technique requires that samples contain significantly less total dissolved solids
than does ICP–AES.
The selection of the analytical mass to use is critical to the success of an ICP–MS analysis, regardless of instrument design.
Though some masses are often considered to be the primary ones, because of their high natural abundance, an alternative
mass for a given element is often used to avoid spectral overlaps (isobaric interferences). Selection of an analytical mass must
always be considered in the context of the sample matrix, the type of instrument being used, and the concentrations to be
measured. Analysts might choose to start with masses recommended by the manufacturer of their particular instrument and
select alternate masses based on manufacturer's recommendations or published tables of naturally occurring isotopes.8
Optimization of an ICP–MS method is also highly dependent on the plasma parameters and means of sample introduction.
Forward power, gas flow rates, and torch position may all be optimized to provide the best signal. When organic solvents are
used, it is often necessary to use a higher forward power setting and a lower nebulizer flow rate than would be used for aque-
ous solutions. Additionally, when organic solvents are used, it might be necessary to introduce small amounts of oxygen into
the central or intermediate gas to prevent carbon buildup in the torch or on the sampler cone orifice. The use of a platinum-
tipped sampling or skimmer cone may also be required in order to reduce cone degradation with some organic solvents.
Calibration
The mass spectral accuracy for ICP–MS detection must be in accordance with the applicable operating procedures. Because
of the inherent differences between the types of instruments available, there is no general system suitability procedure that can
be employed. Analysts should refer to the tests recommended by the instrument manufacturer for a given ICP–MS instrument.
These may include, but are not limited to, tuning on a reference mass or masses, peak search, and mass calibration. The ana-
lyst should perform system checks recommended by the instrument manufacturer.
Standardization
The instrument must be standardized for quantification at the time of use. Because the response (signal vs. concentration) of
ICP–MS is generally considered to be linear over a range of 6 to 8 orders of magnitude, it is not always necessary to continually
8 Horlick G, Montaser A. Analytical characteristics of ICPMS. In: Montaser A, Editor. Inductively Coupled Plasma Mass Spectrometry. New York: Wiley-VCH; 1998:516–
518.
Official text. Reprinted from First Supplement to USP38-NF33.
DSC Physical Tests / á730ñ Plasma Spectrochemistry 281
demonstrate linearity by the use of a working curve. Once a method has been developed and is in routine use, it is common
practice to calibrate with a blank and a single standard. One-point standardizations are suitable for conducting limit tests on
production materials and final products, provided that the methodology has been rigorously validated for sufficient specificity,
sensitivity, linearity, accuracy, precision, ruggedness, and robustness. An appropriate blank solution and standards that bracket
the expected range of the sample concentrations should be assayed and the detector response plotted as a function of analyte
concentration. The number and concentration of standard solutions used should be based on the analyte in question, the ex-
pected concentrations, and the sample matrix, and should be left to the discretion of the analyst. Optimally, a correlation coef-
ficient of not less than 0.99, or as indicated in the individual monograph, should be demonstrated for the working standard
curve. Here, too, however, the nature of the sample matrix, the analyte, the desired sensitivity, and the type of instrumenta-
tion available might dictate a correlation coefficient lower than 0.99. The analyst should use caution when proceeding with
such an analysis and should employ additional working standards.
To demonstrate the stability of the system since initial standardization, a solution used in the initial standard curve must be
reassayed as a check standard at appropriate intervals throughout the analysis of the sample set. Appropriate intervals may be
established as occurring after every fifth or tenth sample, or as deemed adequate by the analyst, on the basis of the analysis
being performed. The reassayed standard should agree with its expected value to within ±10% for single-element analyses
when analytical masses are free of interferences and when concentrations are >1 ng per mL. The reassayed standard should
agree with its expected value to within ±20% for multi-element analyses, or when concentrations are <1 ng per mL. In cases
where an individual monograph provides different guidance regarding the reassayed check standard, the requirements of the
monograph take precedence.
The method of standard additions should be employed in situations where matrix interferences are expected or suspected.
This method involves adding a known concentration of the analyte element to the sample solution at no fewer than two con-
centration levels. The instrument response is plotted against the concentration of the added analyte element, and a linear re-
gression line is drawn through the data points. The absolute value of the x-intercept multiplied by any dilution factor is the
concentration of the analyte in the sample.
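The standard-additions calculation just described can be sketched as follows; the spike levels and instrument responses are hypothetical.

```python
# Sketch of the method of standard additions: fit response vs. added
# concentration; |x-intercept| times any dilution factor is the
# analyte concentration in the sample. Example data are hypothetical.
def standard_additions_conc(added, resp, dilution=1.0):
    """Analyte concentration from a standard-additions experiment."""
    n = len(added)
    mx = sum(added) / n
    my = sum(resp) / n
    sxx = sum((x - mx) ** 2 for x in added)
    sxy = sum((x - mx) * (y - my) for x, y in zip(added, resp))
    slope = sxy / sxx
    intercept = my - slope * mx
    return abs(-intercept / slope) * dilution

# Unspiked sample plus two addition levels (ng/mL), responses in counts/s
added = [0.0, 2.0, 4.0]
resp = [1500, 3500, 5500]
conc = standard_additions_conc(added, resp)  # -> 1.5 ng/mL
```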
Procedure
Follow the procedure as directed in the individual monograph for the detection mode and instrument parameters. The spec-
ification of definitive parameters in a monograph does not preclude the use of other suitable operating conditions, and adjust-
ments of operating conditions may be necessary. Alternative conditions must be supported by suitable validation data, and the
conditions in the monograph will take precedence for official purposes. Because of differences in manufacturers' equipment
configurations, the analyst may wish to begin with the manufacturer's suggested default conditions and modify them as nee-
ded. Data collected from a single sample introduction are treated as a single result. Data collected from replicate sequential
readings from a single introduction of the appropriate standard or sample solutions are averaged as a single result. Sample
concentrations are calculated versus the working curve generated by plotting the detector response versus the concentration
of the analyte in the standard solutions. With modern instruments, this calculation is often performed by the instrument.
GLOSSARY
MULTIPLY-CHARGED IONS: Atoms that, when subjected to the high-ionization temperature of the ICP, can form doubly or triply
charged ions (X++, X+++, etc.). When detected by MS, the apparent mass of these ions will be 1/2 or 1/3 that of the atomic mass.
NEBULIZER: Used to form a consistent sample aerosol that mixes with the argon gas, which is subsequently sent into the ICP.
OUTER (OR COOLANT OR PLASMA) GAS: The main gas supply for the plasma.
PLASMA GAS: See Outer (or Coolant or Plasma) Gas.
RADIAL VIEWING: A configuration of the plasma for AES in which the plasma is viewed orthogonal to the spectrometer optic
path. Also called “side-on viewing.” See also Lateral Viewing.
REACTION CELL: Similar to Collision Cell, but operating on a different principle. Designed to reduce or eliminate spectral interfer-
ences.
SAMPLING CONE: A metal cone (usually nickel-, aluminum-, or platinum-tipped) with a small opening, through which ionized
sample material flows after leaving the plasma.
SEQUENTIAL: A type of detector configuration for AES or MS in which discrete emission lines or isotopic peaks are observed by
scanning or hopping across the spectral range by means of a monochromator or scanning mass spectrometer.
SIMULTANEOUS: A type of detector configuration for AES or MS in which all selected emission lines or isotopic peaks are ob-
served at the same time by using a polychromator or simultaneous mass spectrometer, offering increased analysis speed for
analyses of multi-element samples.
SKIMMER CONE: A metal cone through which ionized sample flows after leaving the sampling cone and before entering the
high-vacuum region of an ICP–MS.
STANDARD ADDITIONS: A method used to determine the actual analyte concentration in a sample when viscosity or matrix ef-
fects might cause erroneous results.
TORCH: A series of three concentric tubes, usually manufactured from quartz, in which the ICP is formed.
The procedure set forth in this chapter determines the amount of volatile matter of any kind that is driven off under the
conditions specified. For substances appearing to contain water as the only volatile constituent, the procedure given in the
chapter, Water Determination á921ñ, is appropriate, and is specified in the individual monograph.
Unless otherwise directed in the individual monograph, conduct the determination on a 1- to 2-g test specimen. Mix the
substance to be tested and, if it is in the form of large particles, reduce the particle size to about 2 mm by quickly crushing
before weighing out the test specimen. Tare an appropriate glass-stoppered, shallow weighing bottle that has been dried for
about 30 minutes under the same conditions to be employed in the determination and cooled to room temperature in a des-
iccator. Put the test specimen in the bottle, replace the cover, and accurately weigh the bottle and the contents. By gentle,
sidewise shaking, distribute the test specimen as evenly as practicable to a depth of about 5 mm generally, and not more than
10 mm in the case of bulky materials. Place the loaded bottle in the drying chamber, removing the stopper and leaving it also
in the chamber. Dry the test specimen at the temperature and for the time specified in the monograph. [NOTE—The tempera-
ture specified in the monograph is to be regarded as being within the range of ±2° of the stated figure.] When “dry to con-
stant weight” is specified in a monograph, drying shall be continued until two consecutive weighings do not differ by more
than 0.50 mg per g of substance taken, the second weighing following an additional hour of drying. Upon opening the cham-
ber, close the bottle promptly, and allow it to come to room temperature in a desiccator before weighing.
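The "dry to constant weight" criterion reduces to a comparison of consecutive weighings; the weights below are hypothetical.

```python
# Illustrative "dry to constant weight" check: two consecutive weighings
# must not differ by more than 0.50 mg per g of substance taken.
# Weights are hypothetical example values.
def at_constant_weight(prev_wt_g, curr_wt_g, specimen_g):
    """True if the change between weighings is within 0.50 mg/g."""
    diff_mg = abs(prev_wt_g - curr_wt_g) * 1000.0
    return diff_mg <= 0.50 * specimen_g

# Hypothetical 1.5-g specimen: allowed change is 0.75 mg
done = at_constant_weight(31.2486, 31.2480, 1.5)             # 0.6 mg change
keep_drying = not at_constant_weight(31.2495, 31.2480, 1.5)  # 1.5 mg change
```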
If the substance melts at a lower temperature than that specified for the determination of Loss on Drying, maintain the bottle
with its contents for 1 to 2 hours at a temperature 5° to 10° below the melting temperature, then dry at the specified temper-
ature.
Where capsules are to be tested, use a portion of the mixed contents of not fewer than 4 capsules.
Where tablets are to be tested, use powder from not fewer than 4 tablets.
Where the individual monograph directs that loss on drying be determined by thermogravimetric analysis, a sensitive elec-
trobalance is to be used.
Where drying in vacuum over a desiccant is directed in the individual monograph, a vacuum desiccator or a vacuum drying
pistol, or other suitable vacuum drying apparatus, is to be used.
Where drying in a desiccator is specified, exercise particular care to ensure that the desiccant is kept fully effective by fre-
quent replacement.
Where drying in a capillary-stoppered bottle1 in vacuum is directed in the individual monograph, use a bottle or tube fitted
with a stopper having a 225 ± 25-µm diameter capillary, and maintain the heating chamber at a pressure of 5 mm or less of
mercury. At the end of the heating period, admit dry air to the heating chamber, remove the bottle, and with the capillary
stopper still in place allow it to cool to room temperature in a desiccator before weighing.
1 Available as an “antibiotic moisture content flask” from Kimble-Kontes, 1022 Spruce St., Vineland, NJ 08362-1502.
This procedure is provided for the purpose of determining the percentage of test material that is volatilized and driven off
under the conditions specified. The procedure, as generally applied, is nondestructive to the substance under test; however,
the substance may be converted to another form such as an anhydride.
Perform the test on finely powdered material, and break up lumps, if necessary, with the aid of a mortar and pestle before
weighing the specimen. Weigh the specimen to be tested without further treatment, unless a preliminary drying at a lower
temperature, or other special pretreatment, is specified in the individual monograph. Unless other equipment is designated in
the individual monograph, conduct the ignition in a suitable muffle furnace or oven that is capable of maintaining a tempera-
ture within 25° of that required for the test, and use a suitable crucible, complete with cover, previously ignited for 1 hour at
the temperature specified for the test, cooled in a desiccator, and accurately weighed.
Unless otherwise directed in the individual monograph, transfer to the tared crucible an accurately weighed quantity, in g,
of the substance to be tested, about equal to that calculated by the formula:
10/L
in which L is the limit (or the mean value of the limits) for Loss on ignition, in percentage. Ignite the loaded uncovered crucible, and the cover, at the temperature (±25°) and for the period of time designated in the individual monograph. Ignite for successive
1-hour periods where ignition to constant weight is indicated. Upon completion of each ignition, cover the crucible, and allow
it to cool in a desiccator to room temperature before weighing.
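The 10/L sample-size rule and the loss-on-ignition result can be sketched as follows; the limit and crucible weights are hypothetical.

```python
# Sketch of the 10/L sample-size rule and the loss-on-ignition
# calculation; the limit and specimen weights are hypothetical.
def sample_mass_g(limit_pct):
    """Quantity of substance to ignite, in g, from the formula 10/L."""
    return 10.0 / limit_pct

def loss_on_ignition_pct(initial_g, ignited_g):
    """Percentage of the test material driven off on ignition."""
    return (initial_g - ignited_g) / initial_g * 100.0

mass = sample_mass_g(5.0)                  # LOI limit 5.0% -> weigh about 2 g
loi = loss_on_ignition_pct(2.000, 1.910)   # 4.5% loss, within the 5.0% limit
```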
INTRODUCTION
X-ray fluorescence (XRF) spectrometry is an instrumental method based on the measurement of characteristic X-ray photons
caused by the excitation of atomic inner-shell electrons by a primary X-ray source. The XRF method can be used for both qual-
itative and quantitative analysis of liquids, powders, and solid materials. The X-rays produced by an X-ray tube include charac-
teristic lines that correspond to the anode material and a continuum known as Bremsstrahlung radiation. Both types of X-rays
can be used to excite atoms and thus induce X-rays. XRF instrumentation can be divided into one of two categories: Wave-
length Dispersive X-ray Fluorescence (WDXRF) and Energy Dispersive X-ray Fluorescence (EDXRF). The main factor distinguish-
ing these technologies is the method used to separate the spectrum emitted by the atoms in the sample. The energy of the X-
ray photon is characteristic for a given electron transition in an atom and is qualitative in nature. The intensity of the emitted
radiation is indicative of the number of atoms in the sample and constitutes the quantitative nature of the method.
Installation Qualification
See also USP general information chapter Analytical Instrument Qualification á1058ñ.
Operational Qualification
The purpose of operational qualification (OQ) is to verify that the system operates within target tolerances using appropriate
samples with known spectral properties. OQ is a check of the key operating parameters and should be performed following
installation, repairs, or major maintenance that can affect the performance of the instruments. Note that all calibration samples
must be handled with cotton or nitrile gloves and must be stored in sealed plastic containers. Alternatively, they may be fixed
in the instrument.
The OQ tests and specifications in the following sections are typical examples only (see Tables 1 and 3). Other tests and
standards can be used to establish tolerances for these purposes. The instrument vendor often makes samples and test param-
eters available as part of the IQ package.
The Al–Cu energy calibration sample is a disc of Al–Cu alloy (EN AW-AlCu6BiPb; Alloy 2011, ASTM B211) that has been se-
lected for use in EDXRF spectrometers. This alloy is generally available, is resistant to corrosion, and provides adequate intensi-
ties for both Al Kα and Cu Kα. These characteristic lines cover the typical energies used for XRF analysis. Also, sufficient information can be obtained from spectra recorded on this material to assess detector resolution. A more complete compositional
specification for this calibration sample is given in Table 2.
Table 2. Specification of the Concentration of the Alloying Elements in Al–Cu Alloy (the Remaining Balance Is Aluminum)
Element    Concentration Limits (in % by Weight)
Cu 5–6
Zn 0.3 maximum
Fe 0.7 maximum
Bi 0.2–0.6
Pb 0.2–0.6
Si 0.4 maximum
When the energy range of the analytical lines for which the spectrometer will be used includes energies above 50 keV, a
pure W metal (99.5% minimum) should be used instead of the Al–Cu alloy. This metal is stable, is generally available, and
provides well-defined characteristic lines at 8.40 keV (L-lines) and 59.3 keV (K-lines).
Table 3. WDXRF OQ Specifications
Test: Peak angle
Procedure: Perform according to the manufacturer's procedure. Repeat for each crystal.
Acceptance Criteria: The angle corresponding to the peak maximum should differ by less than 0.10 degree 2θ from the angle measured at IQ.
Test: Detector resolution
Procedure: Measure the full width at half maximum at specified wavelengths, under the same measurement conditions as at the time of IQ. Repeat for each detector available.
Acceptance Criteria: NMT 20% change
Test: Count rate
Procedure: Measure the count rate from the specified monitor specimen at a specified wavelength, under the same measurement conditions as at the time of IQ. Repeat for each detector available.
Acceptance Criteria: <10% change from initial measurements at IQ
Use Inconel 625 (Special Metals Corporation, New Hartford, NY) as a sample for WDXRF spectrometers. Other designations
for this alloy are UNS N06625, DIN 2.4856, ASTM B443, ASME SB-443, and AMS 5599. This is a nickel-based alloy that in-
cludes chromium, molybdenum, and niobium as the most important alloying elements. A more complete compositional speci-
fication is given in Table 4.
Table 4. Elemental Concentrations in Inconel 625
Element    Concentration Limits (in % by Weight)
Ni 58.0 minimum
Cr 20.0–23.0
Fe 5.0 maximum
Mo 8.0–10.0
Nba 3.15–4.15
a This could include Ta.
A polished piece of Inconel 625 should never require resurfacing when stored and used appropriately.
The radiation characteristics of nickel can be detected by all detectors that are used in sequential WDXRF, whether they are
flow proportional, sealed gas, or scintillation detectors. In combination with Mo K (and L) radiation, the tests regarding peak
position and detector response of WDXRF spectrometers can be completed readily. Table 5 includes the typical wavelengths or
energies that are used for XRF analysis.
Table 5. Energies and Wavelengthsa for Al, Ni, Cu, Mo, and W
Al Ni Cu Mo W
Kα1 Line transition K–L2,3 K–L2,3 K–L2,3 K–L3 K–L3
Kα1 Energy (eV) 1487 7473 8041 17479 59310
Kα1 Wavelength (Å) 8.340 1.659 1.542 0.709 0.209
Lα1 Line transition N/Ab N/A N/A L3–M5 L3–M5
Lα1 Energy (eV) N/A N/A N/A 2293.2 8398.2
Lα1 Wavelength (Å) N/A N/A N/A 5.4066 1.47632
a From: Deslattes RD, Kessler EG, Indelicato P, et al. X-ray transition energies: new approach to a comprehensive evaluation. Rev Mod Phys. 2003;75(1):35–99. For Al, Ni, and Cu, the Kα1 and Kα2 energies from Table V were averaged with 2:1 weighting. Wavelength conversion uses the hc/E value from page 94. Values for Mo and W are taken from Table VI.
b N/A = not applicable.
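The energy and wavelength rows of the table are related by λ = hc/E, as its footnote indicates; a quick sketch using hc ≈ 12398.42 eV·Å (a standard value, assumed here rather than quoted from the chapter) reproduces the tabulated entries.

```python
# Energy-to-wavelength conversion underlying the table; hc in eV·Å is
# an assumed standard constant, not a value taken from this chapter.
HC_EV_ANGSTROM = 12398.42

def wavelength_angstrom(energy_ev):
    """Wavelength in Å for a photon of the given energy in eV."""
    return HC_EV_ANGSTROM / energy_ev

lam_cu = wavelength_angstrom(8041)   # Cu Kα1: ~1.542 Å, as tabulated
lam_w = wavelength_angstrom(59310)   # W Kα1: ~0.209 Å, as tabulated
```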
Performance Qualification
The purpose of performance qualification (PQ) is to determine that the instrument is capable of meeting the user's require-
ments for all critical-to-quality measurements.
Depending on typical use, the specifications for PQ may be different from the manufacturer's specifications. Method-specific
PQ tests, also known as system suitability tests, may be used in lieu of PQ requirements for validated methods.
Specific procedures, acceptance criteria, and time intervals for characterizing XRF performance depend on the instrument
and its intended application. Demonstrating stable instrument performance over extended periods of time provides some as-
surance that reliable measurements can be obtained from sample spectra using previously validated XRF experiments.
PROCEDURE
General Recommendations
Depending on the method used, analysts should check all reagents and materials for suitability and freedom from contamination before use. The analysis of a ubiquitous element often requires the use of the purest grade of reagent or material available. Sample
holders and support windows should be appropriate for the analysis and the instrument configuration. Analysts should evalu-
ate the cleaning of equipment used to prepare samples for XRF analysis in order to avoid cross-contamination of samples.
SAMPLES
Liquids
Liquid samples can be introduced directly to the XRF spectrometer provided that the solution consists of a single phase and
has sufficiently low volatility. Analysis of liquid samples requires use of a special liquid sample holder and a commercially availa-
ble support window composed of a suitable polymer film. Alternatively, liquid samples can be transferred onto the surface of a
disk and dried before analysis. The experiment typically is conducted using a purge gas. The liquid sample can be spiked di-
rectly with solution standards at appropriate concentrations to facilitate accuracy, precision, and specificity tests as required for
method validation.
Powders
Prepared powders may be measured directly in a liquid sample holder. Alternatively, they may be pressed into pellets. If the
powder has poor self-binding properties, it may require a binder such as a wax or ethyl cellulose.
Powders also can be prepared for XRF analysis by fusing the sample material into a glass using a flux, typically sodium tetraborate, lithium tetraborate, or lithium metaborate. Because the temperatures required to melt the flux and dissolve the sample are relatively high (800°–1300°), this procedure is not suitable for the analysis of volatile elements such as mercury and
arsenic.
Powder samples can be mixed with appropriate quantities of a certified reference material to facilitate accuracy, precision,
and specificity tests as required for method validation. Alternatively, powder samples can be spiked with appropriate quantities
of solution-based standards and then dried, ground if necessary, and thoroughly mixed before analysis. Standard additions can
be used in instances when physical or chemical properties of the powder may introduce an analyte response bias.
Standards
Appropriate reference materials that are traceable to the National Institute of Standards and Technology, or equivalent, can
be used in the preparation of XRF standards.
Analysis
For the instrumental parameters (if applicable) follow the procedure in the individual monograph. Because of differences in
manufacturers' equipment configurations, analysts can use the manufacturer's suggested default conditions. At the time of
use, the instrument must be standardized for the intended use. Analysts should use calibration standards to bracket the expected range of typical analyte concentrations. When an analysis is performed at or near the detection limit, a bracketing standard cannot always be used; omitting it is an acceptable strategy for limit tests. Analysts should use regression analysis of the
standard plot to evaluate the linearity of detector response, and individual monographs may set criteria for the residual error of
the regression line.
To demonstrate the stability of the system's initial standardization, at appropriate intervals throughout their tests on the
sample set analysts must re-assay the calibration standard used in the initial standard curve as a check standard. The use of an
independently prepared standard also is acceptable. Unless otherwise indicated in the individual monograph, the re-assayed
standard should agree with its expected value to within ±2% for Assay or ±20% for an impurity analysis.
Sample concentrations are calculated versus the working curve generated by plotting the instrument response versus the
concentration of the analyte in the standard solutions.
Current Good Manufacturing Practice regulations [21 CFR 211.194(a)(2)] indicate that users of analytical methods described in USP–NF are not required to validate the accuracy and reliability of these methods but rather must verify their suitability under actual conditions of use. In this context, and according to these regulations, validation is required when an XRF procedure is used to test a nonofficial article or when this procedure is used as an alternative to the official procedure for testing an official article (see USP–NF General Notices 6.30). On the other hand, verification must be performed the first time an official article is tested using a USP procedure (for informational purposes only, refer to Verification of Compendial Procedures ⟨1226⟩).
Validation
The objective of an XRF method validation is to demonstrate that the measurement procedure is suitable for its intended
purpose, including quantitative determination of the main component in a drug substance or a drug product (Category I as-
says), quantitative determination of impurities or limit tests (Category II), and identification tests (Category IV). Depending on
the category of the test (see Table 2 in Validation of Compendial Procedures ⟨1225⟩), the analytical method validation process for
XRF requires the testing of linearity, range, accuracy, specificity, precision, quantitation limit, and robustness.
Performance characteristics that demonstrate the suitability of an XRF method are similar to those required for any analytical
procedure. A discussion of the applicable general principles is found in ⟨1225⟩. Specific acceptance criteria for each validation
parameter must be consistent with the intended use of the method. The samples for validation should be independent of the
calibration set.
ACCURACY
For Category I assays or Category II tests, analysts can determine accuracy by conducting recovery studies using the appropriate matrix spiked with known concentrations of the elements. An appropriate certified standard material provided by USP can also be used. Comparing assay results obtained using the XRF method under validation with those from an established analytical method is also acceptable. When analysts use the method of standard additions, accuracy assessments are based on the final intercept concentration, not the recovery calculated from the individual standard additions.
Acceptance Criteria: 98.0%–102.0% recovery for drug substance and drug product assays; 70.0%–150.0% recovery for impurity analysis. These acceptance criteria should be met throughout the validated range.
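For the method of standard additions mentioned above, the "final intercept concentration" is the magnitude of the x-intercept of a response-versus-added-concentration fit. A minimal sketch with illustrative spike levels and responses:

```python
from statistics import mean

def standard_additions_concentration(added, resp):
    """Extrapolated analyte concentration from the method of standard
    additions: fit response vs. added concentration by least squares and
    take the magnitude of the x-intercept (intercept/slope)."""
    abar, rbar = mean(added), mean(resp)
    slope = sum((a - abar) * (r - rbar) for a, r in zip(added, resp)) / sum(
        (a - abar) ** 2 for a in added
    )
    intercept = rbar - slope * abar
    return intercept / slope

# Spikes of 0, 1, 2, and 3 concentration units added to the sample (illustrative)
added = [0.0, 1.0, 2.0, 3.0]
resp = [40.0, 60.0, 80.0, 100.0]
print(standard_additions_concentration(added, resp))  # -> 2.0
```

Accuracy would then be judged on this extrapolated value, not on the recovery of each individual spike.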
PRECISION
Repeatability
Analysts should assess the analytical method by measuring the concentrations of six separate standards at 100% of the assay
test concentration. Alternatively, they can measure the concentrations of three replicates of three separate samples at different
concentrations. The three concentrations should be close enough so that the repeatability is constant across the concentration
range. In this case, the repeatability at the three concentrations is pooled for comparison to the acceptance criteria.
Acceptance Criteria: The relative standard deviation is NMT 1.0% for drug substance assay, NMT 2.0% for drug product
assay, and NMT 20.0% for impurity analysis.
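The pooling described above can be read as a degrees-of-freedom-weighted pooling of the per-level percent RSDs; the chapter does not prescribe a formula, so the sketch below is one reasonable interpretation with illustrative data:

```python
from math import sqrt
from statistics import mean, stdev

def pooled_rsd(groups):
    """Pool the per-level percent RSDs from replicate measurements at
    several concentrations, weighting by degrees of freedom. This is one
    reading of 'pooled' repeatability, not an official formula."""
    rsds = [100.0 * stdev(g) / mean(g) for g in groups]
    dofs = [len(g) - 1 for g in groups]
    return sqrt(sum(d * r * r for d, r in zip(dofs, rsds)) / sum(dofs))

# Three replicates at each of three concentrations (percent of test
# concentration; values are illustrative)
levels = [[79.8, 80.2, 80.0], [99.5, 100.5, 100.0], [119.4, 120.6, 120.0]]
print(round(pooled_rsd(levels), 3))  # -> 0.433
```

The pooled value would then be compared against the applicable NMT limit above.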
Intermediate Precision
Analysts should establish the effect of random events on the method's analytical precision. Typical variables include perform-
ing the analysis on different days, using different instrumentation, or having two or more analysts perform the method. As a
minimum, any combination of at least two of these factors totaling six experiments will provide an estimation of intermediate
precision.
Acceptance Criteria: The relative standard deviation is NMT 1.0% for drug substance assay, NMT 3.0% for drug product
assay, and NMT 25.0% for impurity analysis.
SPECIFICITY
The procedure must unequivocally assess each analyte element in the presence of components that may be expected to be
present, including any matrix components.
Acceptance Criteria: Demonstrated by meeting the Accuracy requirement.
QUANTITATION LIMIT
The limit of quantitation (LOQ) can be estimated by calculating the standard deviation of NLT 6 replicate measurements of a
blank and multiplying by 10. Other suitable approaches may be used (see ⟨1225⟩). A measurement of a test sample prepared
from a representative sample matrix and spiked so that the concentration is similar to the estimated LOQ concentration must
be performed to confirm accuracy.
Acceptance Criteria: The analytical procedure should be capable of determining the analyte precisely and accurately at a
level equivalent to 50% of the specification.
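The 10-times-blank-standard-deviation estimate described above can be sketched as follows; the blank readings are illustrative, and the optional slope argument (for converting response units to concentration units) is an assumption not stated in the chapter:

```python
from statistics import stdev

def estimated_loq(blank_readings, slope=1.0):
    """LOQ estimate described above: 10 x the standard deviation of NLT 6
    replicate blank measurements, divided by the calibration slope when
    the readings are in response rather than concentration units."""
    if len(blank_readings) < 6:
        raise ValueError("NLT 6 replicate blank measurements required")
    return 10.0 * stdev(blank_readings) / slope

blanks = [0.11, 0.09, 0.10, 0.12, 0.08, 0.10]  # illustrative blank readings
print(round(estimated_loq(blanks), 4))  # -> 0.1414
```

The estimate would then be confirmed by measuring a matrix sample spiked near this concentration, as the text requires.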
LINEARITY
Analysts should demonstrate a linear relationship between the analyte concentration and corrected XRF response by prepar-
ing no fewer than five standards at concentrations that encompass the anticipated concentration of the test sample. The
standard curve then should be evaluated using appropriate statistical methods such as a least squares regression. The correla-
tion coefficient (R), y-intercept, and slope of the regression line must be determined.
For experiments that do not have a linear relationship between analyte concentration and XRF response, appropriate statisti-
cal methods must be applied to describe the analytical response.
Acceptance Criteria: R is NLT 0.995 for Category I assays and NLT 0.99 for Category II quantitative tests.
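The correlation coefficient R named in the criteria above is the Pearson coefficient of the standard curve. A minimal sketch with illustrative standards:

```python
from math import sqrt
from statistics import mean

def correlation_coefficient(x, y):
    """Pearson correlation coefficient (R) of the standard curve, to be
    compared against NLT 0.995 (Category I) or NLT 0.99 (Category II)."""
    xbar, ybar = mean(x), mean(y)
    sxy = sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
    sxx = sum((a - xbar) ** 2 for a in x)
    syy = sum((b - ybar) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Five standards with slightly noisy responses (illustrative values)
print(correlation_coefficient([1, 2, 3, 4, 5], [2.1, 3.9, 6.0, 8.2, 9.8]) > 0.995)  # -> True
```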
RANGE
The range is the interval between the upper and lower concentration (amounts) of analyte in the sample (including the up-
per and lower concentrations) for which it has been demonstrated that the analytical procedure has a suitable level of preci-
sion, accuracy, and linearity. Range is demonstrated by meeting the linearity and accuracy requirement.
Acceptance Criteria: For 100.0%-centered acceptance criteria: 80.0%–120.0%. For non-centered acceptance criteria: 10% below the lower limit of the specification to 10% above the upper limit of the specification. For content uniformity: 70.0%–130.0%. For Category II, the range requirement is 50.0%–120.0% of the acceptance criteria.
ROBUSTNESS
The reliability of an analytical measurement should be demonstrated by deliberate changes to experimental parameters. For
XRF this can include measuring the stability of the analyte under specified storage conditions.
Acceptance Criteria: The measurement of a standard or sample response following a change in experimental parameters
should differ from the same standard measured using established parameters by NMT ±2.0% for a drug product assay and
NMT ±20.0% for an impurity analysis.
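The robustness comparison above is a percent-shift check of a standard's response before and after the deliberate parameter change. A hedged sketch (function names are illustrative):

```python
def robustness_shift_percent(changed, established):
    """Percent change in a standard's response after a deliberate change
    in experimental parameters, relative to the established response."""
    return 100.0 * (changed - established) / established

def robustness_ok(changed, established, impurity=False):
    """Compare the shift against the NMT +/-2.0% (drug product assay) or
    NMT +/-20.0% (impurity analysis) criteria above."""
    limit = 20.0 if impurity else 2.0
    return abs(robustness_shift_percent(changed, established)) <= limit

print(robustness_ok(101.5, 100.0))        # 1.5% shift -> True
print(robustness_ok(85.0, 100.0, True))   # 15% shift (impurity) -> True
```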
Verification
The objective of an XRF method verification is to demonstrate that the procedure as prescribed in a specific monograph is
being executed with suitable accuracy, sensitivity, and precision. According to ⟨1226⟩, if the verification of the compendial
procedure according to the monograph is not successful, the procedure may not be suitable for use with the article under test.
It may be necessary to develop and validate an alternative procedure as allowed in USP–NF General Notices 6.30.
Although complete revalidation of a compendial XRF method is not required, verification of compendial XRF methods
should at minimum include the execution of the validation parameters for specificity, accuracy, precision, and limit of quanti-
tation, when appropriate, as indicated under Validation (above).
INTRODUCTION
Mass spectrometry (MS) is an analytical technique based on the measurement of the mass-to-charge ratio of ionic species
related to the analyte under investigation. MS can be used to determine the molecular mass and elemental composition of an
analyte as well as provide an in-depth structural elucidation of the analyte.
In addition to being recognized as a powerful structure-elucidation tool, MS is also extensively used for quantitative measurements. For additional information, see general information chapter Applications of Mass Spectrometry ⟨1736⟩, which provides
a detailed discussion of MS.
Currently available MS instrumentation offers a wide range of capabilities for qualitative and quantitative analysis, which re-
sults in a wide range of potential MS experimental approaches for a given measurement need. Because of the diversity of ap-
proaches, this chapter does not present specific procedures but instead provides experimental and system suitability informa-
tion for MS procedures.
QUALITATIVE ANALYSIS
MS is a sensitive and highly specific technique for the identification of analytes. Identification or verification of structure (i.e.,
comparison against an authentic standard) by MS is particularly powerful when used in conjunction with a separation techni-
que such as gas chromatography (GC) or high-performance liquid chromatography (HPLC). Additional degrees of specificity
can be obtained by the use of tandem mass spectrometry (MS/MS) or high-resolution mass spectrometry (HRMS), which also affords tighter tolerances on mass accuracy.
Experimental Parameters
The following MS experimental parameters should be defined for a qualitative (e.g., identification) procedure.
MASS RESOLUTION
Unit mass resolution is sufficient for most identification tests. When higher resolution is required, the resolution is specified in
the procedure, and demonstration of adequate resolution is included in the system suitability tests for the procedure.
MASS ACCURACY
A mass accuracy or agreement of ±0.50 mass units for singly charged ions from a known standard should be sufficient for
most applications. A mass accuracy or agreement of ±0.05% from a standard should be sufficient for the identification of large
molecules (above 2000 m/z) when multiply charged ions are employed for the identification test.
When higher mass accuracy is required, the required mass accuracy is specified in the procedure. A demonstration of the
mass accuracy then is included in the system suitability tests for the procedure or as part of the instrument's established per-
formance qualification (PQ) procedure discussed later in this chapter.
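One way to encode the tolerances above (±0.50 mass units for singly charged ions; ±0.05% agreement for multiply charged ions of large molecules) is a simple branch on the charge state. The function name and decision logic are an interpretation, not official text:

```python
def within_mass_tolerance(measured_mz, expected_mz, charge=1):
    """Check agreement with a known standard: +/-0.50 mass units for
    singly charged ions, +/-0.05% relative agreement for multiply
    charged ions of large molecules (illustrative encoding)."""
    if charge == 1:
        return abs(measured_mz - expected_mz) <= 0.50
    return abs(measured_mz - expected_mz) / expected_mz <= 0.0005

print(within_mass_tolerance(500.3, 500.0))              # 0.3 u off -> True
print(within_mass_tolerance(2400.5, 2401.0, charge=3))  # ~0.02% agreement -> True
```

A procedure requiring higher mass accuracy would replace these defaults with its own specified criteria.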
MASS RANGE
The mass range to be scanned is also presented in the procedure. The mass range must encompass all ions used as part of
the identification confirmation.
System Suitability
The system suitability for the MS procedure should include demonstration of adequate performance for the following exper-
imental attributes.
MASS RESOLUTION
A demonstration of the appropriate resolution is included in the system suitability tests for a procedure. The performance
test in the instrument's established PQ, executed daily or prior to the time of use, may suffice. When resolutions greater than
unit mass are required for a procedure, the system suitability test includes demonstration of adequate resolution along with
acceptance criteria.
MASS ACCURACY
A demonstration of the mass accuracy is included in the system suitability tests for the procedure or is part of the instru-
ment's established PQ procedure. A mass accuracy or agreement of ±0.50 mass units for singly charged ions from a known
standard should be sufficient for most applications. If higher mass accuracy is required for a given procedure, the appropriate
acceptance criteria are specified.
Interpretation Directions
An identification or verification experiment involves comparison of the compound of interest to an authentic standard (e.g.,
a USP Reference Standard). For this form of identification or verification, the standard and the sample are run under identical
conditions and should have results that are within acceptable experimental error for the procedure. Applications of MS for
identification purposes in a compendial monograph may incorporate the mass spectral data alone or may combine this spec-
tral identification with the chromatographic retention time if more specificity is dictated by the particular monograph identifi-
cation needs.
The procedure should also provide specific instructions that define a successful mass spectral match. If the procedure repre-
sents the only spectroscopic identification test or if no other identification tests (e.g., peptide mapping or amino acid analysis)
provide structural information, the spectrum of the standard should closely resemble that of the sample and a minimum of
three structurally relevant ions, preferably one of which is an ion representing the molecular mass of the analyte, should be
used for the comparison. In the case where only the ion representing the intact molecule is produced, the accurate mass or
MS/MS spectrum of the molecular ion may be used to strengthen the identification. If other structurally relevant identification
tests are also conducted, the comparison can be conducted with fewer ions as long as one of the ions represents the molecular
mass of the analyte. For example, the MS identification test for a protein may examine only an ion that represents the molecu-
lar mass if several other tests are conducted to confirm the protein structure.
QUANTITATIVE ANALYSIS
The sensitivity and specificity of MS also make it a suitable analytical tool for the quantification of analytes. Quantification is
particularly powerful when used in conjunction with a separation technique such as GC or HPLC. Further degrees of specificity
can be obtained by the use of MS/MS or HRMS.
Experimental Parameters
The following experimental parameters should be defined within a quantitative (e.g., assay) MS procedure.
MASS RESOLUTION
Unit mass resolution is sufficient for most quantitative tests. When higher resolution is required, the resolution is specified in
the procedure, and demonstration of adequate resolution is included in the system suitability tests for the procedure.
MASS ACCURACY
The mass accuracies listed in the previous qualitative section should be sufficient for most quantitative applications. When
higher mass accuracy is required, it is specified in the procedure. A demonstration of the mass accuracy is included in the sys-
tem suitability tests for the procedure or as part of the instrument's established PQ procedure.
MASS SELECTION
The masses to be monitored (e.g., mass range, individual masses, or MS/MS transitions) are presented in the procedure.
System Suitability
The system suitability for the MS procedure should include demonstration of adequate performance for the following exper-
imental attributes, as appropriate for the procedure.
MASS RESOLUTION
A demonstration of the appropriate resolution is included in the system suitability tests for the procedure or is part of the
instrument's established PQ procedure. When resolutions greater than unit mass are required for a procedure, the system suit-
ability test includes demonstration of adequate resolution along with acceptance criteria.
MASS ACCURACY
A demonstration of the mass accuracy is included in the system suitability tests for the procedure or is part of the instru-
ment's established PQ procedure. A mass accuracy or agreement of ±0.50 mass units for singly charged ions from a known
standard should be sufficient for most applications. If higher mass accuracy is required for a given procedure, the appropriate
acceptance criteria are specified.
PRECISION
The system suitability includes a demonstration of adequate precision. See Table 1 for maximum limits for precision. Typically, system suitability limits are set tighter than those employed in the validation experiments to ensure adequate precision at the time of use. (See also the section on Validation and Verification of Mass Spectrometry Analytical Procedures.)
LINEARITY
The system suitability includes a demonstration of adequate linearity. See Table 1 for appropriate linearity limits. (See also
the section on Validation and Verification of Mass Spectrometry Analytical Procedures.)
ACCURACY
In certain situations, quality control (or check) samples may also be appropriate for inclusion in the procedure to ensure the quality of the measurement. Typically, these quality control samples are of known analyte concentration and are prepared identically to the test samples. If used, quality control (or check) samples also serve to verify the accuracy of the method at the time of application. The procedure specifies the number and analysis order of the quality control (or check) samples needed. Acceptance criteria for calibration and quality control (or check) sample results should be aligned with the validation requirements for the application type (i.e., Category I or II) as outlined in Table 1.
QUANTITATION LIMIT
In certain applications (e.g., limits tests), it may be necessary to include demonstration of the ability to detect the analyte at
a prescribed level. For these applications, the procedure specifies the limit and success criteria (e.g., signal-to-noise ratio).
Qualification of an MS instrument can be divided into three elements: installation qualification (IQ), operational qualification (OQ), and PQ. For additional information, see Analytical Instrument Qualification ⟨1058⟩, which may be a helpful, but not mandatory, resource.
Installation Qualification
IQ provides evidence that the hardware and software are installed to accommodate safe and effective use of the instrument
at the desired location.
Operational Qualification
In OQ, an instrument's performance is characterized using standards to verify that the system operates within target specifi-
cations. The purpose of OQ is to demonstrate that instrument performance is suitable for a given application. Because so
many different approaches are available for measuring MS spectra, OQ using standards with known spectral properties is rec-
ommended. Because of the diversity of MS instrumentation, interfaces, and experimental approaches, MS instruments should
be qualified against target specifications for the intended application, not simply the specifications supplied by the manufac-
turer.
Performance Qualification
PQ helps to determine that the instrument is capable of meeting the user's requirements for all critical-to-quality measures.
PQ documentation should describe the following:
• the definition of the specific performance criteria and detailed test procedures, including test samples and instrument pa-
rameters;
• the elements that will be measured to evaluate the criteria and the predefined specifications;
• the test interval, which may be daily or time-of-use measurements;
• the use of bracketing samples or groups of samples; and
• corrective actions that will be implemented if the spectrometer does not pass the specifications.
Periodic PQ should include a subset of the OQ tests to ensure that the instrument as supplied is performing at a level that
produces data that are suitable for their intended use. Depending on typical use, the specifications for PQ may be tighter or looser than the manufacturer's installation specifications. Method-specific PQ tests, also known as system suitability tests, may
be used in lieu of PQ requirements for validated procedures.
Because of the diversity of MS instrumental configurations and experimental designs, a standard sample or experiment for
all PQ assessments may not be available. Thus, method-specific PQ tests or system suitability tests often are needed. The PQ
experimental design should be sufficiently robust to ensure proper instrument performance for the intended application, in-
cluding the specifications associated with the measurement. At minimum, PQ experiments should include the following.
• For qualitative applications, the PQ experiment includes a check of the mass accuracy of the instrumentation. A mass ac-
curacy or agreement of ±0.50 mass units for singly charged ions from a known standard should be sufficient for most
applications.
• For quantitative applications, the PQ experiment includes checks of mass accuracy and precision. A mass accuracy or
agreement of ±0.50 mass units for singly charged ions from a known standard should be sufficient for most applications.
The success criteria for precision are established via consideration of the instrument and method capability and provide sufficient controls relative to the specification for the measurement in question.
Specific procedures, acceptance criteria, and time intervals for characterizing MS spectrometer performance depend on the
instrument and its intended applications. Many MS applications use previously validated experiments that relate MS spectra to
a chemical property of interest. Analysts typically demonstrate stable instrument performance over extended periods of time.
This practice provides some assurance that reliable measurements can be taken from sample spectra using previously validated
MS experiments.
Validation is required only when an MS procedure is an alternative to the official procedure for testing an official article.
The objective of validating an MS procedure is to demonstrate that the measurement is suitable for its intended purpose,
including quantitative determination of the main component in a drug substance or a drug product (Category I assays), quan-
titative determination of impurities (Category II), and identification tests (Category IV). [NOTE—For additional information on
the different category definitions, see Validation of Compendial Procedures ⟨1225⟩.] Depending on the category of the test, analytical procedure validation requires the testing of linearity, range, accuracy, specificity, precision, quantitation limit, and robustness. These analytical performance characteristics apply to externally standardized methods and to the method of standard additions.
Chapter ⟨1225⟩ provides definitions and general guidance about analytical procedure validation without indicating specific
validation criteria for each characteristic. The intention of the following sections is to provide the user with specific validation
criteria that represent the minimum expectations for this technology. For each particular application, tighter criteria may be
needed in order to demonstrate suitability for the intended use.
The required validation performance characteristics of an MS analytical procedure, assuming the typical Category I USP
specifications of 98.0%–102.0% for drug substances and 95.0%–105.0% for drug products, are listed in Table 1. The actual
validation performance characteristics would be dependent upon the specifications in place and should provide sufficient evi-
dence that the measurement capability is sufficient for those specifications. A procedure validation protocol must specify the
required validation experiments and validation criteria. These criteria are determined according to the intended purpose of the
analytical procedure.
The objective of analytical procedure validation is to demonstrate that the analytical procedure is suitable for its intended
purpose by conducting experiments and obtaining results that meet predefined acceptance criteria. MS analytical procedures
can include quantitative tests for major component and impurities content, limit tests for the presence of impurities, quantifi-
cation of a component in a product or formulation, or identification tests.
VALIDATION PARAMETERS
Performance characteristics that demonstrate the suitability of an analytical procedure are similar to those required for any
analytical procedure. For additional information on the applicable general principles, see ⟨1225⟩. Specific acceptance criteria
for each validation parameter must be consistent with the intended use of the analytical procedure.
The performance characteristics that are required as part of a validation for each of the analytical procedure categories are
given in Table 1.
SPECIFICITY
The purpose of a specificity test is to demonstrate that measurements of the intended analyte signals are free of interference
from components and impurities in the test material. Specificity tests can be conducted to compare spectra of components
and impurities that are known from synthetic processes, formulations, and test preparations. Specificity is also to be demon-
strated for any materials added as part of the procedure (e.g., specificity versus isotope-labeled internal standards).
For an identification MS analytical procedure (Category IV), validation experiments may include multidimensional MS experiments to validate correct assignment of an ion's structure or origin.
LINEARITY
A linear relationship between the analyte concentration and the instrument response must be demonstrated. This is done by measuring analyte responses from NLT five standard solutions at concentrations that encompass the anticipated concentration range of the analyte(s) in the test solution. For Category I, standard solutions can be prepared from reference materials in appropriate solvents. For Category II (MS analytical procedures that are used to quantitate impurities), linearity samples are prepared by spiking suitable test samples that contain low amounts of analyte or by spiking matrix samples at concentrations across the expected range. The standard curve is then constructed using appropriate statistical methods such as a least-squares regression. The correlation coefficient (R), y-intercept, slope of the regression line, and residual root mean square are then determined; the values obtained should be appropriate for the procedure being validated.
RANGE
The range is the interval between the low and high concentrations of analyte covered by the quantitative MS analytical procedure. It typically is based on test article specifications in the USP monograph and is the interval within which the analytical procedure can demonstrate an acceptable degree of linearity, accuracy, and precision, as obtained from the evaluation of that analytical procedure.
Recommended ranges for various MS analytical procedures are as follows.
• For Category I—assay of a drug substance (or a finished product): 80%–120% of the test concentration;
• For Category I—content uniformity: a minimum of 70%–130% of the test concentration;
• For Category II—determination of an impurity: 50%–120% of the acceptance criteria.
ACCURACY
The accuracy of a quantitative MS analytical procedure is determined across the required analytical range. Typically, three
levels of concentrations are evaluated using triplicate preparations at each level.
Preparation of accuracy samples: For drug substance assays (Category I), accuracy is determined by analyzing a reference
standard of known purity. For drug product assays (Category I), a composite sample of reference standard and other compo-
nents in a pharmaceutical finished product should be used for analytical procedure validation. The assay results are compared
to the theoretical value of the reference standard to estimate errors or percent recovery. For the quantitation of impurities
(Category II), the accuracy of the analytical procedure can be determined by conducting studies with drug substances or prod-
ucts spiked with known concentrations of the analyte under test.
Assay results from the analytical procedure being validated may be compared to those of an established alternative analytical
procedure.
PRECISION
Repeatability: The analytical procedure is assessed by measuring the concentrations of three replicates of separate standard
solutions at three different concentrations that encompass the analytical range. Alternatively, the concentrations of six separate
standard solutions at 100% of the test concentration can be measured. The relative standard deviation from the replicate
measurements is then evaluated to determine if the solutions meet the acceptance criteria.
Intermediate precision: The effect of random events on the analytical precision of the analytical procedure is to be establish-
ed. Typical variables include performing the analysis on different days, using different instruments that are suitable as specified
in the analytical procedure, or having the analytical procedure performed by two or more analysts.
QUANTITATION LIMIT
The quantitation limit is validated by measuring six replicates of test samples spiked with analyte at 50% of specification.
From these replicates, analysts are then able to determine accuracy and precision. Examples of specifications for Category II
quantitative determinations are that the measured concentration is within 70%–130% of the spike concentration and the rela-
tive standard deviation is NMT 15%.
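The example specifications above can be checked with a few lines of arithmetic. The sketch below is illustrative (function name and data are assumptions), covering both the 70%–130% per-replicate recovery and the NMT 15% RSD:

```python
from statistics import mean, stdev

def loq_spike_passes(measured, spike_conc):
    """Illustrative Category II check: replicates spiked at 50% of the
    specification must each recover 70%-130% of the spike concentration,
    with a relative standard deviation of NMT 15%."""
    recoveries = [100.0 * m / spike_conc for m in measured]
    rsd = 100.0 * stdev(measured) / mean(measured)
    return all(70.0 <= r <= 130.0 for r in recoveries) and rsd <= 15.0

# Six spiked replicates (concentrations are illustrative)
print(loq_spike_passes([0.95, 1.05, 1.00, 0.98, 1.02, 1.00], 1.0))  # -> True
```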
ROBUSTNESS
The reliability of an analytical measurement is demonstrated with deliberate changes to critical experimental parameters.
These can include measuring the stability of the analyte under specified storage, chromatographic, or ionization conditions.
U.S. Current Good Manufacturing Practices regulations [21 CFR 211.194(a)(2)] indicate that users of analytical procedures
described in USP–NF do not need to validate procedures that are provided in a monograph. Instead, they must simply verify
the suitability of the procedures under actual conditions of use.
The objective of an MS procedure verification is to demonstrate that the procedure as prescribed in a specific monograph
can be executed by the user with suitable accuracy, specificity, and precision using the instruments, analysts, and sample matrices available. According to the general information chapter Verification of Compendial Procedures ⟨1226⟩, if the verification of
the compendial procedure by following the monograph is not successful, the procedure may not be suitable for use with the
article under test. It may be necessary to develop and validate an alternative procedure as allowed in General Notices and Re-
quirements 6.30.
Verification of a compendial MS procedure includes at minimum the execution of the validation parameters for specificity,
accuracy, precision, and limit of quantitation, when appropriate, as indicated in the Validation and Verification of Mass Spec-
trometry Analytical Procedures section in this chapter.
For Pharmacopeial purposes, the melting range, melting temperature, or melting point is defined as the interval from the temperature at which the first detectable liquid phase appears to the temperature at which no solid phase is apparent, except as defined otherwise for Classes II and III below. A melting transition may be instantaneous for a
highly pure material, but usually a range is observed from the beginning to the end of the process. Factors influencing this
transition include the sample size, the particle size, the efficiency of heat diffusion, and the heating rate, among other varia-
bles, that are controlled by procedure instructions. In order to achieve consistency and repeatability during melting point de-
terminations, the following conditions should be applied. Use dried material that has been gently pulverized and introduced
into a capillary tube to a nominal height of 3 mm, and perform the melting determination at a heating rate of 1°/min. In some articles, the melting process is accompanied by simultaneous decomposition, visually evidenced by darkening of the material, charring, bubbling, or other incidents. These side reactions frequently obscure the end of the melting process, which may then be impossible to determine accurately. In those circumstances, only the beginning of melting can be accurately established, and it is to be reported as the melting temperature. The accuracy of the apparatus
to be used as described below should be checked at suitable intervals by the use of one or more of the available USP Melting
Point Reference Standards, preferably those that melt nearest the melting temperatures of the compounds being tested (see
USP Reference Standards ⟨11⟩). The USP Melting Point Reference Standards are intended to check the accuracy of the device and are not suitable for calibrating it.
Eight procedures for the determination of melting range or temperature are given herein, varying in accordance with the
nature of the substance. When no class is designated in the monograph, use the procedure for Class Ia for crystalline or amor-
phous substances and the procedure for Class II for waxy substances.
The procedure known as the mixed-melting point determination, whereby the melting range or temperature of a solid un-
der test is compared with that of an intimate mixture of equal parts of the solid and an authentic specimen of it, e.g., the
corresponding USP Reference Standard, if available, may be used as a confirmatory identification test. Agreement of the obser-
vations on the original and the mixture constitutes reliable evidence of chemical identity.
APPARATUS
Apparatus with cameras or other computerized equipment with advantages in terms of accuracy, sensitivity, or precision
may be used provided that the apparatus is properly qualified.
Apparatus I: An example of a suitable melting range Apparatus I consists of a glass container for a bath of transparent
fluid, a suitable stirring device, an accurate thermometer, and a controlled source of heat. The bath fluid is selected with a
view to the temperature required, but light paraffin is used generally and certain liquid silicones are well adapted to the higher
temperature ranges. The fluid is deep enough to permit immersion of the thermometer to its specified immersion depth so
that the bulb is still about 2 cm above the bottom of the bath. The heat may be supplied by an open flame or electrically. The
capillary tube is about 10 cm long and 0.8–1.2 mm in internal diameter with walls 0.2–0.3 mm in thickness.
Apparatus II: An instrument may be used in the procedures for Classes I, Ia, and Ib. An example of a suitable melting
range Apparatus II consists of a block of metal that may be heated at a controlled rate, its temperature being monitored by a
sensor. The block accommodates the capillary tube containing the test substance and permits monitoring of the melting proc-
ess, typically by means of a beam of light and a detector. The detector signal may be processed by a microcomputer to deter-
mine and display the melting point or range, or the detector signal may be plotted to allow visual estimation of the melting
point or range.
PROCEDURES
Procedure for Class I, Apparatus I: Reduce the substance under test to a very fine powder, and, unless otherwise direc-
ted, render it anhydrous when it contains water of hydration by drying it at the temperature specified in the monograph, or,
when the substance contains no water of hydration, dry it over a suitable desiccant for NLT 16 h (or at the conditions stated in
Loss on Drying ⟨731⟩, if appropriate).
Charge a capillary glass tube, one end of which is sealed, with a sufficient amount of the dry powder to form a column in
the bottom of the tube 3 mm high when packed down as closely as possible by moderate tapping on a solid surface. Depending
on the instrument design, the manufacturer may specify alternative sample sizes.
Heat the bath until the temperature is about 10° below the expected melting point. Remove the thermometer, and quickly
attach the capillary tube to the thermometer by wetting both with a drop of the liquid of the bath or otherwise, and adjust its
height so that the material in the capillary is level with the thermometer bulb. Replace the thermometer, and continue the
heating, with constant stirring, sufficiently to cause the temperature to rise at a rate of about 3°/min. When the temperature is
about 3° below the lower limit of the expected melting range, reduce the heating so that the temperature rises at a rate of
about 1°/min. Continue heating until melting is complete.
The temperature at which the column of the substance under test is observed to collapse definitely against the side of the
tube at any point indicates the beginning of melting, and the temperature at which the test substance becomes liquid
throughout corresponds to the end of melting or the melting point. The two temperatures fall within the limits of the melting
range. If melting occurs with decomposition, the melting temperature corresponding to the beginning of the melting is within
the range specified.
Procedure for Class Ia, Apparatus I: Prepare the test substance and charge the capillary as directed in Procedure for Class
I, Apparatus I. Heat the bath until the temperature is about 10° below the expected melting point and is rising at a rate of
about 1°/min. Insert the capillary as directed in Procedure for Class I, Apparatus I when the temperature is about 5° below the
lower limit of the expected melting range, and continue heating until melting is complete. Record the melting range as direc-
ted in Procedure for Class I, Apparatus I.
Procedure for Class Ib, Apparatus I: Place the test substance in a closed container and cool to 10°, or lower, for at least
2 h. Without previous powdering, charge the cooled material into the capillary tube as directed in Procedure for Class I, Appara-
tus I, then immediately place the charged tube in a vacuum desiccator and dry at a pressure not exceeding 20 mm of mercury
for 3 h. Immediately upon removal from the desiccator, fire-seal the open end of the tube, and as soon as practicable proceed
with the determination of the melting range as follows. Heat the bath until the temperature is about 10° below the expected
melting range, then introduce the charged tube, and heat at a rate of rise of about 1°/min until melting is complete. Record
the melting range as directed in Procedure for Class I, Apparatus I.
If the particle size of the material is too large for the capillary, precool the test substance as directed above, then with as little
pressure as possible gently crush the particles to fit the capillary, and immediately charge the tube.
Procedure for Class I, Apparatus II: Prepare the substance under test and charge the capillary tube as directed in Proce-
dure for Class I, Apparatus I. Operate the apparatus according to the manufacturer's instructions. Heat the block until the tem-
perature is about 10° below the expected melting point. Insert the capillary tube into the heating block, and continue heating
at a rate of temperature increase of about 1°/min until melting is complete.
The temperature at which the detector signal first leaves its initial value indicates the beginning of melting, and the temper-
ature at which the detector signal reaches its final value corresponds to the end of melting, or the melting point. The two
temperatures fall within the limits of the melting range. If melting occurs with decomposition, the melting temperature corre-
sponding to the beginning of the melting is within the range specified.
Procedure for Class Ia, Apparatus II: Prepare the test substance and charge the capillary as directed in Procedure for Class
I, Apparatus I. Operate the apparatus according to the manufacturer's instructions. Heat the block until the temperature is
about 10° below the expected melting point and is rising at a rate of about 1°/min. Insert the capillary as directed in Procedure
for Class I, Apparatus I when the temperature is about 5° below the lower limit of the expected melting range, and continue
heating until melting is complete. Record the melting range as directed in Procedure for Class I, Apparatus I. If melting occurs
with decomposition, the melting temperature corresponding to the beginning of the melting is within the range specified.
Procedure for Class Ib, Apparatus II: Place the test substance in a closed container and cool to 10°, or lower, for at least
2 h. Without previous powdering, charge the cooled material into the capillary tube as directed in Procedure for Class I, Appara-
tus I, then immediately place the charged tube in a vacuum desiccator, and dry at a pressure not exceeding 20 mm of mercury
for 3 h. Immediately upon removal from the desiccator, fire-seal the open end of the tube, and as soon as practicable proceed
with the determination of the melting range as follows. Operate the apparatus according to the manufacturer's instructions.
Heat the block until the temperature is about 10° below the expected melting range, then introduce the charged tube, and
heat at a rate of rise of about 1°/min until melting is complete. Record the melting range as directed in Procedure for Class I,
Apparatus I.
If the particle size of the material is too large for the capillary, precool the test substance as directed above, then with as little
pressure as possible gently crush the particles to fit the capillary, and immediately charge the tube.
Procedure for Class II: Carefully melt the material to be tested at as low a temperature as possible, and draw it into a
capillary tube, which is left open at both ends, to a depth of about 10 mm. Cool the charged tube at 10°, or lower, for 24 h,
or in contact with ice for at least 2 h. Then attach the tube to the thermometer by suitable means, adjust it in a water bath so
that the upper edge of the material is 10 mm below the water level, and heat as directed in Procedure for Class I, Apparatus I
except that, within 5° of the expected melting temperature, the rate of temperature rise is regulated to about 1.0°/min. The
temperature at which the material is observed to rise in the capillary tube is the melting temperature.
Procedure for Class III: Melt a quantity of the test substance slowly, while stirring, until it reaches a temperature of 90°–
92°. Remove the source of the heat, and allow the molten substance to cool to a temperature of 8°–10° above the expected
melting point. Chill the bulb of a suitable thermometer to 5°, wipe it dry, and while it is still cold dip it into the molten sub-
stance so that approximately the lower half of the bulb is submerged. Withdraw it immediately, and hold it vertically away
from the heat until the wax surface dulls, then dip it for 5 min into a water bath having a temperature NMT 16°.
Fix the thermometer securely in a test tube so that the lower point is 15 mm above the bottom of the test tube. Suspend
the test tube in a water bath adjusted to about 16°, and raise the temperature of the bath at the rate of about 2°/min to 30°,
then change to a rate of about 1°/min, and note the temperature at which the first drop of melted substance leaves the ther-
mometer. Repeat the determination twice on a freshly melted portion of the test substance. If the variation of three determina-
tions is less than 1°, take the average of the three as the melting point. If the variation of three determinations is 1° or greater
than 1°, make two additional determinations and take the average of the five.
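The replicate-averaging rule above (average of three determinations when their variation is less than 1°, otherwise two additional determinations and the average of five) can be sketched in Python. This sketch is illustrative only and is not part of the compendial text; the function name is an assumption.

```python
def class_iii_melting_point(determinations):
    """Average replicate Class III drop-point readings (in degrees).

    If the variation of the first three determinations is less than 1°,
    the average of the three is reported; otherwise two additional
    determinations are required and the average of all five is reported.
    """
    if len(determinations) < 3:
        raise ValueError("at least three determinations are required")
    first_three = determinations[:3]
    if max(first_three) - min(first_three) < 1.0:
        return sum(first_three) / 3
    if len(determinations) != 5:
        raise ValueError("variation is 1° or greater: five determinations are required")
    return sum(determinations) / 5
```

For example, readings of 54.2°, 54.5°, and 54.3° vary by only 0.3° and are simply averaged; a first-stage spread of 1° or more forces the five-determination average.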
The following tests and specifications apply to articles such as creams, gels, jellies, lotions, ointments, pastes, powders, and
aerosols, including pressurized and nonpressurized topical sprays that are packaged in containers in which the labeled content
is not more than 150 g or 150 mL.
PROCEDURE FOR DOSAGE FORMS OTHER THAN AEROSOLS—For containers labeled by weight, select a sample of 10 filled containers,
and remove any labeling that might be altered in weight during the removal of the container contents. Thoroughly cleanse
and dry the outside of the containers by a suitable means, and weigh individually. Quantitatively remove the contents from
each container, cutting the latter open and washing with a suitable solvent, if necessary, taking care to retain the closure and
other parts of each container. Dry, and again weigh each empty container together with its corresponding parts. The differ-
ence between the two weights is the net weight of the contents of the container. For containers labeled by volume, pour the
contents of 10 containers into 10 suitable graduated cylinders, and allow to drain completely. Record the volume of the con-
tents of each of the 10 containers. The average net content of the 10 containers is not less than the labeled amount, and the
net content of any single container is not less than 90% of the labeled amount where the labeled amount is 60 g or 60 mL or
less, or not less than 95% of the labeled amount where the labeled amount is more than 60 g or 60 mL but not more than
150 g or 150 mL. If this requirement is not met, determine the content of 20 additional containers. The average content of the
30 containers is not less than the labeled amount, and the net content of not more than 1 of the 30 containers is less than
90% of the labeled amount where the labeled amount is 60 g or 60 mL or less, or less than 95% of the labeled amount where
the labeled amount is more than 60 g or 60 mL but not more than 150 g or 150 mL.
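The two-stage acceptance evaluation described above reduces to a simple decision rule. The following Python sketch is illustrative only (the function and variable names are assumptions, not compendial terms) and applies to labeled amounts of NMT 150 g or 150 mL.

```python
def minimum_fill_passes(net_contents, labeled_amount):
    """Evaluate the minimum-fill criteria for non-aerosol dosage forms.

    net_contents: measured net contents of 10 (first stage) or 30
    (second stage) containers; labeled_amount: labeled content in the
    same units (g or mL), assumed to be NMT 150.
    """
    # Single-container limit: 90% of label when the labeled amount is
    # 60 g or 60 mL or less, 95% when it is more than 60 but NMT 150.
    fraction = 0.90 if labeled_amount <= 60 else 0.95
    limit = fraction * labeled_amount
    average = sum(net_contents) / len(net_contents)
    n_below = sum(1 for x in net_contents if x < limit)
    if len(net_contents) == 10:
        # First stage: average NLT label and no container below the limit.
        return average >= labeled_amount and n_below == 0
    if len(net_contents) == 30:
        # Second stage: average NLT label and NMT 1 of 30 below the limit.
        return average >= labeled_amount and n_below <= 1
    raise ValueError("evaluation is defined for 10 or 30 containers")
```

A first-stage failure does not by itself reject the lot; the 20 additional containers are measured and the 30-container rule is applied.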
PROCEDURE FOR AEROSOLS—Select a sample of 10 filled containers, and remove any labeling that might be altered in weight
during the removal of the container contents. Thoroughly cleanse and dry the outsides of the containers by suitable means,
and weigh individually. Remove the contents from each container by employing any safe technique (e.g., chill to reduce the
internal pressure, remove the valve, and pour). Remove any residual contents with suitable solvents, then rinse with a few
portions of methanol. Retain as a unit the container, the valve, and all associated parts, and heat them at 100° for 5 min.
Cool, and again weigh each of the containers together with its corresponding parts. The difference between the original
weight and the weight of the empty aerosol container is the net fill weight. Determine the net fill weight for each container
tested. The requirements are met if the net weight of the contents of each of the 10 containers is not less than the labeled
amount.
INTRODUCTION
Nuclear magnetic resonance (NMR) spectroscopy is an analytical method based on the magnetic properties of certain atom-
ic nuclei. As is the case with other types of spectroscopy, absorption or emission of electromagnetic energy at characteristic
frequencies provides structural information. NMR differs from other types of spectroscopy because the discrete energy levels
between which the transitions take place are present only when the nuclei are placed in a magnetic field.
Although NMR is widely recognized as one of the most powerful structure-elucidation tools available, with proper experimental
design it can also be used for accurate qualitative and quantitative measurements. See general information chapter Applications
of Nuclear Magnetic Resonance Spectroscopy ⟨1761⟩. [NOTE—Chapters numbered above 1000 are for informational purposes only.]
Qualification of an NMR instrument can be divided into three elements: Installation Qualification (IQ), Operational Qualifica-
tion (OQ), and Performance Qualification (PQ). For further discussion, see general information chapter Analytical Instrument
Qualification ⟨1058⟩.
Installation Qualification
The IQ requirements provide evidence that the hardware and software are installed to accommodate safe and effective use
of the instrument at the desired location.
Operational Qualification
In OQ, an instrument's performance is characterized using standards to verify that the system operates within target specifi-
cations. The purpose of OQ is to demonstrate that instrument performance is suitable for a given application. Because so
many different approaches are available for measuring NMR spectra, OQ using standards with known spectral properties is
recommended. Generally, sealed NMR tubes are available as reference standards for measuring signal-to-noise (S/N) and line-
shape.
Performance Qualification
PQ helps to determine that the instrument is capable of meeting the user's requirements for all critical-to-quality (CTQ)
measures. PQ documentation should describe the following:
1. The definition of the specific performance criteria and detailed test procedures including test samples and instrument pa-
rameters.
2. The elements that will be measured to evaluate the criteria and the predefined specifications.
3. The test interval, which may be time-of-use.
4. The use of bracketing samples or groups of samples.
5. The defined corrective actions that will be implemented if the spectrometer does not pass the specifications.
Periodic PQ should include a subset of the OQ tests to ensure that the aspects of the instrument that are relied upon are
performing at a level that produces data suitable for the intended use. Depending on typical use, the specifications
for PQ may be higher or lower than the manufacturer's installation specifications. Typical CTQs include S/N ratio and resolu-
tion tests for all nuclei of interest. Method-specific PQ tests, also known as system suitability tests, may be used in lieu of PQ
requirements for validated procedures.
The PQ samples and tests in the following subsections are typical examples only. Other tests and samples may be used to
establish specifications for specific purposes. Instrument vendors often provide samples and test parameters that can be used
as part of the PQ package.
Sample: 1% chloroform in acetone-d6 (≥500 MHz), 3% chloroform in acetone-d6, degassed and sealed
Spectral width: <1 kHz
Data acquisition time: NLT 10 s
Tip angle: 90°
Relaxation delay: 60 s
Spinning rate: Static or 20 Hz
Pulse sequence: Delay-pulse-acquire with no decoupling
Processing: No line broadening, zero-filling to 128 k
Figure 1. 1H NMR spectrum of chloroform in acetone-d6 obtained at 400 MHz. The linewidth measured at 0.55% and 0.11%
of the 13C satellites was 2.7 and 5.5 Hz, respectively.
Shim the magnet with special attention to the off-axis shims, acquire a single acquisition, phase to pure absorption, and
measure the linewidth at 50%, 0.55%, and 0.11% maximum intensity. The linewidth should pass specifications at these posi-
tions, and, in addition, the lineshape should be Lorentzian. On modern NMR spectrometers, the lineshape is frequently ob-
tained on a nonspinning sample because the off-axis shims can be set so well that there is essentially no difference between
spectra obtained spinning and nonspinning. In addition, two-dimensional spectra should be obtained on a static sample.
Sample: 0.1% ethylbenzene in chloroform-d, 1% ethylbenzene in chloroform-d (<200 MHz), degassed and sealed
Spectral width: 10 ppm
Data acquisition time: 400 ms
Tip angle: 90°
Relaxation delay: 60 s
Spinning rate: 0 or approximately 20 Hz
Pulse sequence: Delay-pulse-acquire with no decoupling
Processing: Exponential with 1-Hz line broadening
Referencing: Tetramethylsilane (TMS) = 0.0 ppm or the center of the quartet = 2.65 ppm
Figure 2. 1H NMR spectrum of 0.1% ethylbenzene obtained at 400 MHz with an S/N ratio of 550:1
The concentration of ethylbenzene should be chosen to achieve S/N ratio specifications in the range of 20–1000. Concen-
trations that typically result in measurements outside that range are of limited utility in assessing the performance of the instru-
ment. Nevertheless, established standard solutions are conventionally used. The magnet should be shimmed as well as possi-
ble. Ideally, this test should be run immediately after the lineshape test because most of the shims will be nearly maximized.
Acquire a single acquisition, phase the spectrum in pure absorption mode, and measure the S/N of the ethylbenzene quartet.
This experiment can be run with or without sample spinning. With a spinning sample, the S/N value that is measured should
be only about 10% higher than that obtained with a nonspinning sample if the off-axis shims are well adjusted. A higher ratio
would indicate that the determination would benefit from further shimming with the off-axis shims.
Most modern spectrometers have software that performs the S/N measurement after the operator has identified the signal
and noise regions. Manual calculations can also be made. Measure the amplitude (A) from the center of the baseline to the
peak of the highest of the central two lines in the quartet. Measure the peak-to-peak noise height (H) from the lowest noise
peak to the highest noise peak in the 3–5 ppm region. The noise may be vertically multiplied by a factor for accurate measure-
ment of high S/N spectra. Calculate the S/N as follows:
S/N = k × 2.5 × A/H [1]
where k is the vertical expansion factor applied to the noise region. The factor of 2.5 converts peak-to-peak noise to
root-mean-square (rms) noise, which is the standard convention for reporting S/N in NMR spectroscopy. Computerized S/N
calculations can be used provided the specifications are set and tested by the same procedure. At the discretion of the
spectroscopist, an S/N value lower than that specified by the manufacturer may be used if it is judged to be sufficient for the
current application.
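Equation 1 can be applied directly to the manual measurements. The following Python sketch is illustrative only (the function name is an assumption); k defaults to 1 when no vertical expansion of the noise trace is used.

```python
def signal_to_noise(amplitude, noise_peak_to_peak, expansion=1.0):
    """Equation 1: S/N = k x 2.5 x A/H.

    amplitude (A): baseline-to-peak height of the measured signal;
    noise_peak_to_peak (H): lowest-to-highest noise excursion as read
    from the (possibly expanded) noise trace; expansion (k): vertical
    factor applied to the noise region, 1.0 if none was applied.
    """
    return expansion * 2.5 * amplitude / noise_peak_to_peak
```

For example, an amplitude of 110 units against 0.5 units of peak-to-peak noise gives an S/N of 550:1.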
Figure 3. 13C NMR spectrum of the ASTM standard 40% p-dioxane in benzene-d6 (v/v) obtained at 100.6 MHz, with an S/N
ratio of 140:1
With a well-shimmed magnet, acquire a single acquisition following a minimum delay of 300 s, phase the spectrum in pure
absorption mode, and measure the height of the benzene triplet at approximately 128.4 ppm from the center of the baseline.
The peak-to-peak noise can be measured as above in the 80–120 ppm region, with appropriate vertical expansion. S/N calculations can
be made as in Equation 1 or by computer calculation.
The benzene-d6 triplet has no nuclear Overhauser enhancement (NOE). Consequently, this test verifies the performance of
only the 13C channel.
Figure 4. 13C NMR spectrum of 10% ethylbenzene obtained using a cryogenically cooled dual 1H/13C probe at 150.9 MHz,
with an S/N ratio of 640:1
The shimming should be sufficient to pass the resolution and lineshape tests described above. The measurement of
S/N is done from the peak height of the larger resonance of the two near 128 ppm. The noise is measured as above in the
region of 80–120 ppm, with appropriate vertical expansion. S/N is calculated by the computer or as in Equation 1.
Specific procedures, acceptance criteria, and time intervals for characterizing NMR spectrometer performance depend on
the instrument and its intended application. Many NMR applications use previously validated experiments that relate NMR
spectra to a physical or chemical property of interest. Stable instrument performance over extended periods of time should be
demonstrated. This practice provides some assurance that reliable measurements can be taken from sample spectra using pre-
viously validated NMR experiments.
NMR spectroscopy has been used for a wide range of applications such as structure elucidation; thermodynamic, kinetic,
and mechanistic studies; and quantitative analysis. Some of these applications are beyond the scope of compendial methods.
All characteristics of the signal—chemical shift, multiplicity, linewidth, coupling constants, relative intensity, and relaxation
time—contribute analytical information.
Qualitative Applications
Comparison of a spectrum from the literature or from an authentic standard with that of a test sample may be used to con-
firm the identity of a compound and to detect the presence of impurities that generate extraneous signals. The NMR spectra
of simple structures can be adequately described by the values of the chemical shifts and coupling constants, and by the
relative number of nuclei represented by the integral of each signal. (The software of modern instruments includes programs
that generate simulated spectra from these data.) Experimental details, such as the solvent used and the chemical shift
reference, must also be provided.
For unknown samples, NMR analysis, usually coupled with other analytical techniques, is a powerful tool for structure eluci-
dation. Chemical shifts provide information on the chemical environment of the nuclei. Extensive literature is available with
correlation charts and rules for predicting chemical shifts. The multiplicity of the signals provides important structural informa-
tion. The magnitude of the scalar coupling constant, J, between residual protons on substituted aromatic, olefinic, or cycloalkyl
structures is used to identify the relative position of the substituents. Routine 13C spectra are obtained under proton decou-
pling conditions that remove all heteronuclear 13C-1H couplings. As a result of this decoupling, the carbon signals appear as
singlets, unless other nuclei that are not decoupled are present (e.g., 19F, 31P).
Chemical exchange is an example of the effect of intermolecular and intramolecular rate processes on NMR spectra. If a pro-
ton can experience different environments by virtue of such a process (tautomerism, rotation about a bond, exchange equili-
bria, ring inversion, etc.), the appearance of the spectrum will be a function of the rate of the process. Slow processes (on an
NMR time scale) result in more than one signal from the interconverting species; fast processes average these signals to one
line; and intermediate processes produce broad signals, which sometimes cannot be easily found in the spectra.
The software of modern FT-NMR spectrometers allows for sequences of pulses much more complex than the repetitive accu-
mulation of transients described above. Such experiments include homonuclear or heteronuclear multidimensional analysis,
which determines the correlation of couplings and may simplify the interpretation of otherwise complex spectra.
See chapter ⟨1761⟩ for detailed descriptions of common two-dimensional experiments.
Quantitative Applications
The ¹H relative molar concentrations of the analyte and the reference standard are related to the normalized signal integrals by

[¹H]_A/[¹H]_RS = (I_A/N_A)/(I_RS/N_RS) [2]

where I = integral; N = normalization factor; and [¹H] = ¹H relative molar concentration, and the
subscripts A and RS represent the analyte and reference standard, respectively.
The mass of the analyte is thus calculated according to the following equation:

M_A = (I_A/N_A)/(I_RS/N_RS) × (MM_A/MM_RS) × P × M_RS [3]

where M_A = mass of the analyte, MM = molar mass, and P = purity of the reference standard.
(c) A common application of absolute quantitation is the determination of the purity of a sample. The weight
% purity is given by

weight % purity = 100 × M_A/M_S [4]

where M_S is the total mass of the sample with contributions from the analyte plus any contaminants that
may be present in the sample such as water and salts. Combining Equations 3 and 4 gives Equation 5 for the
weight % purity, in which V = volume.
Application to weight % purity: Weight % purity values may be similarly calculated as in Equation 5.
(C) The internal and external reference standard methods each have their own set of advantages and disadvantages.
(1) Chemical interactions: Preparation of the reference standard and the test material in separate solutions avoids
chemical interactions between the test sample and reference standard that may otherwise occur with an internal
reference standard.
(2) Spectral overlap: The use of an external reference standard also avoids potential overlap between peaks of the
reference standard and test sample that can occur with an internal standard.
(3) Calibration: Once an NMR response has been calibrated with external reference standard solutions, this calibra-
tion may be applied to any other sample in the same solvent given that i) the instrument has been demonstra-
ted to be stable over the time between when the calibration is done and when data is acquired on the test ma-
terial, ii) system suitability has been established on the day that the measurement on the test material is made,
and iii) absolute integrals are compared. In the case of internal reference standards, the measurement on the
reference standard and test sample is made under absolutely identical conditions.
(4) Accuracy and precision: Multiple external reference standard solutions may be prepared to average the errors
in the mass and volume measurements during sample preparation, thereby improving the accuracy of the cali-
brated NMR response. In the case of internal reference standards, single measurements of the reference stand-
ard and analyte are made for each replicate test solution. The combined errors from the mass measurements of
the reference standard and test sample as well as instrumental electronic variations determine the standard devi-
ation of the average MA or weight % purity values.
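For the internal reference standard case, the mass and weight % purity calculations follow the conventional qNMR relationship M_A = (I_A/N_A)/(I_RS/N_RS) × (MM_A/MM_RS) × P × M_RS, using the symbols defined under Quantitative Applications. The Python sketch below is illustrative only; the function names are assumptions and not part of the compendial text.

```python
def analyte_mass(i_a, i_rs, n_a, n_rs, mm_a, mm_rs, p_rs, m_rs):
    """Mass of analyte from normalized integrals (internal standard qNMR).

    i/n: signal integral divided by its normalization factor (the
    integral per contributing nucleus); mm: molar mass; p_rs: purity of
    the reference standard expressed as a fraction; m_rs: mass of
    reference standard weighed into the test solution.
    """
    return (i_a / n_a) / (i_rs / n_rs) * (mm_a / mm_rs) * p_rs * m_rs


def weight_percent_purity(m_a, m_s):
    """Weight % purity = 100 x (analyte mass / total sample mass)."""
    return 100.0 * m_a / m_s
```

Because both signals are measured in the same solution, volume terms cancel in the internal-standard case; they must be carried explicitly when an external reference standard calibration is used.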
If an NMR procedure is provided in a monograph, verification of suitability (see ⟨1226⟩) under actual conditions of use is
required. Validation is required only when an NMR method is an alternative to the official procedure for testing an official arti-
cle. The objective of validation of a procedure relying on the NMR method is to demonstrate that the measurement is suitable
for its intended purpose, including the following: quantitative determination of the main component in a drug substance or a
drug product (Category I assays), quantitative determination of impurities (Category II), and identification tests (Category IV).
[NOTE—For a definition of the different categories, see Validation of Compendial Procedures ⟨1225⟩.] Depending on the category
of the test, analytical procedure validation requires the testing of specificity, linearity, range, accuracy, precision, quantitation
limit, and robustness. These analytical performance characteristics apply to externally standardized methods and to the meth-
od of standard additions.
Chapter ⟨1225⟩ provides definitions and general guidance on analytical procedures validation without indicating specific
validation criteria for each characteristic. The intention of the following sections is to provide the user with specific validation
criteria that represent the minimum expectations for this technology. For each particular application, tighter criteria may be
needed in order to demonstrate suitability for the intended use.
The objective of an analytical procedure validation is to demonstrate that the analytical procedure is suitable for its intended
purpose by conducting experiments and obtaining results that meet predefined acceptance criteria. NMR analytical proce-
dures can include the following: quantitative tests for major component and impurities content, limit tests for the presence of
impurities, quantification of component in a product or formulation, and/or identification tests.
Performance characteristics that demonstrate the suitability of an analytical procedure are similar to those required for any
analytical procedure. A discussion of the applicable general principles is found in chapter ⟨1225⟩. Specific acceptance criteria
for each validation parameter must be consistent with the intended use of the analytical procedure.
The performance characteristics that are required as part of a validation for each of the analytical procedure categories are
given below.
SPECIFICITY
The purpose of a specificity test is to demonstrate that measurements of the intended analyte signals are free of interference
from components and impurities in the test material. Specificity may be applied to all categories and is a requirement for
Category IV. Specificity tests can be conducted by comparing NMR spectra of components and impurities known from
synthetic processes and formulations with spectra of the test preparations. For an identification NMR analytical procedure
(Category IV), validation experiments may include multidimensional NMR experiments to validate correct assignments of
chemical shifts and to confirm the structure of the analyte.
Validation criteria: Specificity is ensured by the use of a reference standard wherever possible and by a demonstrable lack of
interference from other components.
LINEARITY
A linear relationship is exhibited between the analyte concentration and instrument response; this should be demonstrated
by measuring responses of analyte from NLT five standard solutions at concentrations encompassing the anticipated concen-
tration range of analyte(s) of the test solution. For Category I, standard solutions can be prepared from reference materials in
an appropriate NMR solvent. For Category II, NMR analytical procedures that are used to quantitate impurities, linearity sam-
ples can be prepared by spiking suitable test samples that contain low amounts of analyte or by spiking matrix samples at
concentrations of the expected range. The standard curve should then be constructed using appropriate statistical analytical
procedures such as a least squares regression. The correlation coefficient (R), y-intercept, and slope of the regression line
should be determined. Absolute values determined for these factors should be appropriate for the procedure being validated.
Validation criteria: The correlation coefficient (R) must be NLT 0.995 for Category I assays and NLT 0.99 for Category II
quantitative tests.
RANGE
The range is the interval between the low and high concentrations of analyte determined by the quantitative NMR analytical procedure. This
is normally based on test article specifications in the USP monograph. It is the range within which the analytical procedure can
demonstrate an acceptable degree of linearity, accuracy, and precision, and may be obtained from an evaluation of that ana-
lytical procedure. Recommended ranges for various NMR analytical procedures are given below.
Validation criteria: For Category I tests, the validation range for 100.0% centered acceptance criteria is 80.0%–120.0%.
For noncentered acceptance criteria, the validation range is 10.0% below the lower limit to 10.0% above the upper limit. For
content uniformity, it is 70.0%–130.0%. For Category II quantitative tests, the validation range covers 50.0%–120.0% of the
acceptance criteria.
ACCURACY
The accuracy of a quantitative NMR analytical procedure should be determined across the required analytical range. Typical-
ly, three levels of concentrations are evaluated using triplicate preparations at each level.
For drug substance assays (Category I), accuracy can be determined by analyzing a reference standard of known purity. For
drug product (Category I), a composite sample of reference standard and other components in a pharmaceutical finished
product should be used for analytical procedure validation. The assay results are compared to the theoretical value of the refer-
ence standard to estimate errors or percent recovery. For the quantitation of impurities (Category II), the accuracy of the ana-
lytical procedure can be determined by conducting studies with drug substances or products spiked with known concentra-
tions of the analyte under test. It is also acceptable to compare assay results from the analytical procedure being validated to
those of an established, alternative analytical procedure.
Validation criteria: 98.0%–102.0% recovery for drug substances, 95.0%–105.0% recovery for compounded pharmaceut-
ical finished products assay, and 80.0%–120.0% recovery for the quantitative impurity analysis. These criteria should be met
throughout the intended range.
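The percent-recovery comparison described above can be sketched as follows, here for Category II impurity quantitation with triplicate preparations at three spike levels. The spike concentrations and measured results are hypothetical.

```python
# Hypothetical accuracy (percent recovery) evaluation for spiked samples.
def percent_recovery(measured, theoretical):
    """Recovery of a measured result relative to the theoretical spike value, in %."""
    return 100.0 * measured / theoretical

levels = {
    0.05: [0.048, 0.051, 0.052],  # low spike level (hypothetical units)
    0.10: [0.101, 0.098, 0.103],  # mid spike level
    0.15: [0.149, 0.152, 0.147],  # high spike level
}
for theo, results in levels.items():
    recoveries = [percent_recovery(m, theo) for m in results]
    ok = all(80.0 <= rec <= 120.0 for rec in recoveries)  # Category II window
    print(f"spike {theo}: {[f'{rec:.1f}%' for rec in recoveries]} within 80-120%: {ok}")
```

For Category I drug substance assays, the same calculation is compared against the tighter 98.0%–102.0% window.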
PRECISION
Repeatability: The analytical procedure should be assessed by measuring the concentrations of six separate standard solu-
tions at 100% of the test concentration. The relative standard deviation from the replicate measurements should be evaluated
to meet acceptance criteria. Alternatively, repeatability can be assessed by measuring the concentrations of three replicates of three separate sample solutions at different concentrations. The three concentrations should be close enough that the repeatability is constant across the concentration range. If this approach is used, the repeatability results at the three concentrations are pooled for comparison to the acceptance criteria.
Validation criteria: The relative standard deviation is NMT 1.0% for drug substances, NMT 2.0% for compounded phar-
maceutical finished products, and NMT 20.0% for the quantitative impurity analysis.
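The repeatability calculation above reduces to a relative standard deviation (RSD) over the six replicates; a minimal sketch with hypothetical assay results is:

```python
# Repeatability as RSD of six replicate standard solutions at 100% of the
# test concentration. The replicate values are hypothetical.
import statistics

def rsd_percent(values):
    """Relative standard deviation: sample SD divided by mean, in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

replicates = [0.998, 1.003, 1.001, 0.996, 1.004, 0.999]  # hypothetical results
rsd = rsd_percent(replicates)
print(f"RSD = {rsd:.2f}%  (drug substance criterion NMT 1.0%: {rsd <= 1.0})")
```

The same function applies to the intermediate precision and impurity criteria below; only the NMT limit changes.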
Intermediate precision: The effect of random events on the precision of the analytical procedure should be established. Typical variables include performing the analysis on different days, using different instruments that are suitable as specified in the analytical procedure, and/or having the analytical procedure performed by two or more analysts. At a minimum, any combination of at least two of these factors, totaling six experiments, will provide an estimation of intermediate precision.
Validation criteria: The relative standard deviation is NMT 1.0% for drug substances, NMT 3.0% for compounded phar-
maceutical finished products, and NMT 25.0% for quantitative impurity analysis.
LIMIT OF QUANTITATION
The quantitation limit (QL) can be validated by measuring six replicates of test samples spiked with analyte at 50% of the specification.
From these replicates, accuracy and precision can be determined. Examples of specifications for Category II quantitative de-
terminations are that the measured concentration is within 70.0%–130.0% of the spike concentration and the relative stand-
ard deviation is NMT 15%.
ROBUSTNESS
The reliability of an analytical measurement should be demonstrated with deliberate changes to critical experimental parameters. These can include the stability of the analyte under specified storage conditions, slight variations in the inter-pulse delay, the probe temperature, and possible interfering species, to list a few examples. Robustness is required for Category I and Category II quantitative methods.
U.S. Current Good Manufacturing Practices regulations [21 CFR 211.194(a)(2)] indicate that users of analytical procedures
described in USP–NF do not need to validate these procedures if provided in a monograph. Instead, they must simply verify
their suitability under actual conditions of use.
The objective of an NMR procedure verification is to demonstrate that the procedure as prescribed in a specific monograph
can be executed by the user with suitable accuracy, specificity, and precision using the instruments, analysts, and sample matrices available. According to general information chapter Verification of Compendial Procedures ⟨1226⟩, if the verification of the
compendial procedure by following the monograph is not successful, the procedure may not be suitable for use with the arti-
cle under test. It may be necessary to develop and validate an alternative procedure as allowed in General Notices and Require-
ments 6.30.
Verification of a compendial NMR procedure should at minimum include the execution of the validation parameters for spe-
cificity, accuracy, precision, and limit of quantitation, when appropriate, as indicated in this section.
GLOSSARY
Internal standard: An internal standard (IS) is a substance added to a sample solution at a known concentration. One
should select an IS with at least a single NMR resonance that does not overlap with those of the analyte. The ratio of a
specific internal standard peak area and that of an analyte peak area is used to determine the concentration of the ana-
lyte. The number of nuclei corresponding to the integrated peaks in the IS and analyte spectra must be known.
NMR reference: An NMR reference, also known as an NMR shift reference, is a substance added to a sample and from
which the chemical shift for the δ scale is established. Common examples for proton and carbon NMR analyses are tetra-
methylsilane (TMS) for use in organic solvents and the sodium salt of 2,2-dimethyl-2-silapentane-5-sulfonic acid (DSS) or
sodium-3-trimethylsilylpropionate (TMSP) for use in aqueous media. In both cases, the chemical shift of the methyl peaks
is defined as 0.0 ppm.
Reference standard: A reference standard is a substance authenticated by appropriate experimental means to be of a
specific chemical structure. In NMR spectroscopy, a reference standard is typically used for the qualitative analysis of a test
material. Structure can be confirmed if one directly compares the chemical shifts and multiplicities of the peaks in the
NMR spectrum of the test material against the spectrum of the reference standard acquired under comparable experi-
mental conditions.
Optical microscopy for particle characterization can generally be applied to particles 1 µm and greater. The lower limit is
imposed by the resolving power of the microscope. The upper limit is less definite and is determined by the increased difficulty
associated with the characterization of larger particles. Various alternative techniques are available for particle characterization
outside the applicable range of optical microscopy. Optical microscopy is particularly useful for characterizing particles that are
not spherical. This method may also serve as a base for the calibration of faster and more routine methods that may be devel-
oped.
Apparatus—Use a microscope that is stable and protected from vibration. The microscope magnification (product of the
objective magnification, ocular magnification, and additional magnifying components) must be sufficient to allow adequate
characterization of the smallest particles to be classified in the test specimen. The greatest numerical aperture of the objective
should be sought for each magnification range. Polarizing filters may be used in conjunction with suitable analyzers and retar-
dation plates. Color filters of relatively narrow spectral transmission should be used with achromatic objectives, are preferable
with apochromats, and are required for appropriate color rendition in photomicrography. Condensers corrected at least for
spherical aberration should be used in the microscope substage and with the lamp. The numerical aperture of the substage
condenser should match that of the objective under the conditions of use and is affected by the actual aperture of the con-
denser diaphragm and by the presence of immersion oils.
Adjustment—The precise alignment of all elements of the optical system and proper focusing are essential. The focusing of
the elements should be done in accordance with the recommendations of the microscope manufacturer. Critical axial align-
ment is recommended.
Illumination—A requirement for good illumination is a uniform and adjustable intensity of light over the entire field of
view; Köhler illumination is preferred. With colored particles, choose the color of the filters used so as to control the contrast
and detail of the image.
Visual Characterization—The magnification and numerical aperture should be sufficiently high to allow adequate resolu-
tion of the images of the particles to be characterized. Determine the actual magnification using a calibrated stage micrometer
to calibrate an ocular micrometer. Errors can be minimized if the magnification is sufficient that the image of the particle is at
least 10 ocular divisions. Each objective must be calibrated separately. To calibrate the ocular scale, the stage micrometer scale
and the ocular scale should be aligned. In this way, a precise determination of the distance between ocular divisions can
be made. Several different magnifications may be necessary to characterize materials having a wide particle size distribution.
Photographic Characterization—If particle size is to be determined by photographic methods, take care to ensure that
the object is sharply focused at the plane of the photographic emulsion. Determine the actual magnification by photograph-
ing a calibrated stage micrometer, using photographic film of sufficient speed, resolving power, and contrast. Exposure and
processing should be identical for photographs of both the test specimen and the determination of magnification. The appa-
rent size of a photographic image is influenced by the exposure, development, and printing processes as well as by the resolv-
ing power of the microscope.
Preparation of the Mount—The mounting medium will vary according to the physical properties of the test specimen.
Sufficient but not excessive contrast between the specimen and the mounting medium is required to ensure adequate detail of
the specimen edge. The particles should rest in one plane and be adequately dispersed to distinguish individual particles of
interest. Furthermore, the particles must be representative of the distribution of sizes in the material and must not be altered
during preparation of the mount. Care should be taken to ensure that this important requirement is met. Selection of the
mounting medium must include a consideration of the analyte solubility.
Crystallinity Characterization—The crystallinity of a material may be characterized to determine compliance with the crys-
tallinity requirement where stated in the individual monograph of a drug substance. Unless otherwise specified in the individu-
al monograph, mount a few particles of the specimen in mineral oil on a clean glass slide. Examine the mixture using a polariz-
ing microscope: the particles show birefringence (interference colors) and extinction positions when the microscope stage is
revolved.
Limit Test of Particle Size by Microscopy—Weigh a suitable quantity of the powder to be examined (for example, 10 to
100 mg), and suspend it in 10 mL of a suitable medium in which the powder does not dissolve, adding, if necessary, a wetting
agent. A homogeneous suspension of particles can be maintained by suspending the particles in a medium of similar or
matching density and by providing adequate agitation. Introduce a portion of the homogeneous suspension into a suitable
counting cell, and scan under a microscope an area corresponding to not less than 10 µg of the powder to be examined.
Count all the particles having a maximum dimension greater than the prescribed size limit. The size limit and the permitted
number of particles exceeding the limit are defined for each substance.
Particle Size Characterization—The measurement of particle size varies in complexity depending on the shape of the par-
ticle, and the number of particles characterized must be sufficient to ensure an acceptable level of uncertainty in the measured
parameters. Additional information on particle size measurement, sample size, and data analysis is available, for example, in
ISO 9276. For spherical particles, size is defined by the diameter. For irregular particles, a variety of definitions of particle size
exist. In general, for irregularly shaped particles, characterization of particle size must also include information on the type of
diameter measured as well as information on particle shape. Several commonly used measurements of particle size are defined
below (see Figure 1):
Feret's Diameter—The distance between imaginary parallel lines tangent to a randomly oriented particle and perpendicular
to the ocular scale.
Martin's Diameter—The diameter of the particle at the point that divides a randomly oriented particle into two equal projec-
ted areas.
Projected Area Diameter—The diameter of a circle that has the same projected area as the particle.
Length—The longest dimension from edge to edge of a particle oriented parallel to the ocular scale.
Width—The longest dimension of the particle measured at right angles to the length.
Particle Shape Characterization—For irregularly shaped particles, characterization of particle size must also include infor-
mation on particle shape. The homogeneity of the powder should be checked using appropriate magnification. The following
defines some commonly used descriptors of particle shape (see Figure 2):
INTRODUCTION
Many pharmaceutical substances are optically active in the sense that they rotate an incident plane of polarized light so that
the transmitted light emerges at a measurable angle to the plane of the incident light. This property is characteristic of some
crystals and of many pharmaceutical liquids or solutions of solids. Where the property is possessed by a liquid or by a solute in
solution, it is generally the result of the presence of one or more asymmetric centers, usually a carbon atom with four different
substituents. The number of optical isomers is 2^n, where n is the number of asymmetric centers. Polarimetry, the measurement
of optical rotation, of a pharmaceutical article may be the only convenient means for distinguishing optically active isomers
from each other and thus is an important criterion of identity and purity.
Substances that may show optical rotatory properties are chiral. Those that rotate light in a clockwise direction as viewed
towards the light source are dextrorotatory, or (+) optical isomers, and those that rotate light in a counterclockwise direction are
called levorotatory or (–) optical isomers. (The symbols d- and l-, formerly used to indicate dextro- and levorotatory isomers, are
no longer sanctioned owing to confusion with D- and L-, which refer to configuration relative to D-glyceraldehyde. The symbols R and S, and α and β, are also used to indicate configuration, the arrangement of atoms or groups of atoms in space.)
The physicochemical properties of nonsuperimposable chiral substances rotating plane polarized light in opposite directions
to the same extent, enantiomers, are identical, except for this property and in their reactions with other chiral substances.
Enantiomers often exhibit profound differences in pharmacology and toxicology, owing to the fact that biological receptors
and enzymes themselves are chiral. Many articles from natural sources, such as amino acids, proteins, alkaloids, antibiotics,
glycosides, and sugars, exist as chiral compounds. Synthesis of such compounds from nonchiral materials usually results in
equal amounts of the enantiomers, i.e., racemates. Racemates have a net null optical rotation, and their physical properties
may differ from those of the component enantiomers. Use of stereoselective or stereospecific synthetic methods or separation
of racemic mixtures can be used to obtain individual optical isomers.
Measurement of optical rotation is performed using a polarimeter.1 The general equation used in polarimetry is:

[α]_λ^t = 100α/(lc)

where [α]_λ^t is the specific rotation at wavelength λ and temperature t, α is the observed rotation in degrees (°), l is the pathlength in decimeters, and c is the concentration of the analyte in g per 100 mL. Thus, [α] is 100 times the measured value, in degrees (°), for a solution containing 1 g in 100 mL, measured in a cell having a pathlength of 1.0 dm under defined conditions of incident wavelength of light and temperature. For some Pharmacopeial articles, especially liquids such as essential oils, the optical rotation requirement is expressed in terms of the observed rotation, α, measured under conditions defined in the monograph.
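The general polarimetry equation can be sketched directly in code; the observed rotation below is a hypothetical reading for a solution of 1 g per 100 mL in a 1.0-dm cell.

```python
# Specific rotation: [alpha] = 100 * alpha / (l * c), where alpha is the observed
# rotation (degrees), l the pathlength (dm), and c the concentration (g/100 mL).
def specific_rotation(observed_deg, pathlength_dm, conc_g_per_100ml):
    """Specific rotation in degrees per (dm * g/100 mL)."""
    return 100.0 * observed_deg / (pathlength_dm * conc_g_per_100ml)

# Hypothetical reading of +0.665 degrees for 1 g/100 mL in a 1.0-dm cell
print(specific_rotation(0.665, 1.0, 1.0))  # -> 66.5
```

Doubling the pathlength or the concentration doubles the observed rotation but leaves the specific rotation unchanged, which is the point of the normalization.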
Historically, polarimetry was performed using an instrument where the extent of optical rotation is estimated by visual
matching of the intensity of split fields. For this reason, the D-line of the sodium lamp at the visible wavelength of 589 nm was
most often employed.2 Specific rotation determined at the D-line is expressed by the symbol [α]_D, and much of the data available are expressed in this form. Use of lower wavelengths, such as those available with the mercury
lamp lines isolated by means of filters of maximum transmittance at approximately 546, 436, 405, 365, and 325 nm in a pho-
toelectric polarimeter, has been found to provide advantages in sensitivity with a consequent reduction in the concentration of
the test compound. In general, the observed optical rotation at 436 nm is about double and at 365 nm about three times that
at 589 nm.2 Reduction in the concentration of the solute required for measurement may sometimes be accomplished by con-
version of the substance under test to one that has a significantly higher optical rotation. Optical rotation is also affected by
the solvent used for the measurement, and this is always specified.
It is now common practice to use other light sources, such as xenon or tungsten halogen, with appropriate filters, because
these may offer advantages of cost, long life, and broad wavelength emission range, over traditional light sources.
PROCEDURES
Specific Rotation
The reference Specific Rotation ⟨781S⟩ in a monograph signifies that specific rotation is to be calculated from observed optical rotations in the Test solution or Sample solution obtained as directed therein. Unless otherwise directed, measurements of
optical rotation are made in a 1.0-dm tube at 589 nm at 25° C.2 Where a photoelectric polarimeter is used, a single measure-
ment, corrected for the solvent blank, is made. Where a visual polarimeter is employed, the average of no fewer than five de-
terminations, corrected for the reading of the same tube with a solvent blank, is used. Temperature, which applies to the solu-
tion or the liquid under test, should be maintained within 0.5° of the stated value. Use the same cell for sample and blank.
Maintain the same angular orientation of the cell in each reading. Place the cell so that the light passes through it in the same
direction each time. Unless otherwise specified, specific rotation is calculated on the dried basis where Loss on Drying is speci-
fied in the monograph, or on the anhydrous basis where Water Determination is specified.
Optical rotation of solutions should be determined within 30 min of preparation. In the case of substances known to under-
go racemization or mutarotation, care should be taken to standardize the time between adding the solute to the solvent and
introduction of the solution into the polarimeter tube.
1 Suitable calibrators are available from the Office of Standard Reference Materials, National Institute of Standards and Technology (NIST), Gaithersburg, MD
20899, as current lots of Standard Reference Materials, Dextrose and Sucrose. Alternatively, calibration may be checked using a Polarization Reference Standard,
which consists of a plate of quartz mounted in a holder perpendicular to the light path. These standards are available, traceable to NIST, by contacting Rudolph
Research Analytical, located at 55 Newburgh Road, Hackettstown, NJ 07840, USA.
2 All references of wavelengths are in vacuum. The sodium D-line is 589.44 nm in vacuum and 589.3 nm in air.
Angular Rotation
The reference Angular Rotation ⟨781A⟩ in a monograph signifies, unless otherwise directed, that the optical rotation of the neat liquid is measured in a 1.0-dm tube at 589 nm at 25° C, corrected for the reading of the dry empty tube.2
Sieving is one of the oldest methods of classifying powders and granules by particle size distribution. When using a woven
sieve cloth, the sieving will essentially sort the particles by their intermediate size dimension (i.e., breadth or width). Mechanical sieving is most suitable where the majority of the particles are larger than about 75 µm. For smaller particles, the light
weight provides insufficient force during sieving to overcome the surface forces of cohesion and adhesion that cause the parti-
cles to stick to each other and to the sieve, and thus cause particles that would be expected to pass through the sieve to be
retained. For such materials, other means of agitation such as air-jet sieving or sonic sifting may be more appropriate. Nevertheless, sieving can sometimes be used for some powders or granules having median particle sizes smaller than 75 µm where
the method can be validated. In pharmaceutical terms, sieving is usually the method of choice for classification of the coarser
grades of single powders or granules. It is a particularly attractive method in that powders and granules are classified only on
the basis of particle size, and in most cases the analysis can be carried out in the dry state.
Among the limitations of the sieving method are the need for an appreciable amount of sample (normally at least 25 g, de-
pending on the density of the powder or granule, and the diameter of test sieves) and difficulty in sieving oily or other cohe-
sive powders or granules that tend to clog the sieve openings. The method is essentially a two-dimensional estimate of size
because passage through the sieve aperture is frequently more dependent on maximum width and thickness than on length.
This method is intended for estimation of the total particle size distribution of a single material. It is not intended for deter-
mination of the proportion of particles passing or retained on one or two sieves.
Estimate the particle size distribution as described under Dry Sieving Method, unless otherwise specified in the individual
monograph. Where difficulty is experienced in reaching the endpoint (i.e., material does not readily pass through the sieves)
or when it is necessary to use the finer end of the sieving range (below 75 µm), serious consideration should be given to the
use of an alternative particle-sizing method.
Sieving should be carried out under conditions that do not cause the test sample to gain or lose moisture. The relative hu-
midity of the environment in which the sieving is carried out should be controlled to prevent moisture uptake or loss by the
sample. In the absence of evidence to the contrary, analytical test sieving is normally carried out at ambient humidity. Any
special conditions that apply to a particular material should be detailed in the individual monograph.
Principles of Analytical Sieving—Analytical test sieves are constructed from a woven-wire mesh, which is of simple weave
that is assumed to give nearly square apertures and is sealed into the base of an open cylindrical container. The basic analytical
method involves stacking the sieves on top of one another in ascending degrees of coarseness, and then placing the test pow-
der on the top sieve.
The nest of sieves is subjected to a standardized period of agitation, and then the weight of material retained on each sieve
is accurately determined. The test gives the weight percentage of powder in each sieve size range.
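The conversion from retained weights to weight percentages per sieve size range can be sketched as follows; the apertures and weights are hypothetical, and the pan collects the fines passing the finest sieve.

```python
# Weight percentage of powder in each sieve size range (hypothetical nest).
def weight_percentages(retained_g):
    """Map each sieve (or pan) to its percentage of the total retained weight."""
    total = sum(retained_g.values())
    return {aperture: 100.0 * w / total for aperture, w in retained_g.items()}

# Hypothetical retained weights (g), keyed by sieve aperture in micrometers
retained = {850: 2.1, 425: 11.6, 250: 7.4, 150: 2.9, "pan": 1.0}
for aperture, pct in weight_percentages(retained).items():
    print(f"{aperture}: {pct:.1f}%")
```

The percentages necessarily sum to 100 before reconciliation; material losses during handling are accounted for separately against the original specimen weight.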
This sieving process for estimating the particle size distribution of a single pharmaceutical powder is generally intended for
use where at least 80% of the particles are larger than 75 mm. The size parameter involved in determining particle size distribu-
tion by analytical sieving is the length of the side of the minimum square aperture through which the particle will pass.
TEST SIEVES
Test sieves suitable for pharmacopeial tests conform to the most current edition of International Organization for Standardization Specification ISO 3310-1: Test Sieves—Technical Requirements and Testing (see Table 1). Unless otherwise specified in the monograph, use those ISO sieves listed as principal sizes in Table 1 and recommended in the particular region.
Table 1. Sizes of Standard Sieve Series in Range of Interest

ISO Nominal Aperture                             US Sieve   Recommended USP    European    Japan
Principal Sizes   Supplementary Sizes            No.        Sieves (microns)   Sieve No.   Sieve No.
R 20/3            R 20         R 40/3
11.20 mm          11.20 mm     11.20 mm                     11200
                  10.00 mm
                               9.50 mm
Sieves are selected to cover the entire range of particle sizes present in the test specimen. A nest of sieves having a √2 progression of the area of the sieve openings is recommended. The nest of sieves is assembled with the coarsest screen at the top
and the finest at the bottom. Use micrometers or millimeters in denoting test sieve openings. [NOTE—Mesh numbers are provi-
ded in the table for conversion purposes only.] Test sieves are made from stainless steel or, less preferably, from brass or other
suitable nonreactive wire.
Calibration and recalibration of test sieves is in accordance with the most current edition of ISO 3310-1. Sieves should be
carefully examined for gross distortions and fractures, especially at their screen frame joints, before use. Sieves may be calibra-
ted optically to estimate the average opening size, and opening variability, of the sieve mesh. Alternatively, for the evaluation
of the effective opening of test sieves in the size range of 212 to 850 µm, Standard Glass Spheres are available. Unless otherwise specified in the individual monograph, perform the sieve analysis at controlled room temperature and at ambient relative
humidity.
Cleaning Test Sieves—Ideally, test sieves should be cleaned using only an air jet or a liquid stream. If some apertures remain
blocked by test particles, careful gentle brushing may be used as a last resort.
Test Specimen—If the test specimen weight is not given in the monograph for a particular material, use a test specimen
having a weight between 25 and 100 g, depending on the bulk density of the material, and test sieves having a 200-mm di-
ameter. For 76-mm sieves, the amount of material that can be accommodated is approximately 1/7th that which can be ac-
commodated on a 200-mm sieve. Determine the most appropriate weight for a given material by test sieving accurately
weighed specimens of different weights, such as 25, 50, and 100 g, for the same time period on a mechanical shaker. [NOTE—
If the test results are similar for the 25-g and 50-g specimens, but the 100-g specimen shows a lower percentage through the
finest sieve, the 100-g specimen size is too large.] Where only a specimen of 10 to 25 g is available, smaller diameter test sieves
conforming to the same mesh specifications may be substituted, but the endpoint must be redetermined. The use of test sam-
ples having a smaller mass (e.g., down to 5 g) may be needed. For materials with low apparent particle density, or for materi-
als mainly comprising particles with a highly isodiametrical shape, specimen weights below 5 g for a 200-mm screen may be
necessary to avoid excessive blocking of the sieve. During validation of a particular sieve analysis method, it is expected that
the problem of sieve blocking will have been addressed.
If the test material is prone to picking up or losing significant amounts of water with varying humidity, the test must be
carried out in an appropriately controlled environment. Similarly, if the test material is known to develop an electrostatic
charge, careful observation must be made to ensure that such charging is not influencing the analysis. An antistatic agent,
such as colloidal silicon dioxide and/or aluminum oxide, may be added at a 0.5 percent (m/m) level to minimize this effect. If
both of the above effects cannot be eliminated, an alternative particle-sizing technique must be selected.
Agitation Methods—Several different sieve and powder agitation devices are commercially available, all of which may be
used to perform sieve analyses. However, the different methods of agitation may give different results for sieve analyses and
endpoint determinations because of the different types and magnitude of the forces acting on the individual particles under
test. Methods using mechanical agitation or electromagnetic agitation, and that can induce either a vertical oscillation or a
horizontal circular motion, or tapping or a combination of both tapping and horizontal circular motion are available. Entrain-
ment of the particles in an air stream may also be used. The results must indicate which agitation method was used and the
agitation parameters used (if they can be varied), because changes in the agitation conditions will give different results for the
sieve analysis and endpoint determinations, and may be sufficiently different to give a failing result under some circumstances.
Endpoint Determination—The test sieving analysis is complete when the weight on any of the test sieves does not change
by more than 5% or 0.1 g (10% in the case of 76-mm sieves) of the previous weight on that sieve. If less than 5% of the total
specimen weight is present on a given sieve, the endpoint for that sieve is increased to a weight change of not more than 20%
of the previous weight on that sieve.
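The endpoint rule above can be sketched in code. One detail is an interpretation assumed here: the 0.1-g figure is treated as an absolute allowance that applies when it exceeds 5% of the previous weight, and the 20% relaxation applies to sieves carrying less than 5% of the total specimen. All weights are hypothetical.

```python
# Hypothetical endpoint check for an analytical sieving run.
def endpoint_reached(previous_g, current_g, total_specimen_g):
    """True when every sieve's weight change is within its allowance."""
    for prev, curr in zip(previous_g, current_g):
        if prev < 0.05 * total_specimen_g:
            allowance = 0.20 * prev            # light sieve: NMT 20% change
        else:
            allowance = max(0.05 * prev, 0.1)  # NMT 5%, or 0.1 g if greater
        if abs(curr - prev) > allowance:
            return False
    return True

prev = [12.0, 8.0, 4.0, 1.0]  # weights (g) after the previous agitation period
curr = [12.3, 7.9, 4.1, 1.1]  # weights after a further 5 min; 25-g specimen
print(endpoint_reached(prev, curr, 25.0))  # all changes within allowance -> True
```

If any sieve fails the check, the nest is agitated for a further period and reweighed, as described under the Dry Sieving Method.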
If more than 50% of the total specimen weight is found on any one sieve, unless this is indicated in the monograph, the test
should be repeated, but with the addition to the sieve nest of a more coarse sieve, intermediate between that carrying the
excessive weight and the next coarsest sieve in the original nest, i.e., addition of the ISO series sieve omitted from the nest of
sieves.
SIEVING METHODS
Mechanical Agitation
Dry Sieving Method—Tare each test sieve to the nearest 0.1 g. Place an accurately weighed quantity of test specimen on the
top (coarsest) sieve, and replace the lid. Agitate the nest of sieves for 5 minutes. Then carefully remove each from the nest
without loss of material. Reweigh each sieve, and determine the weight of material on each sieve. Determine the weight of
material in the collecting pan in a similar manner. Reassemble the nest of sieves, and agitate for 5 minutes. Remove and weigh
each sieve as previously described. Repeat these steps until the endpoint criteria are met (see Endpoint Determination under
Test Sieves). Upon completion of the analysis, reconcile the weights of material. Total losses must not exceed 5% of the weight
of the original test specimen.
Repeat the analysis with a fresh specimen, but using a single sieving time equal to that of the combined times used above.
Confirm that this sieving time conforms to the requirements for endpoint determination. When this endpoint has been valida-
ted for a specific material, then a single fixed time of sieving may be used for future analyses, providing the particle size distri-
bution falls within normal variation.
If there is evidence that the particles retained on any sieve are aggregates rather than single particles, the use of mechanical
dry sieving is unlikely to give good reproducibility, and a different particle size analysis method should be used.
Air Jet and Sonic Sifter Sieving—Different types of commercial equipment that use a moving air current are available for siev-
ing. A system that uses a single sieve at a time is referred to as air jet sieving. It uses the same general sieving methodology as
that described under the Dry Sieving Method, but with a standardized air jet replacing the normal agitation mechanism. It re-
quires sequential analyses on individual sieves starting with the finest sieve to obtain a particle size distribution. Air jet sieving
often includes the use of finer test sieves than those used in ordinary dry sieving. This technique is more suitable where only
oversize or undersize fractions are needed.
In the sonic sifting method, a nest of sieves is used, and the test specimen is carried in a vertically oscillating column of air
that lifts the specimen and then carries it back against the mesh openings at a given number of pulses per minute. It may be
necessary to lower the sample amount to 5 g when sonic sifting is employed.
The air jet sieving and sonic sieving methods may be useful for powders or granules when mechanical sieving techniques are
incapable of giving a meaningful analysis.
These methods are highly dependent upon proper dispersion of the powder in the air current. This requirement may be
hard to achieve if the method is used at the lower end of the sieving range (i.e., below 75 µm), when the particles tend to be
more cohesive, and especially if there is any tendency for the material to develop an electrostatic charge. For the above rea-
sons, endpoint determination is particularly critical, and it is very important to confirm that the oversize material comprises sin-
gle particles and is not composed of aggregates.
INTERPRETATION
The raw data must include the weight of test specimen, the total sieving time, the precise sieving methodology, and the
set values for any variable parameters, in addition to the weights retained on the individual sieves and in the pan. It may be
convenient to convert the raw data into a cumulative weight distribution, and if it is desired to express the distribution in
terms of a cumulative weight undersize, the range of sieves used should include a sieve through which all the material passes.
If there is evidence on any of the test sieves that the material remaining on it is composed of aggregates formed during the
sieving process, the analysis is invalid.
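As an illustration of converting the raw retained weights into a cumulative weight-undersize distribution as described above (the function name and example weights are hypothetical):

```python
def cumulative_undersize(retained_g, pan_g):
    """retained_g: weights retained on each sieve, ordered coarsest first.
    Returns percent of total specimen finer than each sieve opening."""
    total = sum(retained_g) + pan_g
    undersize, passed = [], total
    for w in retained_g:
        passed -= w                        # everything finer passed this sieve
        undersize.append(100.0 * passed / total)
    return undersize

# Example: a 10-g specimen fully recovered across four sieves and the pan
print(cumulative_undersize([1.0, 4.0, 3.0, 1.5], pan_g=0.5))
```

Expressing the data this way presumes the nest included a sieve through which all the material passes, as the text notes.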
⟨791⟩ pH
INTRODUCTION
For compendial purposes, pH is defined as the value given by a suitable, properly standardized, potentiometric sensor and
measuring system. [NOTE—The measuring system has traditionally been referred to as the “pH meter”. While the pH meter is
still in common use, the measuring system can also be embedded inside the pH sensor, and the pH signal can be transmitted
digitally to an external device such as a computer, Programmable Logic Controller (PLC), Distributed Control System (DCS),
data acquisition system, terminal, or other microprocessor-controlled device.] By definition, pH is equal to −log10[aH+] where
aH+ is the activity of the hydrogen (H+) or hydronium ion (H3O+), and the hydrogen ion activity very closely approximates the
hydrogen ion concentration.
The practical pH scale is defined:
pH = pHs + [(E − ES)/k]
E = measured potential where the galvanic cell contains the solution under test (pH)
ES = measured potential where the galvanic cell contains the appropriate buffer solution for standardization (pHs)
k = change in potential per unit change in pH and is derived from the Nernst equation (as follows)
k = ln(10) × (RT/nF)
R = gas constant, 8.314 J/(mol·K)
T = absolute temperature (K)
n = number of electrons per half-reaction
F = Faraday constant, 96,485 C/mol
For n = 1, the resulting value of k is [0.05916 + 0.0001984(T − 25)] volts at temperature T expressed in °C. Values of k from
15° to 35° are provided in Table 1.
Table 1. Values of k for Various Temperatures
Temperature (°C)    k (V)
15.00               0.05718
20.00               0.05817
25.00               0.05916
30.00               0.06016
35.00               0.06115
Values of k at other temperatures can be determined from the equation above. For practical purposes, values of k are deter-
mined from pH sensor calibration.
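The relationships above can be sketched in a few lines; the function names below are illustrative, and the constants are those given in the definitions:

```python
import math

R, F = 8.314, 96485.0   # gas constant, J/(mol*K); Faraday constant, C/mol

def nernst_k(temp_c, n=1):
    """k = ln(10) * RT / nF, in volts per pH unit."""
    return math.log(10) * R * (temp_c + 273.15) / (n * F)

def practical_ph(e, e_s, ph_s, temp_c):
    """Practical pH scale: pH = pHs + (E - Es)/k, potentials in volts."""
    return ph_s + (e - e_s) / nernst_k(temp_c)

print(round(nernst_k(25.0), 5))   # matches the Table 1 value at 25 deg C
```

In practice, as the text notes, k is obtained from sensor calibration rather than computed from the constants.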
pH MEASUREMENT SYSTEM
The measurement system consists of: (1) a measuring electrode sensitive to hydrogen-ion activity, typically a glass electrode,
though other electrode types are possible, (2) a suitable reference electrode, for example, a silver–silver chloride electrode, and
(3) a voltage measurement system with an input resistance high enough to accommodate the high input impedance of the
pH sensor. The measuring and reference electrodes may be separate or combined. The voltage measurement system may be separated
from the pH sensor or integrated into the sensor. For most applications, a temperature measurement will be necessary for
compensation of the Nernst temperature influence described above. A temperature device may be embedded into the pH sen-
sor, or an external temperature device may be used.
INSTRUMENT REQUIREMENTS
The measurement system shall be capable of performing a 2-point pH calibration (see below). The accuracy of the pH meas-
urement system is described in the Calibration section. The resolution of the pH measurement system shall be at least 0.01 pH.
The instrument shall be capable of temperature-compensating the pH sensor measurement to convert the millivolt signal to
pH units at any temperature, either automatically using a temperature device built into the sensor system or by manual entry
of the sample temperature into the measurement system. The accuracy of the temperature measurement system shall be ±1°
C. The resolution of the temperature measurement system shall be at least 0.1° C. Lab-based pH measurements are typically
performed at 25 ± 2° C unless otherwise specified in the individual monograph or herein. However, temperatures outside this
range are acceptable if samples are more conveniently prepared at alternative temperatures. Examples of non-lab-based meas-
urements include test samples inside process pipes, vessels, tanks, and other non-standard processing conditions. [NOTE—The
definitions of pH, the pH scale, and the values assigned to the buffer solutions for standardization are for the purpose of estab-
lishing a practical, operational system so that results may be compared between laboratories. The measured pH values do not
necessarily correspond exactly to those obtained by the definition, pH = −log aH+; rather, the values obtained are closely related
to the activity of the hydrogen-ion in aqueous solutions.] [NOTE—Where a pH measurement system is standardized by use of
an aqueous buffer and then used to measure the “pH” of nonaqueous systems, the ionization constant of the acid or base, the
dielectric constant of the medium, the liquid-junction potential (which may give rise to errors of approximately 1 pH unit), and
the hydrogen-ion response of the glass electrode are all changed. For these reasons, the values so obtained with solutions that
are only partially aqueous in character can be regarded only as apparent pH values.]
Buffer solutions for standardization are prepared as directed in Table 2 (see footnote 1). Buffer salts of requisite purity can be obtained from
the National Institute of Standards and Technology, other national authorities, or other suppliers. Buffer solutions should be
stored in appropriate containers that ensure stability of the pH through the expiry date and are fitted with a tight closure.
Buffer solutions with pH greater than 11 should be stored in containers that resist or reduce carbon dioxide intrusion, which
would lower the pH of the buffer, and should typically be prepared and used fresh unless carbon dioxide ingress is restricted.
Buffer solutions with pH lower than 11 should typically be prepared at intervals not to exceed 3 months. All buffer solutions
should be prepared using Purified Water. Table 2 indicates the pH of the buffer solutions as a function of temperature. The
instructions presented here are intended for the preparation of solutions having the
designated molal (m) concentrations. However, in order to facilitate their preparation, the instructions are given in molarity.
The difference in concentration between molality and molarity preparations for these buffer solutions is less than 1%, and the
pH difference is negligible. Calibration using buffer solutions shall be done in the temperature range of the buffers listed in
Table 2. [NOTE—The Nernst temperature compensation corrects only for the electrode millivolt output change with tempera-
ture, not the actual pH change of the buffer solution with temperature which is unique for each buffer.] Features such as auto-
matic buffer recognition or buffer pH–temperature correction are available for convenience in accommodating the tempera-
ture influence on buffer solutions. The pH–temperature response can be determined from the values in Table 2.
Table 2. pH Values of Buffer Solutions for Standardization
Columns, left to right:
(A) Potassium Tetraoxalate, 0.05 m
(B) Potassium Hydrogen Tartrate, Saturated at 25°
(C) Potassium Dihydrogen Citrate, 0.05 M
(D) Potassium Biphthalate, 0.05 m
(E) Equimolal Phosphate, 0.05 m
(F) Potassium Dihydrogen Phosphate, 0.0087 M, and Disodium Hydrogen Phosphate, 0.0303 M
(G) Sodium Tetraborate, 0.01 m
(H) Sodium Carbonate, 0.025 M, and Sodium Bicarbonate, 0.025 M
(I) Calcium Hydroxide, Saturated at 25°

Temp (°C)   (A)     (B)      (C)      (D)     (E)      (F)      (G)      (H)      (I)
10          1.67    —        —        4.00    6.92     —        9.33     —        13.00
15          1.67    —        3.80     4.00    6.90     7.45     9.28     10.12    12.81
20          1.68    —        3.79     4.00    6.88     7.43     9.23     10.06    12.63
25          1.68    3.56     3.78     4.01    6.86     7.41     9.18     10.01    12.45
30          1.68    3.55     3.77     4.02    6.85     7.40     9.14     9.97     12.29
35          1.69    3.55     3.76     4.02    6.84     7.39     9.10     9.93     12.13
40          1.69    —        —        4.04    6.84     —        9.07     —        11.98
45          1.70    —        —        4.05    6.83     —        9.04     —        11.84
50          1.71    —        —        4.06    6.83     —        9.01     —        11.71
55          1.72    —        —        4.08    6.83     —        8.99     —        11.57
60          1.72    —        —        4.09    6.84     —        8.96     —        11.45
ΔpH/Δ°C     0.0010  −0.0014  −0.0022  0.0018  −0.0016  −0.0028  −0.0074  −0.0096  −0.0310
Preparation of alternative volumes at the same concentrations to those indicated below is acceptable.
Potassium tetraoxalate, 0.05 m: Dissolve 12.61 g of KH3(C2O4)2 · 2H2O, and dilute with water to make 1000.0 mL.
1 Commercially available buffer solutions for pH measurement system standardization, standardized by methods traceable to the National Institute of Standards
and Technology (NIST) or other national authorities and labeled with a pH value accurate to 0.02 pH unit, may be used. Solutions prepared from ACS reagent
grade materials or other suitable materials may be used, provided the pH of the resultant solution is the same as that of the solution prepared from the NIST (or
other national authorities) certified material. Buffer solutions with pH greater than 12 should be used immediately, or should be prepared using freshly boiled
water and stored under conditions that minimize carbon dioxide absorption and ingress.
Official text. Reprinted from First Supplement to USP38-NF33.
Potassium hydrogen tartrate, saturated at 25°: Add C4H5KO6 to water until saturation is exceeded at 25°. Then filter or
decant.
Potassium dihydrogen citrate, 0.05 M: Dissolve 11.41 g of C6H7KO7, and dilute with water to make 1000.0 mL.
Potassium biphthalate, 0.05 m: Dissolve 10.12 g of KHC8H4O4, previously dried at 110° for 1 h, and dilute with water to
make 1000.0 mL.
Equimolal phosphate, 0.05 m: Dissolve 3.53 g of Na2HPO4 and 3.39 g of KH2PO4, each previously dried at 120° for 2 h,
and dilute with water to make 1000.0 mL.
Potassium dihydrogen phosphate, 0.0087 M, and disodium hydrogen phosphate, 0.0303 M: Dissolve 1.18 g of
KH2PO4 and 4.30 g Na2HPO4, both dried for 2 h at 120 ± 2°, and dilute with water to make 1000.0 mL.
Sodium tetraborate, 0.01 m: Dissolve 3.80 g of Na2B4O7 · 10H2O, and dilute with water to make 1000.0 mL. Protect
from absorption of carbon dioxide.
Sodium carbonate, 0.025 M, and sodium bicarbonate, 0.025 M: Dissolve 2.64 g of Na2CO3 and 2.09 g NaHCO3, and
dilute with water to make 1000.0 mL.
Calcium hydroxide, saturated at 25°: Add Ca(OH)2 to water until saturation is exceeded at 25°. Use water that has been
recently boiled and protected from the atmosphere to limit carbon dioxide absorption. Then filter or decant.
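For temperatures between those tabulated in Table 2, the calibration steps below direct linear interpolation of the buffer pH. A minimal sketch, using the equimolal phosphate column of Table 2 as example data (the dictionary and function names are illustrative):

```python
# pH of the 0.05 m equimolal phosphate buffer vs temperature (from Table 2)
EQUIMOLAL_PHOSPHATE = {10: 6.92, 15: 6.90, 20: 6.88, 25: 6.86, 30: 6.85,
                       35: 6.84, 40: 6.84, 45: 6.83, 50: 6.83, 55: 6.83, 60: 6.84}

def buffer_ph_at(table, temp_c):
    """Linear interpolation of buffer pH between tabulated temperatures."""
    if temp_c in table:
        return table[temp_c]
    lo = max(t for t in table if t < temp_c)   # nearest tabulated value below
    hi = min(t for t in table if t > temp_c)   # nearest tabulated value above
    return table[lo] + (temp_c - lo) / (hi - lo) * (table[hi] - table[lo])

print(buffer_ph_at(EQUIMOLAL_PHOSPHATE, 22.5))  # midway between 6.88 and 6.86
```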
CALIBRATION
Because of variations in the nature and operation of the available pH measurement systems, it is not practical to provide
universal directions for the standardization of the measurement system. However, the general principles to be followed are set
forth in the following paragraphs. Examine the electrodes, especially the reference electrode and electrolyte level, if a liquid
electrolyte is used. If necessary, replenish electrolyte supply, and observe other precautions indicated by the instrument and
electrode manufacturers.
The calibration or verification of the pH measurement system should be performed periodically. The historical performance of
the measurement system, the criticality of the pH measurement, the maintenance of the pH sensor, and the frequency of
measurement operation are used to determine the frequency of the calibration/verification.
If the pH of the buffer is sensitive to ambient carbon dioxide, then use Purified Water that has been recently boiled, and
subsequently stored in a container designed to minimize ingress of carbon dioxide.
1. To standardize the pH measurement system, select three buffer solutions for standardization, preferably from those given
in Table 2, such that the expected pH of the material under test falls within their range. Two of the buffers are used for
the calibration process, and the third buffer is used for verification. The value of the verification buffer should be between
the calibration buffers. If the operational range of the pH sensor is beyond the pH range of the buffer solutions in Table 2,
then either 1) select two nearby pH buffers from Table 2 or 2) select one from Table 2 and another documented prepared
buffer that is outside the range.
2. Rinse the pH sensor several times with water, then with the first buffer solution.
3. Immerse the pH sensor in the first buffer solution at a temperature within the range of Table 2.
4. If automatic temperature measurement and compensation is not included in the measuring system, manually enter the
temperature of the buffer and pH value of the buffer solution at that temperature into the instrument. For temperatures
not listed in Table 2, use linear interpolation to determine the pH value as a function of temperature.
5. Initiate the 2-point calibration sequence with the first buffer according to manufacturer's instructions.
6. Remove the pH sensor from the first buffer, rinse the electrode(s) with water, and then rinse with the second buffer solution.
7. Immerse the pH sensor in the second buffer at a temperature within the range of Table 2.
8. If automatic temperature measurement and compensation is not included in the measuring system, manually enter the
temperature of the buffer and pH value of the buffer solution at that temperature into the instrument.
9. Continue the 2-point calibration sequence with the second buffer according to manufacturer's instructions.
10. After completion of the 2-point calibration process, verify that the pH slope and offset are within acceptable limits.
Typical acceptance criteria are a slope of 90%–105% and an offset of ±30 mV (about 0.5 pH units at 25°). Depending
on the pH instrumentation, the pH slope and offset may be determined in software or by manual methods. If these
parameters are not within acceptable limits, the sensor should be properly cleaned, replenished, serviced, or
replaced, and the 2-point calibration process shall be repeated.
11. Remove the pH sensor from the second buffer, rinse thoroughly with water, and then rinse with the verification buffer.
12. Immerse the pH sensor in the verification buffer at a temperature within the range of Table 2.
13. If automatic temperature measurement and compensation is not included in the measuring system, manually enter the
temperature of the buffer and pH value of the buffer solution at that temperature into the instrument.
14. The pH reading shall be within ±0.05 pH of the value in Table 2 at the buffer solution temperature.
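The slope and offset check in step 10 can be illustrated as follows; the electrode readings, buffer values, and function names are assumed for the example, not prescribed by the text:

```python
import math

def two_point_calibration(e1_mv, ph1, e2_mv, ph2, temp_c=25.0):
    """Slope (as % of the theoretical Nernst value) and offset (mV at pH 7)
    from millivolt readings in two standardization buffers."""
    k_mv = 1000.0 * math.log(10) * 8.314 * (temp_c + 273.15) / 96485.0
    slope_mv = (e1_mv - e2_mv) / (ph2 - ph1)    # glass electrode: mV falls as pH rises
    offset_mv = e1_mv + slope_mv * (ph1 - 7.0)  # extrapolated reading at pH 7
    return 100.0 * slope_mv / k_mv, offset_mv

def calibration_acceptable(slope_pct, offset_mv):
    """Typical acceptance window named in step 10."""
    return 90.0 <= slope_pct <= 105.0 and abs(offset_mv) <= 30.0
```

For example, readings of 177.0 mV in the pH 4.01 buffer and 8.3 mV in the pH 6.86 buffer at 25° yield a slope near 100% and an offset near 0 mV, which passes.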
OPERATION
All test samples should be prepared using Purified Water, unless otherwise specified in the monograph. All test measurements
should use manual or automated Nernst temperature compensation.
1. Prepare the test material according to requirements in the monograph or according to specific procedures. If the pH of
the test sample is sensitive to ambient carbon dioxide, then use Purified Water that has been recently boiled, and subse-
quently stored in a container designed to minimize ingress of carbon dioxide.
2. Rinse the pH sensor with water, then with a few portions of the test material.
3. Immerse the pH sensor into the test material and read the pH value and temperature.
In all pH measurements, allow sufficient time for stabilization of the temperature and pH measurement.
Diagnostic functions such as glass or reference electrode resistance measurement may be available to determine equipment
deficiencies. Refer to electrode supplier for diagnostic tools to assure proper electrode function.
Where approximate pH values suffice, indicators and test papers (see Indicators and Indicator Test Papers, in the section Re-
agents, Indicators, and Solutions) may be suitable.
For a discussion of buffers, and for the composition of standard buffer solutions called for in compendial tests and assays, see
Buffer Solutions in the section Solutions. This referenced section is not intended to replace the use of the pH calibration buffers
in Table 2.
⟨801⟩ POLAROGRAPHY
Polarography is an electrochemical method of analysis based on the measurement of the current flow resulting from the
electrolysis of a solution at a polarizable microelectrode, as a function of an applied voltage. The polarogram (see Figure 1)
obtained by this measurement provides qualitative and quantitative information on electro-reducible and electro-oxidizable
substances. The normal concentration range for substances being analyzed is from 10^−2 molar to 10^−5 molar.
In direct current (dc) polarography, the microelectrode is a dropping mercury electrode (DME) consisting of small reprodu-
cible drops of mercury flowing from the orifice of a capillary tube connected to a mercury reservoir. A saturated calomel elec-
trode (SCE) with a large surface area is the most commonly employed reference electrode. As the voltage applied to the cell
increases, only a very small residual current flows until the substance under assay undergoes reduction or oxidation. Then the
current increases, at first gradually, then almost linearly with voltage, and it gradually reaches a limiting value as is shown in
Figure 1.
Fig. 1. Typical Polarogram Showing Change in Current Flow with Increasing Potential Applied to the Dropping Mercury
Electrode.
On the initial rising portion of the polarographic wave, the increased flow of current results in a decrease in the concentration
of the electro-active species at the electrode surface. As the voltage and current increase, the concentration of the reactive
species decreases further to a minimal value at the electrode surface. The current is then limited by the rate at which the react-
ing species can diffuse from the bulk of the solution to the surface of the microelectrode. The final current rise is caused by the
reaction of the supporting electrolyte. This large concentration of electrolyte is inert within the potential range used in the
analysis, and it prevents the reactive species from reaching the electrode by electrical migration, thus assuring that the limiting
current is diffusion-controlled.
Since, in the case of the DME, the electrode surface is being constantly renewed in a cyclic fashion, the current increases
from a small value as the drop begins to form to a maximum value as the drop falls. By the use of a suitable recorder to meas-
ure the current, the characteristic saw-toothed record is obtained. The limiting current is the sum of the residual and the diffu-
sion currents. The residual current is subtracted from the limiting current to give the wave height.
Ilkovic Equation—The linear relationship between the diffusion current (id) and the concentration of electro-active species
is shown by the Ilkovic equation:
id = 708 n D^1/2 C m^2/3 t^1/6
in which id is the maximum current in microamperes; n is the number of electrons required per molecule of electro-active sub-
stance; D is its diffusion coefficient, in square cm per second; C is the concentration, in millimoles per L; m is the rate of mer-
cury flow from the DME, in mg per second; and t is the drop time, in seconds.
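A direct transcription of the Ilkovic equation (the function name and example values are illustrative):

```python
def ilkovic_current(n, d_cm2_per_s, c_mmol_per_l, m_mg_per_s, t_s, maximum=True):
    """Diffusion current in microamperes. The coefficient 708 gives the
    maximum current over the drop life; 607 gives the average current
    (see the discussion of damped recorders below)."""
    coeff = 708 if maximum else 607
    return (coeff * n * d_cm2_per_s ** 0.5 * c_mmol_per_l
            * m_mg_per_s ** (2 / 3) * t_s ** (1 / 6))

# e.g. n = 2, D = 7.2e-6 cm2/s, C = 1 mmol/L, m = 2 mg/s, t = 4 s
print(round(ilkovic_current(2, 7.2e-6, 1.0, 2.0, 4.0), 2))
```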
Modern polarographs are equipped with recorders capable of following the current during the latter portion of the drop life;
consequently, the maximum of the oscillations is the measure of the current. When the current is measured only at the end of
the drop life, the technique is termed sampled dc polarography. In this case, only the maximum currents are recorded and
oscillations due to drop growth are not observed.
For instruments equipped with galvanometers to measure the current or recorders operated in a damped mode, the saw-
toothed waves correspond to oscillations about the average current. In the latter case, the average of the oscillations is the
measure of the current. For polarograms obtained in this manner, the id given by the Ilkovic equation is the average current in
microamperes observed during the life of the drop, when the coefficient 708 is replaced by 607.
Control of the Diffusion Current—The Ilkovic equation identifies the variables that must be controlled to ensure that the
diffusion current is directly proportional to the concentration of electro-active material. At 25° the diffusion coefficients for
aqueous solutions of many ions and organic molecules increase 1% to 2% per degree rise in temperature. Thus the tempera-
ture of the polarographic cell must be controlled to within ±0.5°. The quantities m and t depend upon the dimensions of the
capillary and the height of the mercury column above the electrode. Although results obtained with different capillaries can be
compared if the product m^2/3 t^1/6 is known, it is advisable to use the same capillary with a constant head of mercury during a
series of analyses. The diffusion current is proportional to the square root of the height of the mercury column. A mercury
reservoir with a diameter greater than 4 cm prevents any significant drop in the mercury level during a series of runs.
The capillary for the DME has a bore of approximately 0.04 mm and a length of 6 cm to 15 cm. The height of the mercury
column, measured from the tip of the capillary to the top of the mercury pool, ranges from 40 cm to 80 cm. The exact length
of the capillary and the height of the mercury column are adjusted to give a drop-time of between 3 and 5 seconds at open
circuit with the capillary immersed in the test solution.
Equipment is available that allows controlled drop-times of fractions of a second to several seconds. As detail within a po-
larogram is related to the number of drops delivered during a given potential change, such short drop-times allow more rapid
recording of the polarogram.
The current flowing through the test solution during the recording of a polarogram is in the microampere range. Thus, the
current flow produces negligible changes in the test solution and several polarograms can be run on the same test solution
without significant differences.
Half-wave Potential—The half-wave potential (E1/2) occurs at the point on the polarogram one-half the distance between
the residual current and the limiting current plateau. This potential is characteristic of the electro-active species and is largely
independent of its concentration or the capillary used to obtain the wave. It is dependent upon the solution composition and
may change with variations in the pH or in the solvent system or with the addition of complexing agents. The half-wave po-
tential thus serves as a criterion for the qualitative identification of a substance.
The potential of the DME is equal to the applied voltage versus the reference electrode after correction for the iR drop (the
potential needed to pass the current, i, through a solution of resistance R). It is especially important to make this correction
for nonaqueous solutions, which ordinarily possess high resistance, if an accurate potential for the DME is needed. Correction
of the half-wave potential is not required for quantitative analysis. Unless otherwise indicated, it is to be understood that po-
tentials represent measurements made against the SCE.
Removal of Dissolved Oxygen—Inasmuch as oxygen is reduced at the DME in two steps, first to hydrogen peroxide and
then to water, it interferes where polarograms are to be made at potentials more negative than about 0 volt versus SCE, and
must be removed. This may be accomplished by bubbling oxygen-free nitrogen through the solution for 10 to 15 minutes
immediately before recording the wave; the nitrogen should first be “conditioned” by being passed through a separate
portion of the solution, to minimize changes due to evaporation.
It is necessary that the solution be quiet and vibration-free during the time the wave is recorded, to ensure that the current
is diffusion-controlled. Therefore, the nitrogen aeration should be stopped and the gas be directed to flow over the surface of
the solution before a polarogram is recorded.
In alkaline media, sodium bisulfite may be added to remove oxygen, provided the reagent does not react with other compo-
nents of the system.
Measurement of Wave Height—To use a polarogram quantitatively, it is necessary to measure the height of the wave.
Since this is a measure of the magnitude of the diffusion current, it is measured vertically. To compensate for the residual cur-
rent, the segment of the curve preceding the wave is extrapolated beyond the rise in the wave. For a well-formed wave where
this extrapolation parallels the limiting current plateau, the measurement is unambiguous. For less well-defined waves, the fol-
lowing procedure may be used unless otherwise directed in the individual monograph. Both the residual current and the limit-
ing current are extrapolated with straight lines, as shown by the graph (Figure 1). The wave height is taken as the vertical dis-
tance between these lines measured at the half-wave potential.
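The extrapolation procedure above amounts to fitting straight lines through the residual-current and limiting-current segments and taking their vertical separation at the half-wave potential. A sketch with made-up data points (all names and values are illustrative):

```python
def fit_line(points):
    """Least-squares slope and intercept through (x, y) points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def wave_height(residual_pts, limiting_pts, e_half):
    """Vertical distance at E1/2 between the extrapolated residual-current
    and limiting-current lines (potential in V, current in microamperes)."""
    s1, b1 = fit_line(residual_pts)
    s2, b2 = fit_line(limiting_pts)
    return (s2 * e_half + b2) - (s1 * e_half + b1)

residual = [(0.0, 0.10), (0.1, 0.12), (0.2, 0.14)]   # pre-wave segment
limiting = [(0.8, 5.26), (0.9, 5.28), (1.0, 5.30)]   # plateau segment
print(wave_height(residual, limiting, e_half=0.5))
```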
Procedure—[Caution—Mercury vapor is poisonous, and metallic mercury has a significant vapor pressure at room temperature.
The work area in which mercury is used should be constructed in such a way that any spilled or spattered droplets of mercury can be
completely recovered with relative ease. Scrupulously clean up mercury after each use of the instrument. Work in a well-ventilated
laboratory, taking care to clean up any spilled mercury.] Transfer a portion of the final dilution of the substance being assayed to
a suitable polarographic cell immersed in a water bath regulated to 25 ± 0.5°. Pass a stream of nitrogen through the solution
for 10 to 15 minutes to remove dissolved oxygen. Start the mercury dropping from the capillary, insert the capillary into the
test solution, and adjust the height of the mercury reservoir. Switch the flow of nitrogen to pass over the surface of the solu-
tion, and record the polarogram over the potential range indicated in the individual monograph, using the appropriate re-
corder or galvanometer sensitivity to give a suitable wave. Measure the height of the wave, and unless otherwise directed in
the monograph, compare this with the wave height obtained with the appropriate USP Reference Standard, measured under
the same conditions.
Pulse Polarography—In conventional dc polarography, the current is measured continuously as potential is applied as a
linear ramp (see Figure 2).
This current is composed of two elements. The first, the diffusion (faradaic) current, is produced by the substance undergoing
reduction or oxidation at the working electrode, and is directly proportional to the concentration of this substance. The sec-
ond element is the capacitative current (charging of the electrochemical double layer). The changes in these currents as the
mercury drop varies in size produce the oscillations present in typical dc polarograms.
In normal pulse polarography, a potential pulse is applied to the mercury electrode near the end of the drop life, with the
drop being held at the initial potential during growth period (see Figure 3).
Each succeeding drop has a slightly higher pulse applied to it, with the rate of increase being determined by the selected scan
rate. The current is measured at the end of the pulse where the capacitative current is nearly zero, and thus primarily faradaic
current is measured (see Figure 4).
In addition, since the pulse is applied for only a short duration, the diffusion layer is not depleted as extensively as in dc polar-
ography, and larger current levels are obtained for equivalent concentrations. Concentrations as low as 10^−6 M can be meas-
ured, providing approximately a ten-fold increase in sensitivity over dc polarography. Limiting current values are
more easily measured, since the waves are free from oscillations.
Differential pulse polarography is a technique whereby a fixed-height pulse applied at the end of the life of each drop is
superimposed on a linear increasing dc ramp (see Figure 5).
Current flow is measured just before application of the pulse and again at the end of the pulse. The difference between these
two currents is measured and presented to the recorder. Such a differential signal provides a curve approximating the deriva-
tive of the polarographic wave, and gives a peak presentation. The peak potential is equivalent to:
E1/2 − ΔE/2
where ΔE is the pulse height. The peak height is directly proportional to concentration at constant scan rates and constant
pulse heights. This technique is especially sensitive (levels of 10^−7 M may be determined) and affords improved resolution be-
tween closely spaced waves.
Anodic Stripping Voltammetry—Anodic stripping voltammetry is an electrochemical technique whereby trace amounts of
substances in solution are concentrated (reduced) onto an electrode and then stripped (oxidized) back into solution by scan-
ning the applied voltage anodically. The measurement of the current flow as a function of this voltage and scanning rate pro-
vides qualitative and quantitative information on such substances. The concentration step permits analyses at 10^−7 M to 10^−9 M
levels.
Basic instrumentation includes a voltage ramp generator; current-measuring circuitry; a cell with working, reference, and
counter electrodes; and a recorder or other read-out device. Instruments having dc or pulse-polarographic capabilities are gen-
erally quite adequate for stripping application. The working electrode commonly used is the hanging mercury drop electrode
(HMDE), although the mercury thin-film electrode (MTFE) has acquired acceptance. For analysis of metals whose oxidation
potentials are more positive than that of mercury (such as silver, platinum, and gold), and of mercury itself, solid electrodes
such as platinum, gold, or carbon are required. A saturated calomel electrode or a silver–silver chloride electrode serves as the refer-
ence except for the analysis of mercury or silver. A platinum wire is commonly employed as the counter electrode.
Test specimens containing suitable electrolyte are pipetted into the cell. Dissolved oxygen is removed by bubbling nitrogen
through the cell for 5 to 10 minutes.
Generally, an electrolysis potential equivalent to 200 to 300 mV more negative than the half-wave potential of the material
to be analyzed is applied (although this potential is to be determined experimentally), with stirring for 1 to 10 minutes. For
reproducible results, maintain constant conditions (i.e., deposition time, stirring rate, temperature, specimen volume, and
drop size if HMDE is used).
After deposition, the stirring is discontinued and the solution and electrode are allowed to equilibrate for a short period. The
potential is then rapidly scanned anodically (10 mV/second or greater in dc polarography and 5 mV/second in differential
pulse polarography). As in polarography, the limiting current is proportional to concentration of the species (wave height in dc
and pulse; peak height in differential pulse), while the half-wave potential (dc, pulse) or peak potential (differential pulse) iden-
tifies the species. It is imperative that the choice of supporting electrolyte be made carefully in order to obtain satisfactory be-
havior. Quantitation is usually achieved by a standard addition or calibration method.
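The standard-addition quantitation mentioned above can be sketched numerically. This is a minimal illustration, not tied to any particular instrument; the peak currents and spike concentration are hypothetical, and it assumes peak height is linear in concentration with negligible volume change on spiking.

```python
# Standard-addition quantitation for a stripping-voltammetry peak (sketch).
def standard_addition(peak_sample, peak_spiked, c_spike_in_cell):
    """Return the analyte concentration in the cell from one standard addition.

    peak_sample:     peak current of the test solution alone
    peak_spiked:     peak current after the addition
    c_spike_in_cell: concentration contributed by the spike (same units as result)
    """
    # i1 = k*c and i2 = k*(c + c_spike)  =>  c = c_spike * i1 / (i2 - i1)
    return c_spike_in_cell * peak_sample / (peak_spiked - peak_sample)

# Hypothetical peaks of 0.80 uA before and 2.00 uA after adding 3.0e-8 M analyte:
c = standard_addition(0.80, 2.00, 3.0e-8)
print(f"analyte concentration ~ {c:.1e} M")  # 2.0e-08 M
```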
This technique is appropriate for trace-metal analysis, but has limited use in organic determinations, since many of these
reactions are irreversible. In analyzing substances such as chloride, cathodic stripping voltammetry may be used. The techni-
que is the same as anodic stripping voltammetry, except that the substance is deposited anodically and then stripped by a
cathodic voltage scan.
The particle size distribution should be estimated by Particle Size Distribution Estimation by Analytical Sieving ⟨786⟩ or by application of other methods where practical. A simple descriptive classification of powder fineness is provided in this chapter. For
practical reasons, sieves are commonly used to measure powder fineness. Sieving is most suitable where a majority of the particles are larger than about 75 µm, although it can be used for some powders having smaller particle sizes where the method
can be validated. Light diffraction is also a widely used technique for measuring the size of a wide range of particles.
Classification of Powder Fineness—Where the cumulative distribution has been determined by analytical sieving or by ap-
plication of other methods, powder fineness may be classified in the following manner:
x90 = particle dimension corresponding to 90% of the cumulative undersize distribution
x50 = median particle dimension (i.e., 50% of the particles are smaller and 50% of the particles are larger)
x10 = particle dimension corresponding to 10% of the cumulative undersize distribution
It is recognized that the symbol d is also widely used to designate these values. Therefore, the symbols d90, d50, and d10 may be
used.
The following parameters may be defined based on the cumulative distribution. QR(x) = cumulative distribution of particles
with a dimension less than or equal to x where the subscript R reflects the distribution type.
R Distribution Type
0 Number
1 Length
2 Area
3 Volume
Therefore, by definition:
1. QR(x) = 0.90 when x = x90
2. QR(x) = 0.50 when x = x50
3. QR(x) = 0.10 when x = x10
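The x90, x50, and x10 definitions above can be read off a measured cumulative distribution by interpolation. The following is a minimal sketch; the sieve sizes and cumulative fractions are hypothetical, and linear interpolation between adjacent points is assumed.

```python
# Reading x10, x50, and x90 from a cumulative undersize distribution Q3(x).
def x_quantile(sizes, q3, q):
    """Interpolate the particle dimension at which the cumulative
    distribution reaches fraction q (e.g., 0.50 for x50).

    sizes: ascending particle dimensions; q3: matching cumulative fractions.
    """
    for (x0, q0), (x1, q1) in zip(zip(sizes, q3), zip(sizes[1:], q3[1:])):
        if q0 <= q <= q1:
            return x0 + (q - q0) * (x1 - x0) / (q1 - q0)
    raise ValueError("q outside the measured distribution")

sizes = [63, 125, 180, 355, 500]          # um (hypothetical sieve set)
q3    = [0.05, 0.30, 0.55, 0.90, 1.00]    # cumulative undersize by volume

x50 = x_quantile(sizes, q3, 0.50)
print(f"x50 = {x50:.0f} um")
```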
An alternative but less informative method of classifying powder fineness is by use of the terms in the following table.

Classification of Powders by Fineness

Descriptive Term     x50 (µm)   Cumulative Distribution by Volume Basis, Q3(x)
Coarse               >355       Q3(355) <0.50
Moderately Fine      180–355    Q3(180) <0.50 and Q3(355) ≥0.50
Fine                 125–180    Q3(125) <0.50 and Q3(180) ≥0.50
Very Fine            ≤125       Q3(125) ≥0.50
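The descriptive classification in the table above can be applied directly from Q3 evaluated at the 125-, 180-, and 355-µm cut-offs. The Q3 values in the usage line are hypothetical.

```python
# Descriptive powder-fineness classification from cumulative volume fractions
# finer than 125, 180, and 355 um (following the classification table).
def classify_fineness(q3_125, q3_180, q3_355):
    if q3_125 >= 0.50:
        return "Very Fine"        # x50 <= 125 um
    if q3_180 >= 0.50:
        return "Fine"             # 125 um < x50 <= 180 um
    if q3_355 >= 0.50:
        return "Moderately Fine"  # 180 um < x50 <= 355 um
    return "Coarse"               # x50 > 355 um

print(classify_fineness(q3_125=0.30, q3_180=0.55, q3_355=0.90))  # Fine
```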
The refractive index (n) of a substance is the ratio of the velocity of light in air to the velocity of light in the substance. It is
valuable in the identification of substances and the detection of impurities.
Although the standard temperature for Pharmacopeial measurements is 25°, many of the refractive index specifications in
the individual monographs call for determining this value at 20°. The temperature should be carefully adjusted and main-
tained, since the refractive index varies significantly with temperature.
The values for refractive index given in this Pharmacopeia are for the D line of sodium (doublet at 589.0 nm and 589.6 nm).
Most instruments available are designed for use with white light but are calibrated to give the refractive index in terms of the
D line of sodium light.
The Abbé refractometer measures the range of refractive index for those Pharmacopeial materials for which such values are
given. Other refractometers of equal or greater accuracy may be employed.
To achieve the theoretical accuracy of ±0.0001, it is necessary to calibrate the instrument against a standard provided by the
manufacturer and to check frequently the temperature control and cleanliness of the instrument by determining the refractive
index of distilled water, which is 1.3330 at 20° and 1.3325 at 25°.
Unless otherwise stated in the individual monograph, the specific gravity determination is applicable only to liquids, and un-
less otherwise stated, is based on the ratio of the weight of a liquid in air at 25° to that of an equal volume of water at the
same temperature. Where a temperature is specified in the individual monograph, the specific gravity is the ratio of the weight
of the liquid in air at the specified temperature to that of an equal volume of water at the same temperature. When the sub-
stance is a solid at 25°, determine the specific gravity of the melted material at the temperature directed in the individual mon-
ograph, and refer to water at 25°.
Unless otherwise stated in the individual monograph, the density is defined as the mass of a unit volume of the substance at
25°, expressed in kilograms per cubic meter or grams per cubic centimeter (1 kg/m3 = 10–3 g/cm3). Where the density is
known, mass can be converted to volume, or volume converted to mass, by the formula: volume = mass/density.
Unless otherwise directed in the individual monograph, use Method I.
METHOD I
Procedure—Select a scrupulously clean, dry pycnometer that previously has been calibrated by determining its weight and
the weight of recently boiled water contained in it at 25°. Adjust the temperature of the liquid to about 20°, and fill the pycn-
ometer with it. Adjust the temperature of the filled pycnometer to 25°, remove any excess liquid, and weigh. When the mono-
graph specifies a temperature different from 25°, filled pycnometers must be brought to the temperature of the balance before
they are weighed. Subtract the tare weight of the pycnometer from the filled weight.
The specific gravity of the liquid is the quotient obtained by dividing the weight of the liquid contained in the pycnometer
by the weight of water contained in it, both determined at 25°, unless otherwise directed in the individual monograph.
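The quotient described above reduces to a simple calculation from three weighings. The sketch below uses hypothetical weights, in grams, all referred to the same calibrated pycnometer at 25°.

```python
# Specific gravity from pycnometer weighings: weight of the liquid divided by
# the weight of an equal volume of water, both corrected for the tare.
def specific_gravity(w_empty, w_liquid_filled, w_water_filled):
    return (w_liquid_filled - w_empty) / (w_water_filled - w_empty)

sg = specific_gravity(w_empty=25.104,          # tare of pycnometer, g
                      w_liquid_filled=47.530,  # pycnometer + test liquid, g
                      w_water_filled=50.036)   # pycnometer + water, g
print(f"specific gravity = {sg:.4f}")
```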
METHOD II
The procedure includes the use of the oscillating transducer density meter. The apparatus consists of the following:
• a U-shaped tube, usually of borosilicate glass, which contains the liquid to be examined;
• a magneto-electrical or piezo-electrical excitation system that causes the tube to oscillate as a cantilever oscillator at a
characteristic frequency depending on the density of the liquid to be examined;
• a means of measuring the oscillation period (T), which may be converted by the apparatus to give a direct reading of
density or used to calculate density by using the constants A and B described below; and
• a means to measure and/or control the temperature of the oscillating transducer containing the liquid to be tested.
The oscillation period is a function of the spring constant (c) and the mass of the system:
T² = {(M/c) + [(ρ × V)/c]} × 4π²
where ρ is the density of the liquid to be tested, M is the mass of the tube, and V is the volume of the filled tube.
Introduction of two constants A = c/(4π² × V) and B = (M/V) leads to the classical equation for the oscillating transducer:
ρ = A × T² − B
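In practice the constants A and B are obtained by measuring the oscillation period for two substances of known density and then applying ρ = A·T² − B to the test liquid. The sketch below uses hypothetical periods; water and air densities are typical handbook values.

```python
# Oscillating-transducer density meter: solve rho = A*T**2 - B from two
# (period, density) calibration points, then evaluate a test liquid.
def calibrate(t1, rho1, t2, rho2):
    a = (rho1 - rho2) / (t1**2 - t2**2)
    b = a * t1**2 - rho1
    return a, b

A, B = calibrate(t1=2.0e-3, rho1=0.99704,   # water at 25 deg, g/cm3 (period hypothetical)
                 t2=1.0e-3, rho2=0.00118)   # dry air, g/cm3 (period hypothetical)

t_sample = 1.8e-3                            # measured period of test liquid, s
rho = A * t_sample**2 - B
print(f"density = {rho:.4f} g/cm3")
```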
INTRODUCTION
The specific surface area of a powder is determined by physical adsorption of a gas on the surface of the solid and by calcu-
lating the amount of adsorbate gas corresponding to a monomolecular layer on the surface. Physical adsorption results from
relatively weak forces (van der Waals forces) between the adsorbate gas molecules and the adsorbent surface of the test pow-
der. The determination is usually carried out at the temperature of liquid nitrogen. The amount of gas adsorbed can be meas-
ured by a volumetric or continuous flow procedure.
BRUNAUER, EMMETT AND TELLER (BET) THEORY AND SPECIFIC SURFACE AREA
DETERMINATION
Multipoint Measurement
The data are treated according to the Brunauer, Emmett and Teller (BET) adsorption isotherm equation:
1/[Va((Po/P) − 1)] = [(C − 1)/(VmC)] × (P/Po) + (1/VmC)   (1)
where:
P = partial vapor pressure of adsorbate gas in equilibrium with the surface at 77.4 K (b.p. of liquid nitrogen), in Pa,
Po = saturated pressure of adsorbate gas, in Pa,
Va = volume of gas adsorbed at standard temperature and pressure (STP) [273.15 K and atmospheric pressure (1.013 × 105 Pa)], in
mL,
Vm = volume of gas adsorbed at STP to produce an apparent monolayer on the sample surface, in mL,
C = dimensionless constant that is related to the enthalpy of adsorption of the adsorbate gas on the powder sample.
The value of 1/[Va((Po/P) − 1)] is plotted against P/Po, according to equation (1). This plot should yield a straight line, usually in the approximate relative pressure range 0.05 to 0.3. The data are considered acceptable if the correlation coefficient, r, of the linear regression is not less
than 0.9975; that is, r2 is not less than 0.995. From the resulting linear plot, the slope, which is equal to (C − 1)/VmC, and the
intercept, which is equal to 1/VmC, are evaluated by linear regression analysis. From these values, Vm is calculated as 1/(slope +
intercept), while C is calculated as (slope/intercept) + 1. From the value of Vm so determined, the specific surface area, S, in m² · g⁻¹, is calculated by the equation:
S = (Vm × N × a)/(m × 22400)   (2)
in which N is Avogadro's constant (6.022 × 10²³ mol⁻¹), a is the effective cross-sectional area of one adsorbate molecule, in m² (0.162 × 10⁻¹⁸ m² for nitrogen and 0.195 × 10⁻¹⁸ m² for krypton), m is the mass of the test powder, in g, and 22400 is the volume, in mL, occupied by 1 mole of the gas at STP, allowing for minor departures from the ideal.
Official text. Reprinted from First Supplement to USP38-NF33.
DSC Physical Tests / á846ñ Specific Surface Area 323
A minimum of three data points is required. Additional measurements may be carried out especially when nonlinearity is
obtained at a P/Po value close to 0.3. Because nonlinearity is often obtained at a P/Po value below 0.05, values in this region
are not recommended. The test for linearity, the treatment of the data, and the calculation of the specific surface area of the
sample are described above.
Single-Point Measurement
Normally, at least three measurements of Va, each at different values of P/Po, are required for the determination of specific
surface area by the dynamic flow gas adsorption technique (Method I) or by volumetric gas adsorption (Method II). However,
under certain circumstances described below, it may be acceptable to determine the specific surface area of a powder from a
single value of Va measured at a single value of P/Po such as 0.300 (corresponding to 0.300 mole fraction of nitrogen or 0.001038 mole fraction of krypton), using the following equation for calculating Vm:
Vm = Va[1 − (P/Po)] (3)
The specific surface area is then calculated from the value of Vm by equation (2) given above.
The single-point method may be employed directly for a series of powder samples of a given material for which the material
constant C is much greater than unity. These circumstances may be verified by comparing values of specific surface area deter-
mined by the single-point method with that determined by the multipoint method for the series of powder samples. Close
similarity between the single-point values and multipoint values suggests that 1/C approaches zero.
The single-point method may be employed indirectly for a series of very similar powder samples of a given material for
which the material constant C is not infinite but may be assumed to be invariant. Under these circumstances, the error associ-
ated with the single-point method can be reduced or eliminated by using the multipoint method to evaluate C for one of the
samples of the series from the BET plot, from which C is calculated as (1 + slope/intercept). Then Vm is calculated from the
single value of Va measured at a single value of P/Po, by the equation:
Vm = Va[(Po/P) − 1] [(1/C) + ((C − 1)/C) × (P/Po)] (4)
The specific surface area is calculated from Vm by equation (2) given above.
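Equation (4) above reduces to a one-line calculation once C has been evaluated for the series by the multipoint method. The Va, P/Po, and C values below are hypothetical.

```python
# Indirect single-point determination: Vm from one Va at one P/Po,
# using a BET constant C evaluated beforehand (equation (4)).
def vm_single_point(va, p_rel, c):
    """Vm = Va[(Po/P) - 1][(1/C) + ((C - 1)/C)(P/Po)]."""
    return va * (1.0 / p_rel - 1.0) * (1.0 / c + (c - 1.0) / c * p_rel)

vm = vm_single_point(va=3.15, p_rel=0.30, c=31.0)
print(f"Vm = {vm:.2f} mL")
```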
EXPERIMENTAL TECHNIQUES
This section describes the methods to be used for the sample preparation, the dynamic flow gas adsorption technique
(Method I) and the volumetric gas adsorption technique (Method II).
Sample Preparation
OUTGASSING
Before the specific surface area of the sample can be determined, it is necessary to remove gases and vapors that may have
become physically adsorbed onto the surface after manufacture and during treatment, handling, and storage. If outgassing is
not achieved, the specific surface area may be reduced or may be variable because an intermediate area of the surface is cov-
ered with molecules of the previously adsorbed gases or vapors. The outgassing conditions are critical for obtaining the re-
quired precision and accuracy of specific surface area measurements on pharmaceuticals because of the sensitivity of the sur-
face of the materials.
The outgassing conditions must be demonstrated to yield reproducible BET plots, a constant weight of test powder, and no
detectable physical or chemical changes in the test powder.
The outgassing conditions defined by the temperature, pressure, and time are chosen so that the original surface of the solid
is reproduced as closely as possible. Outgassing of many substances is often achieved by applying a vacuum, by purging the sample in a flowing stream of a nonreactive, dry gas, or by applying a desorption-adsorption cycling method. In either case,
elevated temperatures are sometimes applied to increase the rate at which the contaminants leave the surface. Caution should
be exercised when outgassing powder samples using elevated temperatures to avoid affecting the nature of the surface and
the integrity of the sample.
If heating is employed, the recommended temperature and time of outgassing are as low as possible to achieve reproduci-
ble measurement of specific surface area in an acceptable time. For outgassing sensitive samples, other outgassing methods
such as the desorption-adsorption cycling method may be employed.
ADSORBATE
The standard technique is the adsorption of nitrogen of analytical quality at liquid nitrogen temperature.
For powders of low specific surface area (<0.2 m² · g⁻¹), the proportion adsorbed is low. In such cases, the use of krypton at
the liquid nitrogen temperature is preferred because the low vapor pressure exerted by this gas greatly reduces error. The use
of larger sample quantities, where feasible (equivalent to 1 m2 or greater total surface area using nitrogen), may compensate
for the errors in determining low surface areas.
All gases used must be free from moisture.
QUANTITY OF SAMPLE
A quantity of the test powder is accurately weighed such that the total surface of the sample is at least 1 m2 when the adsor-
bate is nitrogen and 0.5 m2 when the adsorbate is krypton.
Lower quantities of sample may be used after appropriate validation.
Measurements
Because the amount of gas adsorbed under a given pressure tends to increase when the temperature is decreased, adsorp-
tion measurements are usually made at a low temperature. Measurement is performed at 77.4 K, the boiling point of liquid
nitrogen.
PRINCIPLE
In the dynamic flow method (see Figure 1), the recommended adsorbate gas is dry nitrogen or krypton, while helium is em-
ployed as a diluent gas, which is not adsorbed under the recommended conditions.
A minimum of three mixtures of the appropriate adsorbate gas with helium are required within the P/Po range 0.05 to 0.30.
The gas detector-integrator should provide a signal that is approximately proportional to the volume of the gas passing
through it under defined conditions of temperature and pressure. For this purpose, a thermal conductivity detector with an
electronic integrator is one among various suitable types. A minimum of three data points within the recommended range of
0.05 to 0.30 for P/Po is determined.
PROCEDURE
A known mixture of the gases, usually nitrogen and helium, is passed through a thermal conductivity cell, through the sample, again through the thermal conductivity cell, and then to a recording potentiometer.
The sample cell is immersed in liquid nitrogen, and the sample adsorbs nitrogen from the mobile phase. This unbalances the
thermal conductivity cell, and a pulse is generated on a recorder chart.
The sample is removed from the coolant; this gives a desorption peak equal in area and in the opposite direction to the
adsorption peak. Because this is better defined than the adsorption peak, it is the one used for the determination.
To effect the calibration, a known quantity of adsorbate, sufficient to give a peak of similar magnitude to the desorption
peak, is injected into the system, and the proportion of gas volume per unit peak area is obtained.
A mixture of nitrogen and helium is used for a single-point determination, and several such mixtures, or premixing of two gas streams, are used for a multipoint determination.
The calculation is the same as for the volumetric method.
PRINCIPLE
In the volumetric method (see Figure 2), the recommended adsorbate gas is nitrogen, which is admitted into the evacuated
space above the previously outgassed powder sample to give a defined equilibrium pressure, P, of the gas. The use of a diluent
gas, such as helium, is therefore unnecessary, although helium may be employed for other purposes, such as to measure the
dead volume.
Because only pure adsorbate gas, instead of a gas mixture, is employed, interfering effects of thermal diffusion are avoided
in this method.
PROCEDURE
A small amount of dry nitrogen is admitted into the sample tube to prevent contamination of the clean surface, the sample
tube is removed, a stopper is inserted, the tube is weighed, and the weight of the sample is calculated. Then the sample tube
is attached to the volumetric apparatus. The sample is cautiously evacuated down to the specified pressure (e.g., between 2 Pa
and 10 Pa). Alternatively, some instruments are operated by evacuating to a defined rate of pressure change (e.g., less than 13
Pa/30 s) and by holding for a defined period of time before commencing the next step.
If the principle of operation of the instrument requires the determination of the dead volume in the sample tube, for exam-
ple, by the admission of a nonadsorbed gas, such as helium, this procedure is carried out at this point, followed by evacuation
of the sample. The determination of dead volume may be avoided using difference measurements: that is, by means of refer-
ence and sample tubes connected by a differential transducer. The adsorption of nitrogen gas is then measured as described
below.
Raise a Dewar vessel containing liquid nitrogen at 77.4 K up to a defined point on the sample cell. Admit a sufficient volume
of adsorbate gas to give the lowest desired relative pressure. Measure the volume adsorbed, Va. For multipoint measurements,
repeat the measurement of Va at successively higher P/Po values. When nitrogen is used as the adsorbate gas, P/Po values of
0.10, 0.20, and 0.30 are often suitable.
Reference Materials
Periodically verify the functioning of the apparatus using appropriate reference materials of known surface area that have a
specific surface area similar to that of the sample to be examined.
Absorption spectrophotometry is the measurement of an interaction between electromagnetic radiation and the molecules, or
atoms, of a chemical substance. Techniques frequently employed in pharmaceutical analysis include UV, visible, IR, and atomic
absorption spectroscopy. Spectrophotometric measurement in the visible region was formerly referred to as colorimetry; how-
ever, it is more precise to use the term “colorimetry” only when considering human perception of color.
Fluorescence spectrophotometry is the measurement of the emission of light from a chemical substance while it is being ex-
posed to UV, visible, or other electromagnetic radiation. In general, the light emitted by a fluorescent solution is of maximum
intensity at a wavelength longer than that of the exciting radiation, usually by some 20 to 30 nm.
Light-Scattering involves measurement of the light scattered because of submicroscopic optical density inhomogeneities of
solutions and is useful in the determination of weight-average molecular weights of polydisperse systems in the molecular
weight range from 1000 to several hundred million. Two such techniques utilized in pharmaceutical analysis are turbidimetry
and nephelometry.
Raman spectroscopy (inelastic light-scattering) is a light-scattering process in which the specimen under examination is irradi-
ated with intense monochromatic light (usually laser light) and the light scattered from the specimen is analyzed for frequency
shifts.
The wavelength range available for these measurements extends from the short wavelengths of the UV through the IR. For
convenience of reference, this spectral range is roughly divided into the UV (190 to 380 nm), the visible (380 to 780 nm), the
near-IR (780 to 3000 nm), and the IR (2.5 to 40 µm or 4000 to 250 cm−1).
For many pharmaceutical substances, measurements can be made in the UV and visible regions of the spectrum with greater
accuracy and sensitivity than in the near-IR and IR. When solutions are observed in 1-cm cells, concentrations of about 10 µg
of the specimen per mL often will produce absorbances of 0.2 to 0.8 in the UV or the visible region. In the IR and near-IR,
concentrations of 1 to 10 mg per mL and up to 100 mg per mL, respectively, may be needed to produce sufficient absorption;
for these spectral ranges, cell lengths of from 0.01 mm to upwards of 3 mm are commonly used.
The UV and visible spectra of substances generally do not have a high degree of specificity. Nevertheless, they are highly
suitable for quantitative assays, and for many substances they are useful as additional means of identification.
There has been increasing interest in the use of near-IR spectroscopy in pharmaceutical analysis, especially for rapid identifi-
cation of large numbers of samples, and also for water determination.
The near-IR region is especially suitable for the determination of –OH and –NH groups, such as water in alcohol, –OH in the
presence of amines, alcohols in hydrocarbons, and primary and secondary amines in the presence of tertiary amines.
The IR spectrum is unique for any given chemical compound with the exception of optical isomers, which have identical
spectra. However, polymorphism may occasionally be responsible for a difference in the IR spectrum of a given compound in
the solid state. Frequently, small differences in structure result in significant differences in the spectra. Because of the large
number of maxima in an IR absorption spectrum, it is sometimes possible to quantitatively measure the individual components
of a mixture of known qualitative composition without prior separation.
The Raman spectrum and the IR spectrum provide similar data, although the intensities of the spectra are governed by dif-
ferent molecular properties. Raman and IR spectroscopy exhibit different relative sensitivities for different functional groups,
e.g., Raman spectroscopy is particularly sensitive to C–S and C–C multiple bonds, and some aromatic compounds are more
easily identified by means of their Raman spectra. Water has a highly intense IR absorption spectrum, but a particularly weak
Raman spectrum. Therefore, water has only limited IR “windows” that can be used to examine aqueous solutes, while its Ram-
an spectrum is almost completely transparent and useful for solute identification. The two major limitations of Raman spectro-
scopy are that the minimum detectable concentration of specimen is typically 10−1 M to 10−2 M and that the impurities in
many substances fluoresce and interfere with the detection of the Raman scattered signal.
Optical reflectance measurements provide spectral information similar to that obtained by transmission measurements.
Since reflectance measurements probe only the surface composition of the specimen, difficulties associated with the optical
thickness and the light-scattering properties of the substance are eliminated. Thus, reflectance measurements are frequently
more simple to perform on intensely absorbing materials. A particularly common technique used for IR reflectance measure-
ments is termed attenuated total reflectance (ATR), also known as multiple internal reflectance (MIR). In the ATR technique,
the beam of the IR spectrometer is passed through an appropriate IR window material (e.g., KRS-5, a TlBr-TlI eutectic mixture),
which is cut at such an angle that the IR beam enters the first (front) surface of the window, but is totally reflected when it
impinges on the second (back) surface (i.e., the angle of incidence of the radiation upon the second surface of the window
exceeds the critical angle for that material). By appropriate window construction, it is possible to have many internal reflec-
tions of the IR beam before it is transmitted out of the window. If a specimen is placed in close contact with the window along
the sides that totally reflect the IR beam, the intensity of reflected radiation is reduced at each wavelength (frequency) that the
specimen absorbs. Thus, the ATR technique provides a reflectance spectrum that has been increased in intensity, when com-
pared to a simple reflectance measurement, by the number of times that the IR beam is reflected within the window. The ATR
technique provides excellent sensitivity, but it yields poor reproducibility, and is not a reliable quantitative technique unless an
internal standard is intimately mixed with each test specimen.
Fluorescence spectrophotometry is often more sensitive than absorption spectrophotometry. In absorption measurements, the
specimen transmittance is compared to that of a blank; and at low concentrations, both solutions give high signals. Converse-
ly, in fluorescence spectrophotometry, the solvent blank has low rather than high output, so that the background radiation
that may interfere with determinations at low concentrations is much less. Whereas few compounds can be determined con-
veniently at concentrations below 10−5 M by light absorption, it is not unusual to employ concentrations of 10−7 M to 10−8 M
in fluorescence spectrophotometry.
The power of a radiant beam decreases in relation to the distance that it travels through an absorbing medium. It also de-
creases in relation to the concentration of absorbing molecules or ions encountered in that medium. These two factors determine the proportion of the total incident energy that emerges. The decrease in power of monochromatic radiation passing through a homogeneous absorbing medium is stated quantitatively by Beer's law, log10(1/T) = A = abc, in which the terms are
through a homogeneous absorbing medium is stated quantitatively by Beer's law, log10(1/T) = A = abc, in which the terms are
as defined below.
Absorbance [Symbol: A]—The logarithm, to the base 10, of the reciprocal of the transmittance (T). [NOTE—Descriptive
terms used formerly include optical density, absorbancy, and extinction.]
Absorptivity [Symbol: a]—The quotient of the absorbance (A) divided by the product of the concentration of the sub-
stance (c), expressed in g per L, and the absorption path length (b) in cm. [NOTE—It is not to be confused with absorbancy
index; specific extinction; or extinction coefficient.]
Molar Absorptivity [Symbol: e]—The quotient of the absorbance (A) divided by the product of the concentration, ex-
pressed in moles per L, of the substance and the absorption path length in cm. It is also the product of the absorptivity (a) and
the molecular weight of the substance. [NOTE—Terms formerly used include molar absorbancy index; molar extinction coeffi-
cient; and molar absorption coefficient.]
For most systems used in absorption spectrophotometry, the absorptivity of a substance is a constant independent of the
intensity of the incident radiation, the internal cell length, and the concentration, with the result that concentration may be
determined photometrically.
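The Beer's law relationship stated above, A = log10(1/T) = abc, can be sketched as a direct calculation of concentration from a measured transmittance. The absorptivity value and path length below are hypothetical.

```python
# Beer's law: A = log10(1/T) = a*b*c, solved for the concentration c.
import math

def absorbance_from_transmittance(transmittance):
    return math.log10(1.0 / transmittance)

def concentration(absorbance, absorptivity, path_cm):
    """c in g/L, given absorptivity a in L/(g*cm) and path length b in cm."""
    return absorbance / (absorptivity * path_cm)

A = absorbance_from_transmittance(0.50)            # T = 50%
c = concentration(A, absorptivity=30.0, path_cm=1.0)
print(f"A = {A:.3f}, c = {c:.4f} g/L")
```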
Beer's law gives no indication of the effect of temperature, wavelength, or the type of solvent. For most analytical work the
effects of normal variation in temperature are negligible.
Deviations from Beer's law may be caused by either chemical or instrumental variables. Apparent failure of Beer's law may
result from a concentration change in solute molecules because of association between solute molecules or between solute
and solvent molecules, or dissociation or ionization. Other deviations might be caused by instrumental effects such as poly-
chromatic radiation, slit-width effects, or stray light.
Even at a fixed temperature in a given solvent, the absorptivity may not be truly constant. However, in the case of speci-
mens having only one absorbing component, it is not necessary that the absorbing system conform to Beer's law for use in
quantitative analysis. The concentration of an unknown may be found by comparison with an experimentally determined
standard curve.
Although, in the strictest sense, Beer's law does not hold in atomic absorption spectrophotometry because of the lack of
quantitative properties of the cell length and the concentration, the absorption processes taking place in the flame under con-
ditions of reproducible aspiration do follow the Beer relationship in principle. Specifically, the negative log of the transmit-
tance, or the absorbance, is directly proportional to the absorption coefficient, and, consequently, is proportional to the num-
ber of absorbing atoms. On this basis, calibration curves may be constructed to permit evaluation of unknown absorption val-
ues in terms of concentration of the element in solution.
Absorption Spectrum—A graphic representation of absorbance, or any function of absorbance, plotted against wave-
length or function of wavelength.
Transmittance [Symbol: T]—The quotient of the radiant power transmitted by a specimen divided by the radiant power
incident upon the specimen. [NOTE—Terms formerly used include transmittancy and transmission.]
Fluorescence Intensity [Symbol: I]—An empirical expression of fluorescence activity, commonly given in terms of arbitrary
units proportional to detector response. The fluorescence emission spectrum is a graphical presentation of the spectral distribution of radiation emitted by an activated substance, showing intensity of emitted radiation as ordinate, and wavelength as abscissa. The fluorescence excitation spectrum is a graphical presentation of the activation spectrum, showing intensity of radiation
emitted by an activated substance as ordinate, and wavelength of the incident (activating) radiation as abscissa. As in absorp-
tion spectrophotometry, the important regions of the electromagnetic spectrum encompassed by the fluorescence of organic
compounds are the UV, visible, and near-IR, i.e., the region from 250 to 800 nm. After a molecule has absorbed radiation, the
energy can be lost as heat or released in the form of radiation of the same or longer wavelength as the absorbed radiation.
Both absorption and emission of radiation are due to the transitions of electrons between different energy levels, or orbitals, of
the molecule. There is a time delay between the absorption and emission of light; this interval, the duration of the excited
state, has been measured to be about 10−9 second to 10−8 second for most organic fluorescent solutions. The short lifetime of
fluorescence distinguishes this type of luminescence from phosphorescence, which is a long-lived afterglow having a lifetime
of 10−3 second up to several minutes.
Turbidance [Symbol: S]—The light-scattering effect of suspended particles. The amount of suspended matter may be
measured by observation of either the transmitted light (turbidimetry) or the scattered light (nephelometry).
Turbidity [Symbol: τ]—In light-scattering measurements, the turbidity is the measure of the decrease in incident beam
intensity per unit length of a given suspension.
Raman Scattering Activity—The molecular property (in units of cm4 per g) governing the intensity of an observed Raman
band for a randomly oriented specimen. The scattering activity is determined from the derivative of the molecular polarizabili-
ty with respect to the molecular motion giving rise to the Raman shifted band. In general, the Raman band intensity is linearly
proportional to the concentration of the analyte.
With few exceptions, the Pharmacopeial spectrophotometric tests and assays call for comparison against a USP Reference
Standard. This is to ensure measurement under conditions identical for the test specimen and the reference substance. These
conditions include wavelength setting, slit-width adjustment, cell placement and correction, and transmittance levels. It should
be noted that cells exhibiting identical transmittance at a given wavelength may differ considerably in transmittance at other
wavelengths. Appropriate cell corrections should be established and used where required.
The expressions, “similar preparation” and “similar solution,” as used in tests and assays involving spectrophotometry, indi-
cate that the reference specimen, generally a USP Reference Standard, is to be prepared and observed in a manner identical
for all practical purposes to that used for the test specimen. Usually in making up the solution of the specified Reference Stand-
ard, a solution of about (i.e., within 10%) the desired concentration is prepared and the absorptivity is calculated on the basis
of the exact amount weighed out; if a previously dried specimen of the Reference Standard has not been used, the absorptivity
is calculated on the anhydrous basis.
The expressions, “concomitantly determine” and “concomitantly measured,” as used in tests and assays involving spectro-
photometry, indicate that the absorbances of both the solution containing the test specimen and the solution containing the
reference specimen, relative to the specified test blank, are to be measured in immediate succession.
APPARATUS
Many types of spectrophotometers are available. Fundamentally, most types, except those used for IR spectrophotometry,
provide for passing essentially monochromatic radiant energy through a specimen in suitable form, and measuring the intensi-
ty of the fraction that is transmitted. Fourier transform IR spectrophotometers use an interferometric technique whereby poly-
chromatic radiation passes through the analyte and onto a detector on an intensity and time basis. UV, visible, and dispersive
IR spectrophotometers comprise an energy source, a dispersing device (e.g., a prism or grating), slits for selecting the wave-
length band, a cell or holder for the test specimen, a detector of radiant energy, and associated amplifiers and measuring devi-
ces. In diode array spectrophotometers, the energy from the source is passed through the test specimen and then dispersed via
a grating onto several hundred light-sensitive diodes, each of which in turn develops a signal proportional to the number of
photons at its small wavelength interval; these signals then may be computed at rapid chosen intervals to represent a com-
plete spectrum. Fourier transform IR systems utilize an interferometer instead of a dispersing device and a digital computer to
process the spectral data. Some instruments are manually operated, whereas others are equipped for automatic and continu-
ous recording. Instruments that are interfaced to a digital computer have the capabilities also of co-adding and storing spectra,
performing spectral comparisons, and performing difference spectroscopy (accomplished with the use of a digital absorbance
subtraction method).
Instruments are available for use in the visible; in the visible and UV; in the visible, UV, and near-IR; and in the IR regions of
the spectrum. Choice of the type of spectrophotometric analysis and of the instrument to be used depends upon factors such
as the composition and amount of available test specimen, the degree of accuracy, sensitivity, and selectivity desired, and the
manner in which the specimen is handled.
The apparatus used in atomic absorption spectrophotometry has several unique features. For each element to be deter-
mined, a specific source that emits the spectral line to be absorbed should be selected. The source is usually a hollow-cathode
lamp, the cathode of which is designed to emit the desired radiation when excited. Since the radiation to be absorbed by the
test specimen element is usually of the same wavelength as that of its emission line, the element in the hollow-cathode lamp is
the same as the element to be determined. The apparatus is equipped with an aspirator for introducing the test specimen into
a flame, which is usually provided by air–acetylene, air–hydrogen, or, for refractory cases, nitrous oxide–acetylene. The flame,
in effect, is a heated specimen chamber. A detector is used to read the signal from the chamber. Interfering radiation pro-
duced by the flame during combustion may be negated by the use of a chopped source lamp signal of a definite frequency.
The detector should be tuned to this alternating current frequency so that the direct current signal arising from the flame is
ignored. The detecting system, therefore, reads only the change in signal from the hollow-cathode source, which is directly
proportional to the number of atoms to be determined in the test specimen. For Pharmacopeial purposes, apparatus that pro-
vides the readings directly in absorbance units is usually required. However, instruments providing readings in percent trans-
mission, percent absorption, or concentration may be used if the calculation formulas provided in the individual monographs
are revised as necessary to yield the required quantitative results. Percent absorption or percent transmittance may be conver-
ted to absorbance, A, by the following two equations:
A = 2 − log10 (100 − % absorption)
or:
A = 2 − log10 (% transmittance)
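As a worked illustration of these two conversion formulas (a sketch, not part of the official text; the function names are hypothetical), in Python:

```python
import math

def absorbance_from_percent_transmittance(percent_t: float) -> float:
    # A = 2 - log10(%T), equivalent to A = -log10(T) with T = %T/100
    return 2 - math.log10(percent_t)

def absorbance_from_percent_absorption(percent_abs: float) -> float:
    # A = 2 - log10(100 - % absorption), since %T = 100 - % absorption
    return 2 - math.log10(100 - percent_abs)

# 10% transmittance (90% absorption) corresponds to an absorbance of 1
print(absorbance_from_percent_transmittance(10.0))
print(absorbance_from_percent_absorption(90.0))
```

Both functions return the same absorbance for a given specimen, since the two formulas describe the same measurement reported on two scales.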
Depending upon the type of apparatus used, the readout device may be a meter, digital counter, recorder, or printer. Both
single-beam and double-beam instruments are commercially available, and either type is suitable.
Measurement of fluorescence intensity can be made with a simple filter fluorometer. Such an instrument consists of a radia-
tion source, a primary filter, a specimen chamber, a secondary filter, and a fluorescence detection system. In most such fluor-
ometers, the detector is placed on an axis at 90° from that of the exciting beam. This right-angle geometry permits the excit-
ing radiation to pass through the test specimen and not contaminate the output signal received by the fluorescence detector.
However, the detector unavoidably receives some of the exciting radiation as a result of the inherent scattering properties of
the solutions themselves, or if dust or other solids are present. Filters are used to eliminate this residual scatter. The primary
filter selects short-wavelength radiation capable of exciting the test specimen, while the secondary filter is normally a sharp
cut-off filter that allows the longer-wavelength fluorescence to be transmitted but blocks the scattered excitation.
Most fluorometers use photomultiplier tubes as detectors, many types of which are available, each having special character-
istics with respect to spectral region of maximum sensitivity, gain, and electrical noise. The photocurrent is amplified and read
out on a meter or recorder.
A spectrofluorometer differs from a filter fluorometer in that filters are replaced by monochromators, of either the prism or the
grating type. For analytical purposes, the spectrofluorometer is superior to the filter fluorometer in wavelength selectivity, flexi-
bility, and convenience, in the same way in which a spectrophotometer is superior to a filter photometer.
Many radiation sources are available. Mercury lamps are relatively stable and emit energy mainly at discrete wavelengths.
Tungsten lamps provide an energy continuum in the visible region. The high-pressure xenon arc lamp is often used in spectro-
fluorometers because it is a high-intensity source that emits an energy continuum extending from the UV into the IR.
In spectrofluorometers, the monochromators are equipped with slits. A narrow slit provides high resolution and spectral pu-
rity, while a large slit sacrifices these for high sensitivity. Choice of slit size is determined by the separation between exciting
and emitting wavelengths as well as the degree of sensitivity needed.
Specimen cells used in fluorescence measurements may be round tubes or rectangular cells similar to those used in absorp-
tion spectrophotometry, except that they are polished on all four vertical sides. A convenient test specimen size is 2 to 3 mL,
but some instruments can be fitted with small cells holding 100 to 300 µL, or with a capillary holder requiring an even smaller
amount of specimen.
Light-scattering instruments are available and consist in general of a mercury lamp, with filters for the strong green or blue
lines, a shutter, a set of neutral filters with known transmittance, and a sensitive photomultiplier to be mounted on an arm that
can be rotated around the solution cell and set at any angle from −135° to 0° to +135° by a dial outside the light-tight hous-
ing. Solution cells are of various shapes, such as square for measuring 90° scattering; semioctagonal for 45°, 90°, and 135°
scattering; and cylindrical for scattering at all angles. Since the determination of molecular weight requires a precise measure
of the difference in refractive index between the solution and solvent, [(n − n0)/c], a second instrument, a differential refrac-
tometer, is needed to measure this small difference.
Raman spectrometers include the following major components: a source of intense monochromatic radiation (invariably a
laser); optics to collect the light scattered by the test specimen; a (double) monochromator to disperse the scattered light and
reject the intense incident frequency; and a suitable light-detection and amplification system. Raman measurement is simple in
that most specimens are examined directly in melting-point capillaries. Because the laser source can be focused sharply, only a
few microliters of the specimen is required.
PROCEDURE
Absorption Spectrophotometry
Detailed instructions for operating spectrophotometers are supplied by the manufacturers. To achieve significant and valid
results, the operator of a spectrophotometer should be aware of its limitations and of potential sources of error and variation.
The instruction manual should be followed closely on such matters as care, cleaning, and calibration of the instrument, and
techniques of handling absorption cells, as well as instructions for operation. The following points require special emphasis.
Check the instrument for accuracy of calibration. Where a continuous source of radiant energy is used, attention should be
paid to both the wavelength and photometric scales; where a spectral line source is used, only the photometric scale need be
checked. A number of sources of radiant energy have spectral lines of suitable intensity, adequately spaced throughout the
spectral range selected. The best single source of UV and visible calibration spectra is the quartz-mercury arc, of which the
lines at 253.7, 302.25, 313.16, 334.15, 365.48, 404.66, and 435.83 nm may be used. The glass-mercury arc is equally useful
above 300 nm. The 486.13-nm and 656.28-nm lines of a hydrogen discharge lamp may be used also. The wavelength scale
may be calibrated also by means of suitable glass filters, which have useful absorption bands through the visible and UV re-
gions. Standard glasses containing didymium (a mixture of praseodymium and neodymium) have been used widely, although
glasses containing holmium were found to be superior. Standard holmium oxide solution has superseded the use of holmium
glass.1 The wavelength scales of near-IR and IR spectrophotometers are readily checked by the use of absorption bands provi-
ded by polystyrene films, carbon dioxide, water vapor, or ammonia gas.
For checking the photometric scale, a number of standard inorganic glass filters as well as standard solutions of known trans-
mittances such as potassium dichromate are available.2
Quantitative absorbance measurements usually are made on solutions of the substance in liquid-holding cells. Since both
the solvent and the cell window absorb light, compensation must be made for their contribution to the measured absorbance.
Matched cells are available commercially for UV and visible spectrophotometry for which no cell correction is necessary. In IR
spectrophotometry, however, corrections for cell differences usually must be made. In such cases, pairs of cells are filled with
the selected solvent and the difference in their absorbances at the chosen wavelength is determined. The cell exhibiting the
greater absorbance is used for the solution of the test specimen and the measured absorbance is corrected by subtraction of
the cell difference.
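The cell-correction procedure described above can be sketched as follows (an illustrative helper, not an official calculation; the function name and values are hypothetical):

```python
def cell_corrected_absorbance(a_test: float,
                              a_solvent_cell_a: float,
                              a_solvent_cell_b: float) -> float:
    # Both cells are first filled with the chosen solvent; the cell showing
    # the greater solvent absorbance is then used for the test solution, and
    # the cell difference is subtracted from the measured test absorbance.
    cell_difference = abs(a_solvent_cell_a - a_solvent_cell_b)
    return a_test - cell_difference

# e.g., solvent blanks of 0.105 and 0.100 give a cell difference of 0.005
print(cell_corrected_absorbance(0.512, 0.105, 0.100))
```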
With the use of a computerized Fourier transform IR system, this correction need not be made, since the same cell can be
used for both the solvent blank and the test solution. However, it must be ascertained that the transmission properties of the
cell are constant.
Comparisons of a test specimen with a Reference Standard are best made at a peak of spectral absorption for the compound
concerned. Assays prescribing spectrophotometry give the commonly accepted wavelength for peak spectral absorption of the
substance in question. It is known that different spectrophotometers may show minor variation in the apparent wavelength of
this peak. Good practice demands that comparisons be made at the wavelength at which peak absorption occurs. Should this
differ by more than ±1 nm from the wavelength specified in the individual monograph, recalibration of the instrument may be
indicated.
TEST PREPARATION
For determinations utilizing UV or visible spectrophotometry, the specimen generally is dissolved in a solvent. Unless other-
wise directed in the monograph, determinations are made at room temperature using a path length of 1 cm. Many solvents
are suitable for these ranges, including water, alcohols, chloroform, lower hydrocarbons, ethers, and dilute solutions of strong
acids and alkalies. Precautions should be taken to utilize solvents free from contaminants absorbing in the spectral region be-
ing used. It is usually advisable to use water-free methanol or alcohol, or alcohol denatured by the addition of methanol but
not containing benzene or other interfering impurities, as the solvent. Solvents of special spectrophotometric quality, guaran-
teed to be free from contaminants, are available commercially from several sources. Some other analytical reagent-grade or-
ganic solvents may contain traces of impurities that absorb strongly in the UV region. New lots of these solvents should be
checked for their transparency, and care should be taken to use the same lot of solvent for preparation of the test solution and
the standard solution and for the blank.
No solvent in appreciable thickness is completely transparent throughout the near-IR and IR spectrum. Carbon tetrachloride
(up to 5 mm in thickness) is practically transparent to 6 µm (1666 cm−1). Carbon disulfide (1 mm in thickness) is suitable as a
solvent to 40 µm (250 cm−1) with the exception of the 4.2-µm to 5.0-µm (2381-cm−1 to 2000-cm−1) and the 5.5-µm to 7.5-µm
(1819-cm−1 to 1333-cm−1) regions, where it has strong absorption. Other solvents have relatively narrow regions of transparency.
For IR spectrophotometry, an additional qualification for a suitable solvent is that it must not affect the material, usually
sodium chloride, of which the cell is made. The test specimen may also be prepared by dispersing the finely ground solid
specimen in mineral oil or by mixing it intimately with previously dried alkali halide salt (usually potassium bromide). Mixtures
with alkali halide salts may be examined directly or as transparent disks or pellets obtained by pressing the mixture in a die.
Typical drying conditions for potassium bromide are 105° in vacuum for 12 hours, although grades are commercially available
that require no drying. Infrared microscopy or a mineral oil dispersion is preferable where disproportionation between the al-
kali halide and the test specimen is encountered. For suitable materials the test specimen may be prepared neat as a thin sam-
ple for IR microscopy or suspended neat as a thin film for mineral oil dispersion. For Raman spectrometry, most common
solvents are suitable, and ordinary (nonfluorescing) glass specimen cells can be used. The IR region of the electromagnetic
spectrum extends from 0.8 to 400 µm. From 800 to 2500 nm (0.8 to 2.5 µm) is generally considered to be the near-IR (NIR)
region; from 2.5 to 25 µm (4000 to 400 cm−1) is generally considered to be the mid-range (mid-IR) region; and from 25 to
400 µm is generally considered to be the far-IR (FIR) region. Unless otherwise specified in the individual monograph, the
region from 3800 to 650 cm−1 (2.6 to 15 µm) should be used to ascertain compliance with monograph specifications for IR
absorption.
1 National Institute of Standards and Technology (NIST), Gaithersburg, MD 20899: “Spectral Transmittance Characteristics of Holmium Oxide in Perchloric Acid,”
J. Res. Natl. Bur. Stds. 90, No. 2, 115 (1985). The performance of an uncertified filter should be checked against a certified standard.
2 For further detail regarding checks on the photometric scale of a spectrophotometer, reference may be made to the following NIST publications: J. Res. Natl. Bur.
Stds. 76A, 469 (1972) [re: SRM 931, “Liquid Absorbance Standards for Ultraviolet and Visible Spectrophotometry,” as well as potassium chromate and potassium
dichromate]; NIST Spec. Publ. 260–116 (1994) [re: SRM 930 and SRM 1930, “Glass Filters for Spectrophotometry”].
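The wavelength and wavenumber boundaries quoted for these spectral regions are reciprocals: a wavenumber in cm−1 equals 10,000 divided by the wavelength in µm. A quick numerical check (illustrative only; the function name is hypothetical):

```python
def wavenumber_cm1(wavelength_um: float) -> float:
    # 1 cm = 10,000 µm, so wavenumber in cm^-1 = 10,000 / wavelength in µm
    return 10_000.0 / wavelength_um

# The mid-IR boundaries: 2.5 µm and 25 µm
print(wavenumber_cm1(2.5))   # 4000.0
print(wavenumber_cm1(25.0))  # 400.0
```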
Where values for IR line spectra are given in an individual monograph, the letters s, m, and w signify strong, medium, and
weak absorption, respectively; sh signifies a shoulder, bd signifies a band, and v means very. The values may vary as much as
0.1 µm or 10 cm−1, depending upon the particular instrument used. Polymorphism gives rise to variations in the IR spectra of
many compounds in the solid state. Therefore, when conducting IR absorption tests, if a difference appears in the IR spectra of
the analyte and the standard, dissolve equal portions of the test substance and the standard in equal volumes of a suitable
solvent, evaporate the solutions to dryness in similar containers under identical conditions, and repeat the test on the residues.
In NIR spectroscopy much of the current interest centers around the ease of analysis. Samples can be analyzed in powder
form or by means of reflectance techniques, with little or no preparation. Compliance with in-house specifications can be de-
termined by computerized comparison of spectra with spectra previously obtained from reference materials. Many pharma-
ceutical materials exhibit low absorptivity in this spectral region, which allows incident near-IR radiation to penetrate samples
more deeply than UV, visible, or IR radiation. NIR spectrophotometry may be used to observe matrix modifications and, with
proper calibration, may be used in quantitative analysis.
In atomic absorption spectrophotometry, the nature of the solvent and the concentration of solids must be given special
consideration. An ideal solvent is one that interferes to a minimal extent in the absorption or emission processes and one that
produces neutral atoms in the flame. If there is a significant difference between the surface tension or viscosity of the test solu-
tion and standard solution, the solutions are aspirated or atomized at a different rate, causing significant differences in the sig-
nals generated. The acid concentration of the solutions also affects the absorption processes. Thus, the solvents used in prepar-
ing the test specimen and the standard should be the same or as much alike in these respects as possible, and should yield
solutions that are easily aspirated via the specimen tube of the burner-aspirator. Since undissolved solids present in the solu-
tions may give rise to matrix or bulk interferences, the total undissolved solids content in all solutions should be kept below
2% wherever possible.
CALCULATIONS
The application of absorption spectrophotometry in an assay or a test generally requires the use of a Reference Standard.
Where such a measurement is specified in an assay, a formula is provided in order to permit calculation of the desired result. A
numerical constant is frequently included in the formula. The following derivation is provided to introduce a logical approach
to the deduction of the constants appearing in formulas in the assays in many monographs.
The Beer's law relationship is valid for the solutions of both the Reference Standard (S) and the test specimen (U):
AS = abCS (1)
AU = abCU (2)
in which AS is the absorbance of the Standard solution of concentration CS; and AU is the absorbance of the test specimen
solution of concentration CU. If CS and CU are expressed in the same units and the absorbances of both solutions are measured
in matching cells having the same dimensions, the absorptivity, a, and the cell thickness, b, are the same; consequently, the
two equations may be combined and rewritten to solve for CU:
CU = CS(AU/AS) (3)
Quantities of solid test specimens to be taken for analysis are generally specified in mg. Instructions for dilution are given in
the assay and, since dilute solutions are used for absorbance measurements, concentrations are usually expressed for conven-
ience in units of µg per mL. Taking a quantity, in mg, of a test specimen of a drug substance or solid dosage form for analysis,
it therefore follows that a volume (VU), in L, of solution of concentration CU may be prepared from the amount of test speci-
men that contains a quantity WU, in mg, of the drug substance [NOTE—CU is numerically the same whether expressed as µg
per mL or mg per L], such that:
WU = VUCU (4)
The form in which the formula appears in the assay in a monograph for a solid article may be derived by substituting CU of
equation (3) into equation (4). In summary, the use of equation (4), with due consideration for any unit conversions necessary
to achieve equality in equation (5), permits the calculation of the constant factor (VU) occurring in the final formula:
WU = VUCS(AU/AS) (5)
The same derivation is applicable to formulas that appear in monographs for liquid articles that are assayed by absorption
spectrophotometry. For liquid dosage forms, results of calculations are generally expressed in terms of the quantity, in mg, of
drug substance in each mL of the article. Thus it is necessary to include in the denominator an additional term, the volume (V),
in mL, of the test preparation taken.
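Equations (3) and (5) can be combined into a short calculation sketch (illustrative only; the function names and example values are hypothetical):

```python
def concentration_from_standard(c_std: float, a_test: float, a_std: float) -> float:
    # Equation (3): C_U = C_S * (A_U / A_S)
    return c_std * (a_test / a_std)

def quantity_in_specimen(v_u_liters: float, c_std: float,
                         a_test: float, a_std: float) -> float:
    # Equation (5): W_U = V_U * C_S * (A_U / A_S); with C_S in µg/mL (= mg/L)
    # and V_U in L, W_U is obtained in mg
    return v_u_liters * concentration_from_standard(c_std, a_test, a_std)

# Standard at 10 µg/mL, A_U = 0.48, A_S = 0.50, 0.1 L of test solution
print(quantity_in_specimen(0.1, 10.0, 0.48, 0.50))
```

Because CS and CU carry the same units and the cells match, the absorptivity and path length cancel and never need to be known explicitly.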
Assays in the visible region usually call for comparing concomitantly the absorbance produced by the Assay preparation with
that produced by a Standard preparation containing approximately an equal quantity of a USP Reference Standard. In some
situations, it is permissible to omit the use of a Reference Standard. This is true where spectrophotometric assays are made
with routine frequency, and where a suitable standard curve is available, prepared with the respective USP Reference Standard,
and where the substance assayed conforms to Beer's law within the range of about 75% to 125% of the final concentration
used in the assay. Under these circumstances, the absorbance found in the assay may be interpolated on the standard curve,
and the assay result calculated therefrom.
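Interpolation of an assay absorbance on such a standard curve can be sketched as follows (an illustrative helper with hypothetical names and data; it assumes the curve brackets the measured absorbance and is tabulated in ascending order):

```python
def concentration_from_standard_curve(absorbance, curve):
    # curve: (concentration, absorbance) pairs in ascending absorbance order,
    # spanning about 75% to 125% of the final assay concentration
    for (c0, a0), (c1, a1) in zip(curve, curve[1:]):
        if a0 <= absorbance <= a1:
            # linear interpolation between the two bracketing points
            return c0 + (c1 - c0) * (absorbance - a0) / (a1 - a0)
    raise ValueError("absorbance outside the range of the standard curve")

curve = [(75.0, 0.30), (100.0, 0.40), (125.0, 0.50)]
print(concentration_from_standard_curve(0.45, curve))
```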
Such standard curves should be confirmed frequently, and always when a new spectrophotometer or new lots of reagents
are put into use.
In spectrophotometric assays that direct the preparation and use of a standard curve, it is permissible and preferable, when
the assay is employed infrequently, not to use the standard curve but to make the comparison directly against a quantity of
the Reference Standard approximately equal to that taken of the specimen, and similarly treated.
Fluorescence Spectrophotometry
The measurement of fluorescence is a useful analytical technique. Fluorescence is light emitted from a substance in an excited
state that has been reached by the absorption of radiant energy. A substance is said to be fluorescent if it can be made to fluo-
resce. Many compounds can be assayed by procedures utilizing either their inherent fluorescence or the fluorescence of suita-
ble derivatives.
Test specimens prepared for fluorescence spectrophotometry are usually one-tenth to one-hundredth as concentrated as
those used in absorption spectrophotometry, for the following reason. In analytical applications, it is preferable that the fluo-
rescence signal be linearly related to the concentration; but if a test specimen is too concentrated, a significant part of the
incoming light is absorbed by the specimen near the cell surface, and the light reaching the center is reduced. That is, the
specimen itself acts as an “inner filter.” However, fluorescence spectrophotometry is inherently a highly sensitive technique,
and concentrations of 10−5 M to 10−7 M frequently are used. It is necessary in any analytical procedure to make a working
curve of fluorescence intensity versus concentration in order to establish a linear relationship. All readings should be corrected
for a solvent blank.
Fluorescence measurements are sensitive to the presence of dust and other solid particles in the test specimen. Such impuri-
ties may reduce the intensity of the exciting beam or give misleading high readings because of multiple reflections in the
specimen cell. It is, therefore, wise to eliminate solid particles by centrifugation; filtration also may be used, but some filter
papers contain fluorescent impurities.
Temperature regulation is often important in fluorescence spectrophotometry. For some substances, fluorescence efficiency
may be reduced by as much as 1% to 2% per degree of temperature rise. In such cases, if maximum precision is desired, tem-
perature-controlled specimen cells are useful. For routine analysis, it may be sufficient to make measurements rapidly enough
so that the specimen does not heat up appreciably from exposure to the intense light source. Many fluorescent compounds
are light-sensitive. Exposed in a fluorometer, they may be photo-degraded into more or less fluorescent products. Such effects
may be detected by observing the detector response in relationship to time, and may be reduced by attenuating the light
source with filters or screens.
Change of solvent may markedly affect the intensity and spectral distribution of fluorescence. It is inadvisable, therefore, to
alter the solvent specified in established methods without careful preliminary investigation. Many compounds are fluorescent
in organic solvents but virtually nonfluorescent in water; thus, a number of solvents should be tried before it is decided wheth-
er or not a compound is fluorescent. In many organic solvents, the intensity of fluorescence is increased by elimination of dis-
solved oxygen, which has a strong quenching effect. Oxygen may be removed by bubbling an inert gas such as nitrogen or
helium through the test specimen.
A semiquantitative measure of the strength of fluorescence is given by the ratio of the fluorescence intensity of a test speci-
men and that of a standard obtained with the same instrumental settings. Frequently, a solution of stated concentration of
quinine in 0.1 N sulfuric acid or fluorescein in 0.1 N sodium hydroxide is used as a reference standard.
Light-Scattering
Turbidity can be measured with a standard photoelectric filter photometer or spectrophotometer, preferably with illumina-
tion in the blue portion of the spectrum. Nephelometric measurements require an instrument with a photocell placed so as to
receive scattered rather than transmitted light; this geometry applies also to fluorometers, so that, in general, fluorometers can
be used as nephelometers, by proper selection of filters. A ratio turbidimeter combines the technology of 90° nephelometry
and turbidimetry: it contains photocells that receive and measure scattered light at a 90° angle from the sample as well as
receiving and measuring the forward scatter in front of the sample; it also measures light transmitted directly through the sam-
ple. Linearity is attained by calculating the ratio of the 90° angle scattered light measurement to the sum of the forward scat-
tered light measurement and the transmitted light measurement. The benefit of using a ratio turbidimetry system is that the
measurement of stray light becomes negligible.
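The ratio calculation described above can be sketched as (illustrative only; the function name and signal values are hypothetical):

```python
def ratio_turbidity_signal(scatter_90: float, forward_scatter: float,
                           transmitted: float) -> float:
    # Linearity is attained by ratioing the 90° scattered light to the sum of
    # the forward-scattered and the directly transmitted light
    return scatter_90 / (forward_scatter + transmitted)

print(ratio_turbidity_signal(0.20, 0.30, 0.50))
```

Because stray light contributes similarly to numerator and denominator, its effect on the ratio is largely canceled.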
DSC Physical Tests / á852ñ Atomic Absorption Spectroscopy 333
In practice, it is advisable to ensure that settling of the particles being measured is negligible. This is usually accomplished by
including a protective colloid in the liquid suspending medium. It is important that results be interpreted by comparison of
readings with those representing known concentrations of suspended matter, produced under precisely the same conditions.
Turbidimetry or nephelometry may be useful for the measurement of precipitates formed by the interaction of highly dilute
solutions of reagents, or other particulate matter, such as suspensions of bacterial cells. In order that consistent results may be
achieved, all variables must be carefully controlled. Where such control is possible, extremely dilute suspensions may be meas-
ured.
The specimen solute is dissolved in the solvent at several different accurately known concentrations, the choice of concentra-
tions being dependent on the molecular weight of the solute and ranging from 1% for Mw = 10,000 to 0.01% for Mw =
1,000,000. Each solution must be very carefully cleaned before measurement by repeated filtration through fine filters. A dust
particle in the solution vitiates the intensity of the scattered light measured. A criterion for a clear solution is that the dissym-
metry, 45°/135° scattered intensity ratio, has attained a minimum.
The turbidity and refractive index of the solutions are measured. From the general 90° light-scattering equation, a plot of
HC/τ versus C is made and extrapolated to infinite dilution, and the weight-average molecular weight, M, is calculated from
the intercept, 1/M.
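The extrapolation to infinite dilution can be sketched as an ordinary least-squares fit (illustrative only; the helper and the synthetic data are hypothetical, assuming a linear Debye-type relation between HC/τ and C):

```python
def weight_average_molecular_weight(c_values, hc_over_tau):
    # Least-squares line through (C, HC/tau); the intercept at infinite
    # dilution (C = 0) equals 1/M
    n = len(c_values)
    mx = sum(c_values) / n
    my = sum(hc_over_tau) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(c_values, hc_over_tau))
             / sum((x - mx) ** 2 for x in c_values))
    intercept = my - slope * mx
    return 1.0 / intercept

# Synthetic data for M = 100,000 with a small positive slope
c = [0.001, 0.002, 0.003]
y = [1.0e-5 + 2.0e-4 * ci for ci in c]
print(round(weight_average_molecular_weight(c, y)))
```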
Visual Comparison
Where a color or a turbidity comparison is directed, color-comparison tubes that are matched as closely as possible in inter-
nal diameter and in all other respects should be used. For color comparison, the tubes should be viewed downward, against a
white background, with the aid of a light source directed from beneath the bottoms of the tubes, while for turbidity compari-
son the tubes should be viewed horizontally, against a dark background, with the aid of a light source directed from the sides
of the tubes.
In conducting limit tests that involve a comparison of colors in two like containers (e.g., matched color-comparison tubes), a
suitable instrument, rather than the unaided eye, may be used.
INTRODUCTION
Atomic absorption (AA) spectroscopy is an analytical method that supports qualification and/or quantification of elements.
In this use, the AA method supports procedures that measure the absorbance of radiation at a characteristic wavelength by a
vapor composed of ground state atoms. A typical instrument consists of a primary energy source that produces the spectrum
of the element under examination, a monochromator, and a suitable detector.
For discussion of the theory and principles of measurements, see Atomic Absorption Spectroscopy—Theory and Practice á1852ñ,
a resource that may be helpful but is not mandatory.
Qualification of an AA spectrophotometer can be divided into three elements: installation qualification (IQ), operational
qualification (OQ), and performance qualification (PQ); see also the general information chapter Analytical Instrument Qualifica-
tion á1058ñ.
Installation Qualification
The IQ requirements provide evidence that the hardware and software are properly installed in the desired location.
Operational Qualification
In OQ, an instrument's performance is characterized using standards of known spectral properties to verify that the system
operates within target specifications (see Table 1 and Table 2). The purpose of OQ is to demonstrate that instrument perform-
ance is suitable. OQ is a check of the key operational parameters performed following installation and following repairs and/or
maintenance.
The OQ tests in the following sections are typical examples only. Other tests and samples can be used to establish specifica-
tions for OQ. Instrument vendors often have samples and test parameters available as part of the IQ/OQ package.
Table 1. Linearity
Procedure: Generate a calibration curve from a blank and 25-, 50-, 75-, and 100-mg/L Cu standards. Inject each standard in triplicate and record the %RSD.
Acceptance criteria: Correlation coefficient NLT 0.995; %RSD of triplicate injections of each Cu standard (not including the blank) NMT 5%.
Table 2. Precision
Procedure: Assay 5 separate replicates of the 50-mg/L Cu standard versus the standard curve generated for the Linearity test. Inject each replicate in triplicate.
Acceptance criteria: %RSD of 5 replicates NMT 3%; %RSD of triplicate injections NMT 5%.
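The two OQ acceptance criteria above reduce to a correlation coefficient and a relative standard deviation, which can be computed directly. The Cu absorbance values in this Python sketch are hypothetical illustrations, not vendor specifications.

```python
import statistics

def correlation_coefficient(x, y):
    """Pearson R for a standard curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def percent_rsd(values):
    """100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical triplicate absorbances for the 50-mg/L Cu standard:
triplicate = [0.251, 0.249, 0.252]
print(percent_rsd(triplicate) < 5.0)        # -> True (NMT 5% criterion)

# Hypothetical blank-corrected curve, 0-100 mg/L Cu:
conc = [0, 25, 50, 75, 100]
absorbance = [0.000, 0.125, 0.251, 0.374, 0.498]
print(correlation_coefficient(conc, absorbance) >= 0.995)  # -> True
```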
Performance Qualification
PQ determines that the instrument is capable of meeting the user's requirements for all the parameters that may affect the
quality of the measurement.
Depending on typical use, the specifications for PQ may be different from the manufacturer's specifications. For validated
methods, specific PQ tests, also known as system suitability tests, can be used in lieu of PQ requirements.
Specific procedures, acceptance criteria, and time intervals for characterizing AA spectrophotometer performance depend
on the instrument and intended application. Demonstrating stable instrument performance over extended periods of time
provides some assurance that reliable measurements can be taken from test sample spectra using validated AA procedures.
PROCEDURE
Evaluate and select the type of material of construction, pretreatment, and cleaning of analytical labware used in AA analy-
ses. The material must be inert and, depending on the specific application, resistant to caustics, acids, and/or organic solvents.
For some analyses, diligence must be exercised to prevent the adsorption of analytes onto the surface of a vessel, particularly
in ultra-trace analyses. Contamination of the sample solutions from metal and ions present in the container also can lead to
inaccurate results.
For the analysis of a ubiquitous element, it is often necessary to use the purest grade of reagent or solvent available. Check all
solutions (diluents, matrix modifier solutions, ionization suppression solutions, reactants, and others) for elemental contamina-
tion before they are used in an analysis.
Standard Solution
Prepare as directed in the individual monograph. [NOTE—Commercially available single- or multi-element standard solu-
tions, traceable to the National Institute of Standards and Technology or to an equivalent national metrology organization, can
be used in the preparation of standard solutions.] Standard solutions, especially those used for ultra-trace analyses, may have
limited shelf life. Standard solutions should be retained for NMT 24 h unless stability is demonstrated experimentally.
The method of standard additions also can be used. This method involves adding a known concentration of the analyte ele-
ment to the sample at no fewer than two concentration levels against an unspiked sample preparation. The instrument re-
sponse is plotted against the concentration of the added analyte element, and a linear regression line is drawn through the
data points. The absolute value of the x-intercept multiplied by any dilution factor is the concentration of the analyte in the
sample.
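The x-intercept calculation for the method of standard additions can be sketched as follows; the spike levels and responses are hypothetical values chosen only to illustrate the arithmetic.

```python
def standard_additions_conc(added, response, dilution_factor=1.0):
    """Fit response versus added-analyte concentration by least squares;
    the absolute value of the x-intercept, multiplied by any dilution
    factor, is the analyte concentration in the sample."""
    n = len(added)
    mx, my = sum(added) / n, sum(response) / n
    slope = (sum((a - mx) * (r - my) for a, r in zip(added, response))
             / sum((a - mx) ** 2 for a in added))
    intercept = my - slope * mx
    return abs(-intercept / slope) * dilution_factor

# Unspiked sample plus two spike levels (hypothetical responses):
added = [0.0, 1.0, 2.0]          # mg/L analyte added
resp = [0.150, 0.250, 0.350]     # instrument response
print(round(standard_additions_conc(added, resp), 3))   # -> 1.5 (mg/L)
```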
Sample Solution
A variety of digestion techniques may be indicated to dissolve the sample. These may include hot-plate and microwave-assis-
ted digestions, including open-vessel and closed-vessel approaches. Note that open-vessel digestion generally is not recom-
mended for the analysis of volatile metals, e.g., selenium and mercury.
Analysis
Follow the procedure as directed in the individual monograph for the instrumental parameters.
The instrument must be standardized for quantification at the time of use. The absorbance of standard solutions that bracket
the target concentration is determined against an appropriate blank. The detector response is plotted as a function of the ana-
lyte concentration. When an analysis is performed at or near the detection limit, the analyst cannot always use a bracketing
standard. This is acceptable for qualitative but not quantitative tests. Regression analysis of the standard plot should be used to
evaluate the linearity of detector response, and individual monographs may set criteria for the residual error of the regression
line.
To demonstrate the stability of the system's initial standardization, the analyst must reassay a solution used in the initial
standard curve as a check standard at appropriate intervals throughout the analysis of the sample set. Unless otherwise indica-
ted in the individual monograph, the reassayed standard should agree with its expected value to within ±3% for an assay or
±20% for an impurity analysis.
Sample concentrations are calculated versus the working curve generated by plotting the detector response versus the con-
centration of the analyte in the standard solutions.
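The working-curve quantitation and the check-standard criterion above can be sketched in a few lines; the bracketing standards and responses here are hypothetical illustrations.

```python
def fit_line(conc, resp):
    """Least-squares slope and intercept for a working curve."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(resp) / n
    slope = (sum((c - mx) * (r - my) for c, r in zip(conc, resp))
             / sum((c - mx) ** 2 for c in conc))
    return slope, my - slope * mx

def quantify(response, slope, intercept):
    """Read a sample concentration off the working curve."""
    return (response - intercept) / slope

def check_standard_ok(measured, expected, tolerance_pct):
    """True if a reassayed standard agrees with its expected value to
    within the stated percentage (3% for an assay, 20% for an impurity)."""
    return abs(measured - expected) / expected * 100.0 <= tolerance_pct

# Hypothetical bracketing standards around a ~9-mg/L sample:
slope, intercept = fit_line([5.0, 10.0, 15.0], [0.101, 0.199, 0.300])
sample_conc = quantify(0.180, slope, intercept)
print(round(sample_conc, 2))                              # -> 8.99
print(check_standard_ok(measured=9.8, expected=10.0,
                        tolerance_pct=3.0))               # -> True
```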
Validation
Validation is required when an AA method is intended for use as an alternative to the official procedure for testing an official
article.
The objective of an AA procedure validation is to demonstrate that the measurement is suitable for its intended purpose,
including quantitative determination of the main component in a drug substance or a drug product (Category I assays), quan-
titative determination of impurities or limit tests (Category II), and identification tests (Category IV). [NOTE—For definition of
different categories, see Validation of Compendial Procedures á1225ñ.] Depending on the category of the test, analytical proce-
dure validation for AA may require the testing of linearity, range, accuracy, specificity, precision, detection limit, quantitation
limit, and robustness. These analytical performance characteristics apply to externally standardized methods and to the meth-
od of standard additions.
General information chapter á1225ñ provides definitions and general guidance on analytical procedures validation without
indicating specific validation criteria for each characteristic. The intention of the following sections is to provide the user with
specific validation criteria that represent the minimum expectations for this technology. For each particular application, tighter
criteria may be needed to demonstrate suitability for the intended use.
ACCURACY
For Category I assays or Category II tests, accuracy can be determined by conducting recovery studies with the appropriate
matrix spiked with known concentrations of elements. It is also an acceptable practice to compare assay results obtained using
the AA procedure under validation to those of an established analytical procedure. In standard addition methods, accuracy
assessments are based on the final intercept concentration, not the recovery calculated from the individual standard additions.
Validation criteria: 95.0%–105.0% mean recovery for the drug substance assay and the drug product assay, and 70.0%–
150.0% mean recovery for the impurity analysis. These criteria apply throughout the intended range.
Precision
REPEATABILITY
The analytical procedure should be assessed by measuring the concentrations of six independently prepared sample solu-
tions at 100% of the assay test concentration. Alternatively, three replicates of three separate sample solutions at different con-
centrations can be used. The three concentrations should be close enough that the repeatability is constant across the concen-
tration range. If this is done, the repeatability at the three concentrations is pooled for comparison to the acceptance criteria. If
validating a procedure by the method of standard additions, the precision criterion applies to the final experimental result, not
to the individual standard addition levels.
Validation criteria: The relative standard deviation is NMT 5.0% for the drug substance assay, NMT 5.0% for the drug prod-
uct assay, and NMT 20.0% for the impurity analysis.
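When three concentrations are used, the repeatability values must be pooled before comparison to the criterion. A minimal sketch of a degrees-of-freedom-weighted pooled %RSD follows; the recovered concentrations are hypothetical.

```python
import statistics

def pooled_rsd(groups):
    """Pool %RSD across concentration groups, weighting each group's
    squared RSD by its degrees of freedom. This assumes repeatability
    is constant across the range, as the chapter requires."""
    num = sum((len(g) - 1) * (statistics.stdev(g) / statistics.mean(g)) ** 2
              for g in groups)
    dof = sum(len(g) - 1 for g in groups)
    return 100.0 * (num / dof) ** 0.5

# Hypothetical triplicates at three concentrations (mg/L recovered):
groups = [[4.9, 5.0, 5.1], [9.8, 10.1, 10.0], [15.2, 14.9, 15.0]]
print(pooled_rsd(groups) < 5.0)    # -> True (meets NMT 5.0% criterion)
```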
INTERMEDIATE PRECISION
The effect of random events on the analytical precision of the procedure should be established. Typical variables include per-
forming the analysis on different days, using different instrumentation, or having the method performed by two or more ana-
lysts. As a minimum, the analytical procedure should be assessed by performing the repeatability test in any of the conditions
previously mentioned (totaling 12 measurements).
Validation criteria: The relative standard deviation is NMT 8.0% for the drug substance assay, NMT 8.0% for the drug prod-
uct assay, and NMT 25.0% for the impurity analysis.
SPECIFICITY
The procedure must be able to unequivocally assess each analyte element in the presence of components that may be ex-
pected to be present, including any matrix components.
Validation criteria: Demonstrated by meeting the accuracy requirement.
QUANTITATION LIMIT
The limit of quantitation (QL) can be estimated by calculating the standard deviation of NLT six replicate measurements of a
blank solution, divided by the slope of a standard curve, and multiplying by 10. If validating a procedure using the method of
standard additions, the slope of standards applied to a solution of the test material is used. Other suitable approaches can be
used (see á1225ñ).
A measurement of a test solution prepared from a representative sample matrix spiked at the estimated QL concentration
must be performed to confirm accuracy. If validating a procedure using the method of standard additions, the validation crite-
rion applies to the final experimental result, not the spike recovery of the individual standard addition levels.
Validation criteria: The analytical procedure should be capable of determining the analyte precisely and accurately at a level
equivalent to 50% of the specification.
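The QL estimate described above (10 × the blank standard deviation divided by the curve slope) can be sketched directly; the blank readings and slope are hypothetical numbers for illustration only.

```python
import statistics

def estimated_ql(blank_readings, curve_slope):
    """QL estimate = 10 * SD(blank) / slope, per the chapter;
    NLT six replicate blank measurements are required."""
    assert len(blank_readings) >= 6
    return 10.0 * statistics.stdev(blank_readings) / curve_slope

# Hypothetical blank absorbances and a 0.020 AU per (ug/L) slope:
blanks = [0.0011, 0.0009, 0.0012, 0.0008, 0.0010, 0.0010]
print(round(estimated_ql(blanks, 0.020), 3))   # -> 0.071 (ug/L)
```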
LINEARITY
A response curve between the analyte concentration and absorbance is prepared from NLT five standard solutions at con-
centrations encompassing the anticipated concentration of the test solution. The standard curve is then evaluated using appro-
priate statistical methods, such as a least-squares regression.
For experiments that do not yield a linear relationship between analyte concentration and AA response, appropriate statisti-
cal methods must be applied to describe the analytical response.
Validation criteria: Correlation coefficient (R), NLT 0.995 for Category I assays and NLT 0.99 for Category II quantitative
tests.
RANGE
Range is the interval between the upper and lower concentrations (amounts) of analyte in the sample (including these con-
centrations) for which it has been demonstrated that the analytical procedure has a suitable level of precision, accuracy, and
linearity. Range is demonstrated by meeting the linearity, precision, and accuracy requirements.
Validation criteria: For Category I tests, the validation range for 100.0% centered acceptance criteria is 80.0%–120.0%. For
noncentered acceptance criteria, the validation range is 10.0% below the lower limit to 10.0% above the upper limit. For con-
tent uniformity, the validation range is 70.0%–130.0%. For Category II tests, the validation range covers 50.0%–120.0% of
the acceptance criteria.
ROBUSTNESS
The reliability of an analytical measurement is demonstrated by deliberate changes to experimental parameters. For AA this
can include but is not limited to sample preparation steps and heating programs, including atomization hold time or atomiza-
tion temperature. Exercise caution when changing fuel and oxidant gas flows and burner hardware, because this could poten-
tially create a flash-back condition.
Verification
U.S. Current Good Manufacturing Practices regulations [21 CFR 211.194(a)(2)] indicate that users of the analytical proce-
dures, as described in USP–NF, are not required to validate these procedures if provided in a monograph. Instead, they must
simply verify their suitability under actual conditions of use.
The objective of an AA procedure verification is to demonstrate that the procedure, as prescribed in a specific monograph,
can be executed by the user with suitable accuracy, specificity, linearity, and precision using the instruments, analysts, and
sample matrices available. According to Verification of Compendial Procedures á1226ñ, if the verification of the compendial pro-
cedure by following the monograph is not successful, the procedure may not be suitable for use with the article under test. It
may be necessary to develop and validate an alternative procedure as allowed in General Notices 6.30.
Verification of compendial AA methods should, at a minimum, include the execution of the validation parameters for specif-
icity, linearity, accuracy, precision, and limit of quantitation, when appropriate, as indicated in Validation.
INTRODUCTION
Fluorescence is a two-step process that requires absorption of light at a specific wavelength (excitation) followed by emission
of light, usually at a longer wavelength. The emission of light is termed fluorescence.
The most common type of fluorescent sample is a submicromolar transparent solution that absorbs light following the Beer–
Lambert–Bouguer Law and fluoresces with an intensity that is directly proportional to the concentration, the absorptivity, and
the fluorescence quantum yield of the fluorescent species or fluorophore.
Unlike absorption spectroscopy, where deviation from linearity is the exception, fluorescence linearity can be affected by a
number of sample-related effects. For additional information, see Fluorescence Spectroscopy—Theory and Practice á1853ñ.
Fluorescence methods also are termed background-free because little excitation light reaches the detector. This characteristic
makes fluorescence detection highly sensitive, down to single-molecule detection in some cases. Fluorescence detection also
can be highly specific because a fluorophore emits a characteristic emission pattern. Specificity and sensitivity are two of the
more important strengths of fluorescence methods.
Analysts ensure the suitability of a specific instrument for a given procedure by using a stepwise evaluation for the desired
application from selection to instrument retirement: design qualification (DQ); installation qualification (IQ); an initial perform-
ance-to-specification qualification, also known as operational qualification (OQ); and an ongoing performance qualification
(PQ). For additional information, see general chapter Analytical Instrument Qualification á1058ñ.
DQ and IQ are not further considered in this chapter. The purpose of this section is to provide test methods and acceptance
criteria to ensure that the instrument is suitable for its intended use (OQ) and that it will continue to function properly over
extended time periods (PQ).
As with any spectrometric device, analysts must qualify a spectrofluorometer for both wavelength (x-axis) and relative inten-
sity (y-axis or signal axis) accuracy and precision. They also must establish sensitivity. OQ should span the operational ranges
required within the laboratory for both intensity and wavelength scales.
The tolerances given in both the instrument OQ and PQ are applicable for general use. Specifications for particular instru-
ments and applications can vary depending on the analytical procedure used and the desired accuracy of the final result. In-
strument vendors often have samples and test parameters available as part of the IQ/OQ package.
Wherever possible, analysts should use certified reference materials for purposes of calibration in the steps detailed below in
preference to laboratory-prepared solutions. When certified reference materials are obtained from a recognized accredited
source, they have independently verified traceable value assignments with associated calculated uncertainties.
Two general types of instrumental measurements are differentiated here: spectral (i.e., those that measure intensity versus
wavelength) and fixed (i.e., those that measure intensity at a fixed wavelength and bandwidth).
CONTROL OF WAVELENGTHS
The level of confidence of measured peak positions is defined by wavelength accuracy for spectral measurements. Determi-
nation of the accuracy of many wavelengths across the desired wavelength range demonstrates if further calibration beyond a
single point is needed. Multipoint calibration involves measuring wavelength biases at multiple wavelengths and correcting for
the wavelength dependence of the bias. A single-point calibration often can be applied to the wavelength axis in an instru-
ment's software before data are collected, but a multipoint calibration may require that the correction be applied to spectra
after they are collected.
For fixed measurements, the wavelength position and bandwidth must be reproducible. For filter-based wavelength selec-
tion, this requires that only the same filter be used when analysts compare data over time. If a different filter must be used
(e.g., when data are compared across instruments and laboratories), then the transmission curves of the filters must be com-
pared.
Wavelength precision should be determined over the operational range using at least six replicate measurements. The stand-
ard deviation should not exceed 1 nm.
This procedure is described as the primary application because the emission lines produced from a discharge lamp are char-
acteristic of the source element, and, as a fundamental physical standard, these wavelengths have been measured with an un-
certainty of NMT ± 0.01 nm. In solution spectrofluorometry the wavelength bias required rarely exceeds 1.0 nm. For these
reasons, the atomic line standard values are cited without uncertainty. The lamp should be placed at the source position in the
spectrofluorometer.
A commonly employed low-pressure mercury lamp has a number of intense lines that cover a large part of the UV and visible
range. Manufacturers often use two xenon lines from the source at 260.5 and 541.9 nm as an internal calibration check,
because these lines allow the accuracy of both the excitation and emission monochromators to be verified and can be used for
diagnostic purposes (see Table 1).1
Table 1. Elemental Line Spectra Wavelengths
Wavelength
Element (nm)
Hg 253.7
Xe 260.5
Hg 296.7
Hg 365.0
Hg 404.7
Hg 435.8
Xe 541.9
Hg 546.1
Hg 577.0
Hg 579.1
This procedure uses a solution of a rare earth oxide prepared by dissolution in acid media. The most frequently used is hol-
mium oxide in perchloric acid in combination with a diffuse reflector located at the sample position. Suitable certified refer-
ence materials are available commercially.2 The wavelength selector not being scanned should be removed; if removal is not
practicable, it should be set to zero order (in this position a grating behaves like a mirror reflecting all wavelengths). The dif-
fuse reflector is scanned with and without the rare earth sample in place, and the ratio of the two intensities is calculated to
obtain an effective transmittance spectrum. Minima in the intensity ratio correspond to absorption peaks of the sample. For a
4% (w/w) solution of holmium oxide in perchloric acid at 1.0 nm spectral bandwidth and a path length of 1 cm, these minima
are listed in Table 2.3
Table 2
Wavelength
(nm)
241.1
249.9
278.1
287.2
333.5
345.4
361.3
385.6
416.3
451.4
467.8
485.2
536.6
640.5
1 The rounded values are taken from ASTM Standard E388-04 (2009).
2 NIST SRM 2034 is no longer available.
3 The rounded values are taken from the intrinsic wavelength standard absorption band data from Travis et al. J Phys Chem Ref Data 2005;34(1):41.
If the operational range of the spectrophotometer lies outside 240–650 nm, other certified rare earth oxides or other solu-
tions are used.
Didymium (a mixture of neodymium and praseodymium) is available as a traceable standard in both solution and glass pre-
sentations. Didymium is similar in preparation to the holmium materials and has useful peak characteristics in the 730–870 nm
region. Useful peaks are found in the didymium solution at approximately 731.6, 740.0, 794.1, 799.0, and 864.4 nm.
This procedure uses solid reference materials manufactured by polymerization of a variety of fluorescent active aromatic ring
compounds into an inert polymethylmethacrylate (PMMA) matrix. These materials are supplied as polished blocks for use in a
standard cuvette holder (see Table 3).
Table 3. Dopant, Excitation, and Emission Data for Selected Reference Materials
Excitation l Emission l
Dopant (nm) (nm)
p-Terphenyl 295 338
Ovalene 342 482
Tetraphenylbutadiene 348 422
Anthracene 360 402
Performance Verification
Results from day-to-day testing of photostable intensity standards are used to verify the performance of an instrument. If the
measured intensity does not change from that observed when the instrument was qualified, then instrument performance has
not changed and remains qualified. Using such standards to determine an artifact-based or quasi-absolute intensity scale po-
tentially enables measured intensities and instrument sensitivity to be compared over time or between instruments. Intensity
measurements should be within the linear range of the instrument's detection system before analysts attempt intensity com-
parisons, and will be affected by any instrumental effect directly related to the fluorescence signal, e.g., changes in source in-
tensity, detector response, etc.
For instruments with filter-based wavelength selection, analysts use fluorescence standards for spectral correction to deter-
mine expected intensity differences caused by filters with different transmission profiles. By compensating for these intensity
differences due to spectral mismatch, the analyst can determine a quasi-absolute intensity scale for these instruments. Analysts
should approach instrument-to-instrument comparisons with particular caution because of the relatively large and difficult-to-
quantify uncertainties involved.
The Raman band of water is used to measure signal-to-noise ratios in fluorescent instruments. The Raman band of water is
inherently reproducible and does not degrade with time. Water is convenient to obtain in a pure state and allows interlabora-
tory comparisons to be made with a high level of confidence. No preparation or dilution is required. The Raman band is a low-
level signal that provides a good test for both the optics and the electronics of an instrumental system.
The Raman band of water is not caused by fluorescence but is a result of Raman scattering. For water, the Raman band is
always red-shifted 3382 cm–1 relative to the excitation.4 This band usually is measured by excitation at 350 nm, resulting in a
Raman peak at 397 nm, but radiation up to 500 nm also can be used as the excitation wavelength, and the corresponding
emission peak is 602 nm.
4 The red-shift value is taken from Parker CA. Raman spectra in spectrofluorimetry. Analyst. 1959;84:446–453.
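The band positions quoted above follow directly from the fixed 3382 cm−1 shift, as this short illustrative sketch shows.

```python
# Position of the water Raman band for a given excitation wavelength,
# using the fixed 3382 cm-1 red shift cited in the chapter.

def raman_emission_nm(excitation_nm, shift_cm=3382.0):
    ex_wavenumber = 1e7 / excitation_nm        # nm -> cm-1
    return 1e7 / (ex_wavenumber - shift_cm)    # cm-1 -> nm

print(round(raman_emission_nm(350.0)))   # -> 397
print(round(raman_emission_nm(500.0)))   # -> 602
```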
Several solid-doped fluorescent materials are available.5 These polymers or glasses enable the relative spectral correction and
day-to-day performance qualification of fluorescence instruments across the UV, visible, and NIR regions from 320 to 830 nm.
The high photostability of the materials makes them particularly useful as day-to-day intensity standards, even when spectral
correction is not needed or when the excitation wavelength differs from that used for certification. A certified, steady-state
emission spectrum is supplied with each certified reference material, along with the estimated total uncertainties. The refer-
ence is available in the form of a solid glass, standard-sized cuvette (12.5 mm × 12.5 mm × 45 mm) with three polished long
faces for 90° detection and one frosted long face for front-face or epifluorescence detection.
Alternatively, analysts can use fluorophores in solutions that have been shown to be stable.6,7
Two general classes of procedural measurements commonly are performed by fluorescence spectrometry: qualitative and
quantitative.
Qualitative fluorescence measurements are used to detect the presence of particular analytes and yield a positive or negative
answer. The excitation and emission wavelengths often are selected at the peak maximum of the fluorophore to be detected.
The minimum amount of analyte needed for a positive result should be considered by the analyst to ensure that the method is
appropriate for the particular application. The observation of fluorescence at the peak position above the limit of detection
(usually 3 times the noise level) indicates a positive result.
Quantitative fluorescence measurements are used to determine amounts or concentrations of analytes in unknown samples.
These quantities may be determined in absolute units, such as moles or moles per L, or in relative units, such as the ratio of the
concentrations of two fluorescent analytes contained in a single unknown solution. These determinations use the following
proportionality relating fluorescence signal (S) at a given pair of excitation and emission wavelengths (lex, lem) to fluorescent
analyte concentration (c):
S ∝ I0 × W × Rd × a × Φ × c
where I0 is the intensity of the excitation light, W is an instrumental collection factor, Rd is the responsivity of the detector, a is the absorptivity, and Φ is the fluorescence quantum yield.
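For relative quantitation under fixed instrument settings, the instrumental and molecular factors in this proportionality cancel between two solutions of the same fluorophore, leaving a simple signal ratio. A minimal sketch, with hypothetical signal values, assuming both measurements lie in the linear (dilute) regime:

```python
def conc_from_reference(sample_signal, ref_signal, ref_conc):
    """Under identical settings the instrumental terms in the
    proportionality cancel, so c_sample = c_ref * S_sample / S_ref.
    Valid only in the linear (dilute) regime."""
    return ref_conc * sample_signal / ref_signal

# Hypothetical signals for a sample versus a 1.0 uM reference:
print(conc_from_reference(sample_signal=450.0, ref_signal=500.0,
                          ref_conc=1.0e-6))   # -> 9e-07 (mol/L)
```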
Comparisons of a test specimen with a Reference Standard are best made at a peak of spectral emission for the compound
of interest. Assays based on spectrofluorometry give the commonly accepted wavelengths for excitation and peak spectral
emission of the substance in question. Different spectrofluorometers may show minor variation in the apparent wavelength of
this peak. Comparisons should be made at the wavelength at which peak emission occurs. If this differs from the wavelength
specified in the monograph by more than ±1 nm in the range of 200–400 nm or by more than ±2 nm in the range of 400–
800 nm, recalibration of the instrument may be indicated.
With few exceptions, pharmacopeial spectrofluorometric procedures provide results by comparison against a USP Reference
Standard. This ensures measurement under identical conditions for the test specimen and the Reference Standard. These
conditions could include wavelength setting, spectral bandwidth selection, cell placement and correction, and intensity levels.
5 Available from commercial vendors and from NIST as SRMs 2940 (orange emission), 2941 (green emission), 2942 (UV emission), 2943 (blue emission), and
Cells that exhibit identical optical fluorescence characteristics at a specific wavelength may differ considerably at other wave-
lengths. Analysts should establish and use appropriate cell corrections where required.
The terms similar preparation and similar solution in tests and assays that involve spectrofluorometry indicate that the Refer-
ence Standard should be prepared and observed in a manner that is identical to that used for the sample under test. Usually
when a solution of the specified Reference Standard is prepared at (i.e., within 10% of) the desired concentration, the fluores-
cence intensity is calculated on the basis of the exact amount weighed out. If analysts have not used a previously dried speci-
men of the Reference Standard, they should correct this intensity on the anhydrous basis.
The expressions concomitantly determine and concomitantly measured as used in procedures that involve spectrofluorometry
indicate that the fluorescence of both the sample solution and the standard solution, relative to the specified test blank, are to
be measured in immediate succession.
For determinations using UV or visible spectrofluorometry, the specimen generally is dissolved in a solvent. Unless otherwise
directed in the monograph, determinations are made at room temperature by using a path length of 1 cm. Many solvents are
suitable for these ranges, including water, alcohols, chloroform, lower hydrocarbons, ethers, and dilute solutions of strong
acids and alkalis. Solvents should be free from contaminants that fluoresce in the spectral region under examination. For the
solvent, use water-free methanol or alcohol, or alcohol that has been denatured by the addition of methanol but does not
contain benzene or other interfering impurities. Spectrophotometric-quality solvents that are guaranteed to be free
from contaminants are available commercially from several sources, but some analytical reagent-grade organic solvents may
contain traces of impurities that fluoresce strongly in the UV region. New lots of these solvents should be checked for their
transparency, and analysts should take care to use the same lot of solvent for the preparation of the sample solution, the
standard solution, and the blank. Solvents that do not have an interfering fluorescence signature at the wavelength(s) of inter-
est should be used. In normal usage, the fluorescence baseline intensity should not be more than 2% of the expected meas-
urement signal unless a larger value previously has been justified.
Assays in the visible region usually call for comparing concomitantly the fluorescence intensities produced by the sample
solution with that produced by a standard solution that contains approximately an equal quantity of a USP Reference Stand-
ard. In some situations, it may be permissible to omit the use of a Reference Standard. This is true when spectrofluorometric
assays are made with routine frequency, when a suitable standard curve is available and is prepared with the appropriate USP
Reference Standard, and when the substance assayed conforms to Beer–Lambert–Bouguer Law within the range of about
75%–125% of the final concentration used in the assay. Under these circumstances, the fluorescence intensity found in the
assay can be interpolated on the standard curve, and the assay result can be calculated. Such standard curves should be con-
firmed frequently and always when a new spectrofluorometer or new lots of reagents are put into use.
Validation
Validation is required when a procedure based on fluorescence spectroscopy is intended for use as an alternative to the offi-
cial procedure. The objective of validation is to demonstrate that the measurement is suitable for its intended purpose, includ-
ing the following: quantitative determination of the main component in a drug substance or a drug product (Category I as-
says), quantitative determination of impurities or limit tests (Category II), and identification tests (Category IV). Depending on
the category of the test (for additional information, see Table 2 in Validation of Compendial Procedures ⟨1225⟩), the process for
analytical procedure validation for fluorescence requires testing for linearity, range, accuracy, specificity, precision, detection
limit, quantitation limit, and robustness. These analytical performance characteristics apply to externally standardized proce-
dures and those that use standard additions. Chapter ⟨1225⟩ provides definitions and general guidance about analytical procedure validation without indicating specific validation criteria for each characteristic. The intention of the following sections is
to provide the user with specific validation criteria that represent the minimum expectations for fluorescence technology. For
each particular application, tighter criteria may be needed in order to demonstrate suitability for the intended use.
ACCURACY
For Category I, Category II, and Category III procedures, accuracy is determined by conducting recovery studies with the
appropriate matrix spiked with known concentrations of the analyte. Analysts also can compare assay results obtained using
the fluorescence procedure under validation to those from an established analytical procedure.
Validation criteria: 98.0%–102.0% mean recovery for a drug substance, 95.0%–105.0% mean recovery for a drug product
assay, and 80.0%–120.0% mean recovery for impurity analysis. These criteria must be met throughout the intended range.
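The mean-recovery calculation behind these criteria can be sketched as follows. The spiked and measured values are hypothetical; only the 95.0%–105.0% drug product criterion comes from the text above.

```python
# Sketch: mean recovery from a spiked-matrix accuracy study, checked
# against the drug product assay criterion (95.0%-105.0%).
import statistics

def mean_recovery(measured, spiked):
    """Percent recovery averaged over replicate spiked preparations."""
    return statistics.fmean(100.0 * m / s for m, s in zip(measured, spiked))

spiked   = [10.0, 10.0, 10.0]   # known added concentrations (mg/L, hypothetical)
measured = [9.8, 10.1, 10.0]    # concentrations found by the procedure
recovery = mean_recovery(measured, spiked)
passes_drug_product = 95.0 <= recovery <= 105.0
```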
PRECISION
REPEATABILITY
Repeatability of the analytical procedure is assessed by measuring the concentrations of six independently prepared sample
solutions at 100% of the assay test concentration. Alternatively, repeatability is assessed by measuring concentrations of three
replicates of three separate sample solutions at different concentrations. The three concentrations should be sufficiently similar
so that the repeatability is similar across the concentration range. If this is done, the repeatability at the three concentrations
can be pooled for comparison to the acceptance criteria.
Validation criteria: The relative standard deviation is NMT 1.0% for a drug substance, NMT 2.0% for a drug product assay,
and NMT 20.0% for impurity analysis.
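Pooling the repeatability across the three concentrations, as described above, can be sketched like this. The replicate values are hypothetical, and the degrees-of-freedom-weighted root-mean-square pooling is one common convention, not prescribed by the chapter.

```python
# Sketch: per-concentration RSDs pooled (df-weighted RMS) for comparison
# with the NMT 2.0% drug product assay criterion.
import statistics

def rsd(values):
    """Relative standard deviation, %."""
    return 100.0 * statistics.stdev(values) / statistics.fmean(values)

def pooled_rsd(groups):
    """Pool per-concentration RSDs, weighting each by its degrees of freedom."""
    num = sum((len(g) - 1) * rsd(g) ** 2 for g in groups)
    den = sum(len(g) - 1 for g in groups)
    return (num / den) ** 0.5

# three replicates at each of three concentrations (hypothetical results, mg/L)
groups = [[4.98, 5.02, 5.00], [9.95, 10.05, 10.00], [15.1, 14.9, 15.0]]
overall = pooled_rsd(groups)
meets_drug_product = overall <= 2.0   # chapter criterion: NMT 2.0%
```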
INTERMEDIATE PRECISION
The effect of random events on the analytical precision of the procedure should be evaluated. Typical variables include per-
forming the analysis on different days, using different instrumentation, and having the method performed by two or more
analysts. As a minimum, any combination of at least two of these factors totaling six experiments will provide an estimation of
intermediate precision.
Validation criteria: The relative standard deviation is NMT 1.0% for a drug substance, NMT 3.0% for a drug product assay,
and NMT 25.0% for impurity analysis.
SPECIFICITY
In fluorescence measurements, specificity is ensured by the use of a Reference Standard wherever possible and is demonstra-
ted by the lack of interference from other components present in the matrix.
Validation criteria: Demonstrated by meeting the accuracy requirement.
DETECTION LIMIT
Analysts can estimate the detection limit (DL) by calculating the standard deviation of NLT six replicate measurements of a
blank solution and multiplying by 3.3. Alternatively, the standard deviation can be determined from the error of the intercept
from a calibration curve or by demonstration that the signal-to-noise ratio is >3.3. Analysts must confirm the estimated DL by
analyzing samples at the calculated concentration.
QUANTITATION LIMIT
Analysts can estimate the quantitation limit (QL) by calculating the standard deviation of NLT six replicate measurements of
a blank solution and multiplying by 10. Alternatively, the standard deviation can be determined from the error at the intercept
from a calibration curve or by demonstration that the signal-to-noise ratio is >10.
A sample solution prepared from a representative sample matrix spiked at the required QL concentration is measured to
confirm sufficient sensitivity and adequate precision. The observed signal-to-noise ratio at the required QL should be >10.
Validation criteria: For the estimated limit of quantitation to be considered valid, the measured concentration must be accu-
rate and precise at a level equal to or less than 50% of the specification.
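The 3.3σ and 10σ estimates described in the two sections above can be sketched numerically. The blank readings and slope are hypothetical; dividing by the calibration slope to convert the signal-based limits to concentration units follows the convention stated in the corresponding mid-IR quantitation limit section, and is an assumption here.

```python
# Sketch: DL = 3.3*sigma/slope and QL = 10*sigma/slope from six replicate
# blank measurements (hypothetical readings and calibration slope).
import statistics

blank_readings = [0.11, 0.13, 0.12, 0.10, 0.12, 0.14]  # six blank signals
slope = 2.0                    # calibration slope, signal per (mg/L)

sigma = statistics.stdev(blank_readings)
dl = 3.3 * sigma / slope       # estimated detection limit, mg/L
ql = 10.0 * sigma / slope      # estimated quantitation limit, mg/L
```

Both estimates must still be confirmed experimentally by analyzing samples at the calculated concentrations, as the text requires.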
LINEARITY
A response curve between the analyte concentration and the fluorescence signal is prepared from NLT five Standard solu-
tions at concentrations that encompass the anticipated concentration of the sample solution. Analysts then should evaluate the
standard curve for linearity using appropriate statistical methods such as a least-squares regression. Deviation from linearity can
result from either instrumental or sample factors, or both, and can be reduced to acceptable levels by reduction of the analyte
concentration and thereby the associated absorbance values.
Validation criteria: The correlation coefficient (R) must be NLT 0.995 for Category I assays and NLT 0.99 for Category II
quantitative tests.
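The correlation-coefficient check described above can be sketched as follows, with hypothetical standard solutions and a plain least-squares correlation.

```python
# Sketch: correlation coefficient R for five standard solutions bracketing
# the anticipated sample concentration, versus the NLT 0.995 Category I
# criterion (hypothetical data).
import statistics

def correlation(x, y):
    """Pearson correlation coefficient of the standard curve."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

concs  = [8.0, 9.0, 10.0, 11.0, 12.0]          # mg/L
signal = [80.4, 89.8, 100.1, 110.2, 119.6]     # fluorescence intensity
r = correlation(concs, signal)
meets_category_I = r >= 0.995
```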
RANGE
The operational range of an analytical instrument (and the analytical procedure as a whole) is the interval between the up-
per and lower concentrations (amounts) of analyte in the sample (including these concentrations) for which it has been dem-
onstrated that the instrumental response function has a suitable level of precision, accuracy, and linearity.
Validation criteria: For Category I tests, the validation range for 100.0% centered acceptance criteria is 80.0%–120.0%. For
noncentered acceptance criteria, the validation range is 10.0% below the lower limit to 10.0% above the upper limit. For con-
tent uniformity, the validation range is 70.0%–130.0%. For Category II tests, the validation range covers 50.0%–120.0% of
the acceptance criteria.
ROBUSTNESS
Analysts should demonstrate the reliability of an analytical measurement by deliberate changes to experimental parameters.
For fluorescence these changes can include measuring the stability of the analyte under specified storage conditions, varying
pH, removing oxygen, and adding possible interfering species, to list a few examples. Analysts should determine robustness
concurrently using a suitable design-of-experiments procedure.
Verification
Analytical procedures described in USP–NF do not require validation. Instead, a verification is used to determine a proce-
dure's suitability under actual conditions of use.
Thus the objective of fluorescence procedure verification is to demonstrate the suitability of a test procedure under actual
conditions of use. Performance characteristics that verify the suitability of a fluorescence procedure are similar to those re-
quired for any analytical procedure. See Verification of Compendial Procedures ⟨1226⟩ for a discussion of the applicable general principles. Verification should be performed using a reference material and a well-defined matrix. Ver-
ification of compendial fluorescence procedures should at a minimum include the execution of the validation parameters for
specificity, accuracy, precision, and quantitation limit, when appropriate, as indicated in Validation.
Some fluorescence procedures employ chromogenic reactions. Generally the requirements for the analytical performance
characteristics should be used. In some instances the required accuracy and precision for the direct measurements may not be
achievable. Under these circumstances, the accuracy and precision requirements may be widened by as much as 50%. Howev-
er, any such widening must be justified on scientific grounds and with documented evidence. Under these circumstances, the
amount of replication required to produce a scientifically sound reportable value may be increased.
INTRODUCTION
Mid-infrared (mid-IR) spectroscopy is an instrumental method used to measure the absorption of electromagnetic radiation
over the wavenumber range between 4000 and 400 cm−1 (corresponding to wavelengths between 2.5 and 25 µm). Unless otherwise specified in a monograph or other validated procedure, the region from 3800 to 650 cm−1 (corresponding to wavelengths from 2.6 to 15 µm) should be used to ensure compliance with monograph specifications for IR absorption. The ab-
sorption of photons causes the promotion of molecules from a ground state of their vibrational mode to an excited vibrational
state.
Vibrational modes are defined by the motion of all atoms in a molecule. When molecules contain certain functional groups,
IR absorption often occurs in specific narrow spectral ranges. In these cases, the wavenumbers at which these transitions occur
are known as group frequencies. When a vibrational mode involves atomic motions of more than just a few atoms, the fre-
quencies occur over wider spectral ranges and are not characteristic of a particular functional group but are more characteristic
of the molecules as a whole. Such bands are known as fingerprint bands. All strong bands that absorb at wavenumbers above
1500 cm−1 are group frequencies. Strong bands that absorb below 1500 cm−1 can be either group frequencies or fingerprint
bands.
For discussion of the theory and principles of measurements, see Mid-Infrared Spectroscopy—Theory and Practice ⟨1854⟩,
which may be a helpful, but not mandatory, resource.
QUALIFICATION OF IR SPECTROPHOTOMETERS
Qualification of mid-IR spectrophotometers is divided into three components: Installation Qualification (IQ), Operational
Qualification (OQ), and Performance Qualification (PQ). For further information, see Analytical Instrument Qualification ⟨1058⟩.
Installation Qualification
The IQ requirements provide evidence that the hardware and software are properly installed in the desired location.
Operational Qualification
Because the majority of mid-IR spectra are measured with Fourier-transform IR (FTIR) spectrophotometers, only these instru-
ments will be discussed. [NOTE—No recommended values for signal-to-noise ratio or 100% line stability are included in this
chapter because these vary with manufacturer, model, and age of the instrument.]
WAVENUMBER ACCURACY
The most commonly used wavenumber standard for IR spectrophotometry is an approximately 35-µm-thick, matte polystyr-
ene film. The spectrum of such a film has several sharp bands at 3060.0, 2849.5, 1942.9, 1601.2, 1583.0, 1154.5, and 1028.3
cm−1. The most frequently chosen band for wavenumber accuracy determination is located at 1601.2 cm−1. Using a suitable
polystyrene film or other well-characterized wavenumber standard, scan from 3800 to 650 cm−1 wavenumbers, and compare
the wavenumber of maximum response of the chosen band using the center-of-gravity, polynomial spline procedure, or other
peak-picking algorithms to the known absorption wavenumber of the standard. The acceptable tolerance for the measured
wavenumber is ±1.0 cm−1.
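The wavenumber-accuracy check described above can be sketched with a simple center-of-gravity peak pick. The digitized band points are hypothetical; only the 1601.2 cm−1 reference value and the ±1.0 cm−1 tolerance come from the text.

```python
# Sketch: centroid (center-of-gravity) peak pick on hypothetical digitized
# points around the polystyrene band, compared with 1601.2 cm-1.
def centroid(wavenumbers, absorbances):
    """Intensity-weighted center of gravity of the sampled band."""
    total = sum(absorbances)
    return sum(w * a for w, a in zip(wavenumbers, absorbances)) / total

band_wn  = [1599.0, 1600.0, 1601.0, 1602.0, 1603.0]   # cm-1
band_abs = [0.20, 0.55, 0.90, 0.62, 0.22]             # absorbance
measured = centroid(band_wn, band_abs)
within_tolerance = abs(measured - 1601.2) <= 1.0       # chapter tolerance
```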
Performance Qualification
The purpose of performance qualification (PQ) is to determine that the instrument is capable of meeting the user's require-
ments for all the parameters that may affect the quality of the measurement.
PROCEDURE
Mid-IR spectra can be measured by transmission, external reflection, internal reflection (often called attenuated total reflec-
tion), diffuse reflection, and photoacoustic spectroscopy. Different sample preparation techniques are available for these op-
tions. The most common sample preparation techniques are presented below.
Certain powdered alkali halides such as potassium bromide, potassium chloride, and caesium iodide coalesce under high
pressure and can be formed into self-supporting disks that are transparent to mid-IR radiation. The alkali halide most common-
ly used is powdered, dry, highly pure potassium bromide, which is transparent to mid-IR radiation above 400 cm−1.
Commercial presses and dies in a range of diameters are available for the preparation of alkali halide and similar disks.
A typical procedure to prepare a mull is to place 10–20 mg of the sample into an agate mortar, and then grind the sample
to a fine particle-size powder using a vigorous rotary motion of the pestle. A small drop of the mulling agent is added to the
mortar. Rotary motion of the pestle is used to mix the components into a uniform paste, which is transferred to the center of a
clean IR-transparent window (e.g., potassium bromide, sodium chloride, silver bromide, or caesium iodide). A second match-
ing window is placed on top of the mull, and the mull is squeezed to form a thin, translucent film that is free from bubbles.
The most widely used mulling agent for the mid-IR region is a saturated hydrocarbon mineral oil (liquid paraffin, Nujol).
The mid-IR transmission spectrum of many polymers used as packaging materials is at times recorded from samples pre-
pared as thin self-supporting films using hot compression molding or microtoming.
Capillary Films
Nonvolatile liquids can be examined neat in the form of a thin layer sandwiched between two matching windows that are
transparent to mid-IR radiation. The liquid layer must be free of bubbles and must completely cover the diameter of the IR
beam focused on the sample.
For the examination of liquid and solution samples, transmission cell assemblies that comprise a window pair, spacer, filling
ports, and a holder are available commercially in both macro- and micro-sample configurations.
For laboratory applications, spacers typically are formed from lead, poly(tetrafluoroethylene), or poly(ethylene terephthalate)
and can be supplied, depending on spacer materials, in standard thickness path lengths from approximately 6 µm to 1 mm or
larger.
Gases
Mid-IR transmission cells for static or flow-through gas and vapor sampling are available in a wide range of materials to suit
the application, from laboratory to process scale. In the laboratory, the traditional gas cell has been a 10 cm long cylinder
made from borosilicate glass or stainless steel with an approximately 40-mm aperture at each end. Each open end is covered
with an end cap that contains one of a pair of mid-IR–transparent windows constructed from, e.g., potassium bromide, zinc
selenide, or calcium fluoride.
Attenuated total reflectance spectroscopy relies on the optical phenomenon whereby radiation passing through a medium of high refractive index at a certain angle of incidence is entirely reflected internally at a boundary in contact with a material of lower refractive index. The medium of high refractive index is also known as the internal reflection element (IRE).
The sample under examination should be placed in close contact with the IRE such as diamond, germanium, zinc selenide,
or another suitable material of high refractive index. Ensure close and uniform contact between the substance and the whole
crystal surface by applying pressure in the case of solid samples or by dissolving the substance in an appropriate solvent and
then covering the IRE with the solution and evaporating to dryness.
Diffuse Reflection
The most important and commonly used form of sample preparation for diffuse reflection is to dilute the sample by inti-
mately mixing it with 90%–99% of nonabsorbing diluents such as finely powdered potassium bromide or potassium chloride.
The sample dilution has the added benefit of reducing absorption band intensities to an appropriate level.
Microscope Sampling
Coupling a light microscope with a mid-IR spectrophotometer allows spectra to be obtained from very small samples. Gen-
erally applied in transmittance or reflectance modes, it provides, for example, a powerful tool for obtaining spectroscopic data
of contaminants in pharmaceutical samples.
Validation
Validation is required when an IR method is intended for use as an alternative to the official procedure for testing an official
article.
The objective of an IR method validation is to demonstrate that the measurement is suitable for its intended purpose includ-
ing: quantitative determination of the main component in a drug substance or a drug product (Category I assays), quantitative
determination of impurities or limit tests (Category II), and identification tests (Category IV, see Table 2 in Validation of Compendial Procedures ⟨1225⟩). Depending on the category of the test, the validation process for IR may require the testing of line-
arity, range, accuracy, specificity, precision, detection limit, quantitation limit, and robustness. If the IR procedure employs a
chemometrics model calculated against the response of another analytical technology (e.g., HPLC), then the principles of
Near-Infrared Spectroscopy ⟨1119⟩, specifically the Method Validation section, are to be applied.
Chapter ⟨1225⟩ provides definitions and general guidance on analytical procedure validation without indicating specific validation criteria for each characteristic. The following sections are intended to provide the user with specific validation criteria that rep-
resent the minimum expectations for this technology. For each particular application, tighter criteria may be needed in order
to demonstrate suitability for the intended use.
ACCURACY
For Category I and II procedures, accuracy can be determined by conducting recovery studies with the appropriate matrix
spiked with known concentrations of the analyte. It is also an acceptable practice to compare assay results obtained using the
IR procedure under validation with those obtained from an established alternative analytical method.
Validation criteria: 98.0%–102.0% mean recovery for drug substances, 95.0%–105.0% mean recovery for drug product as-
say, and 70.0%–150.0% mean recovery for impurity analysis. These criteria are met throughout the intended range.
PRECISION
Repeatability: The analytical procedure is assessed by measuring the concentrations of six independently prepared sample
preparations at 100% of the assay test concentration. Alternatively it can be based on measurements of three replicates of
three separate sample solutions at different concentrations. The three concentrations should be close enough so that the re-
peatability is constant across the concentration range. If this is done, the repeatability at the three concentrations is pooled for
comparison to the acceptance criteria.
Validation criteria: The relative standard deviation is NMT 1.0% for drug substance, NMT 2.0% for drug product, and NMT
20.0% for impurity analysis.
Intermediate precision: The effect on analytical precision caused by changes in variables such as performing the analysis on
different days, using different instrumentation, or having the method performed by two or more analysts needs to be assessed.
At a minimum, any combination of at least two of these factors totaling six experiments will provide an estimation of inter-
mediate precision.
Validation criteria: The relative standard deviation is NMT 1.0% for drug substance, NMT 3.0% for drug product assay, and
NMT 25.0% for impurity analysis.
SPECIFICITY
For Category IV tests, the identity of the analyte should be ensured. Regarding Category I and II procedures, the accuracy
requirement also demonstrates specificity for the targeted analytes.
In the case of identification tests, the ability to select between compounds of closely related structure that are likely to be
present should be demonstrated. This should be confirmed by obtaining positive results (perhaps by comparison to a known
reference material) from samples containing the analyte, coupled with negative results from samples that do not contain the
analyte and by confirming that a positive response is not obtained from materials structurally similar to or closely related to the
analyte.
QUANTITATION LIMIT
The quantitation limit can be estimated by calculating the standard deviation of NLT 6 replicate measurements of a blank
preparation divided by the slope of the calibration line and multiplying by 10. Other suitable approaches can be used (see
⟨1225⟩). A measurement of a representative sample matrix spiked at the estimated quantitation limit concentration must be
performed to confirm accuracy.
Validation criteria: For the estimated quantitation limit to be considered valid, the measured concentration must be accu-
rate and precise at a level equal to or less than 50% of the specification.
LINEARITY
A linear relationship between the analyte concentration and the IR spectral response is demonstrated by preparing NLT 5
standard preparations at concentrations encompassing the anticipated concentration of the test preparation. The standard
curve should then be evaluated using appropriate statistical methods such as a least squares regression. For experiments that
do not have a linear relationship between analyte concentration and IR spectral response, appropriate statistical methods
should be applied to describe the analytical response.
Validation criteria: Correlation coefficient (R), NLT 0.995 for Category I assays and NLT 0.99 for Category II quantitative
tests.
RANGE
ROBUSTNESS
The reliability of an analytical measurement should be demonstrated by deliberate changes to experimental parameters. For
mid-IR this can include but is not limited to changes in sample preparation procedure or changes in hardware settings.
Verification
U.S. Current Good Manufacturing Practices regulations [21 CFR 211.194(a)(2)] indicate that users of analytical procedures
described in USP–NF are not required to validate these procedures if provided in a monograph. Instead, they must simply veri-
fy their suitability under actual conditions of use.
The objective of an IR procedure verification is to demonstrate that the method, as prescribed in specific monographs, is
being executed with suitable accuracy, sensitivity, and precision. Verification of Compendial Procedures ⟨1226⟩ notes that if the
verification of the compendial procedure, according to the monograph, is not successful, the procedure is not suitable for use
with the article under test. It may be necessary to develop and validate an alternative procedure as allowed in General Notices
and Requirements 6.30.
Although complete revalidation of a compendial procedure is not required, verification of the compendial Mid-IR procedure
includes the execution of certain critical parameters. When the method being verified is for identification purposes, specificity
is the only parameter required. For quantitative applications, additional validation parameters are studied. Typically these in-
clude accuracy, precision, and quantitation limit, as indicated in Validation.
INTRODUCTION
Ultraviolet-visible (UV-Vis) spectra are derived when the interaction between incident radiation and the electron cloud in a
chromophore results in an electronic transition involving the promotion of one or more of the outer shell or the bonding elec-
trons from a ground state into a state of higher energy. The UV and visible spectral bands of substances generally are broad
and do not possess a high degree of specificity for compound identification. Nevertheless, they are suitable for quantitative
assays and, for many substances, are useful as an additional means of identification.
In the Beer–Lambert law the absorbance (Aλ) of a solution at a given wavelength, λ, is defined as the logarithm to base 10 of the reciprocal of the transmittance (Tλ):

Aλ = log10(1/Tλ)

The quantity A(1%, 1 cm), which represents the specific absorbance of a dissolved substance, refers to the absorbance of a 10-g/L solution (1% m/v) in a 1-cm cell measured at a defined wavelength so that:

A(1%, 1 cm) = Aλ/(c × l)

where c is the concentration of the dissolved substance in g/100 mL and l is the path length in cm.
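A numeric sketch of the Beer–Lambert absorbance and specific-absorbance relationships described above, with hypothetical readings:

```python
# Sketch: A = log10(1/T), and the specific absorbance A(1%, 1 cm) = A/(c*l)
# with c in g/100 mL and l in cm (hypothetical transmittance and solution).
import math

def absorbance(transmittance):
    """A = log10(1/T)."""
    return math.log10(1.0 / transmittance)

def specific_absorbance(a, conc_g_per_100ml, path_cm):
    """A(1%, 1 cm) = A / (c * l)."""
    return a / (conc_g_per_100ml * path_cm)

a = absorbance(0.10)                        # T = 10% gives A = 1.0
a_1pct = specific_absorbance(a, 0.05, 1.0)  # 0.5 g/L = 0.05 g/100 mL, 1-cm cell
```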
The suitability of a specific instrument for a given procedure is ensured by a stepwise life cycle evaluation for the desired
application from selection to instrument retirement: design qualification (DQ); installation qualification (IQ); an initial perform-
ance-to-specification qualification, also known as operational qualification (OQ); and an ongoing performance qualification
(PQ). For more details, see Analytical Instrument Qualification ⟨1058⟩.
The purpose of this chapter is to provide test methodologies and acceptance criteria to ensure that the instrument is suitable
for its intended use (OQ), and that it will continue to function properly over extended time periods as part of PQ. As with any
spectrometric device, a UV-Vis spectrophotometer must be qualified for both wavelength (x-axis) and photometric (y-axis, or
signal axis) accuracy and precision, and the fundamental parameters of stray light and resolution must be established. OQ is
carried out across the operational ranges required within the laboratory for both the absorbance and wavelength scales.
Installation Qualification
The IQ requirements provide evidence that the hardware and software are properly installed in the desired location.
Operational Qualification
Acceptance criteria for critical instrument parameters that establish “fitness for purpose” are verified during IQ and OQ.
Specifications for particular instruments and applications can vary depending on the analytical procedure used and the desired
accuracy of the final result. Instrument vendors often have samples and test parameters available as part of the IQ/OQ pack-
age.
Wherever possible in the procedures detailed as follows, certified reference materials (CRMs) are to be used in preference to
laboratory-prepared solutions. These CRMs should be obtained from a recognized accredited source and include independent-
ly verified traceable value assignments with associated calculated uncertainty. CRMs must be kept clean and free from dust.
Recertification should be performed periodically to maintain the validity of the certification.
Control of Wavelengths
Ensure that the wavelength axis (x-axis) is accurate within acceptable limits over the intended operational range.
For non-diode array instruments, wavelength accuracy and precision are determined over the operational range using at
least six replicate measurements. For wavelength accuracy, the difference of the mean measured value to the certified value of
the CRM must be within ±1 nm in the UV region (200–400 nm), and in the visible region (400–700 nm) must be within ±2
nm. For wavelength precision, the standard deviation of the mean must not exceed 0.5 nm. For diode array instruments, only
one wavelength accuracy measurement is required, and no precision determination needs to be performed. The difference be-
tween the certified and measured value of the CRM must not exceed ±1 nm in the UV region (200–400 nm), and in the visible
region (400–700 nm) must not exceed ±2 nm.
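For a non-diode-array instrument, the replicate-based accuracy and precision check above can be sketched as follows. The six readings and the CRM value are hypothetical; the visible-region limits (±2 nm, NMT 0.5 nm) come from the text, and the UV region would use ±1 nm instead.

```python
# Sketch: wavelength accuracy and precision from six replicate readings of
# a CRM band certified at 536.6 nm (visible region, hypothetical data).
import statistics

readings  = [536.4, 536.7, 536.5, 536.6, 536.8, 536.5]  # nm, six replicates
certified = 536.6                                        # CRM certified value, nm

mean_reading = statistics.fmean(readings)
accuracy_ok  = abs(mean_reading - certified) <= 2.0      # visible: within ±2 nm
precision_ok = statistics.stdev(readings) <= 0.5         # chapter: NMT 0.5 nm
```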
This procedure is described as the primary application because the emission lines produced from a discharge lamp are char-
acteristic of the source element and, as a fundamental physical standard, these wavelengths have been measured with an un-
certainty of NMT ±0.01 nm. In solution spectrometry, the wavelength accuracy required rarely exceeds 0.5 nm. For these rea-
sons, the atomic line standard values are cited without uncertainty. The lamp needs to be placed at the source position in the
spectrophotometer; thus, it can be used only in spectrophotometers that can be operated in a single-beam intensity mode
and practically should be implemented only on a system designed to accommodate these sources, e.g., as an accessory.
A commonly employed low-pressure mercury lamp has a number of intense lines that cover a large part of the UV and visi-
ble spectra. Two deuterium lines from the source at 486.0 and 656.1 nm often are used by manufacturers as an internal cali-
bration check and can be used for diagnostic purposes (Table 1).1
Table 1. Recommended Atomic Lines from Low-Pressure Mercury and Deuterium Lamps for Wavelength Calibration Purposes
Element nm
Hg 253.7
Hg 296.7
Hg 365.0
Hg 404.7
Hg 435.8
D2 486.0
Hg 546.1
Hg 577.0
Hg 579.1
D2 656.1
This procedure uses solutions of rare earth oxides prepared by dissolution in acid media. The most frequently used is holmi-
um oxide in perchloric acid. Holmium oxide solution has been internationally accepted as an intrinsic wavelength standard,
and suitable CRMs are available commercially.2 The observed peak maxima are determined using the normal scan mode on
the spectrophotometer. The peak maxima for a 4% m/v solution of holmium oxide in perchloric acid at 1.0-nm spectral band-
width and a path length of 1 cm are shown in Table 2.3
Table 2. Recommended Peak Maxima from a 4% Solution of Holmium Oxide in Perchloric Acid for Wavelength Calibration
Purposes
nm
241.1
249.9
278.1
287.2
333.5
345.4
361.3
385.6
416.3
451.4
467.8
485.2
536.6
640.5
If the operational range of the spectrophotometer lies outside the range 240–650 nm, other certified rare earth oxides or
other solutions can be used if they are traceable to a national or international standard. Didymium (a mixture of neodymium
and praseodymium) is available as a traceable standard both in solution and as a glass. Didymium is similar in preparation to
the holmium materials and has useful peak characteristics in the 730–870 nm region. Useful peaks are found in the didymium
solution at approximately 731.6, 740.0, 794.1, 799.0, and 864.4 nm.
This procedure uses glasses manufactured by fusing the appropriate rare earth oxide in a base glass matrix. The most fre-
quently used is holmium, for which the reference wavelengths have been well defined. Although manufacturing can cause
batch variation in these glasses, traceable CRMs are commercially available and can be used. Typical values for a holmium
glass using a 1.0-nm spectral bandwidth are the following: 241.5, 279.2, 287.5, 333.8, 360.9, 418.8, 445.8, 453.7, 460.2,
536.5, and 637.7 nm.
Control of Absorbance
To establish the transmittance accuracy, precision, and linearity of a given system, it is necessary to verify the absorbance
accuracy of a system over its intended operational range by using the following procedures as appropriate for the wavelength
and absorbance ranges required.
In the 0–200 mg/L range, potassium dichromate solutions provide reference values of up to 3 absorbance units at one of the
certified values of 235, 257, 313, or 350 nm. These solutions are available as CRMs or can be prepared according to NIST from
SRM 935a. Using potassium dichromate solutions, the absorbance accuracy must be ±1%A (for values above 1.0A) or ±0.010A
(for values below 1.0A), whichever is larger. The absorbance precision can be determined as the standard deviation of at least
six replicate measurements at two or more absorbance levels over the operational range. The standard deviation must not ex-
ceed ±0.5%A (for values above 1.0A) or ±0.005A (for values below 1.0A), whichever is larger.
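The "whichever is larger" tolerance rule for the potassium dichromate check can be sketched numerically; the measured and certified absorbances below are hypothetical.

```python
# Sketch: absorbance accuracy tolerance of ±1%A or ±0.010A, whichever is
# larger, applied to hypothetical dichromate CRM readings.
def dichromate_tolerance(certified_a):
    """max(1% of the certified value, 0.010A)."""
    return max(0.01 * certified_a, 0.010)

def accuracy_ok(measured_a, certified_a):
    return abs(measured_a - certified_a) <= dichromate_tolerance(certified_a)

ok_high = accuracy_ok(2.015, 2.000)   # tolerance is ±0.020A at 2.000A -> passes
ok_low  = accuracy_ok(0.512, 0.500)   # tolerance is ±0.010A at 0.500A -> fails
```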
These gray glass filters are manufactured from doped glass and have a nominally flat spectrum in the region of the calibra-
tion wavelengths. They provide reference values of up to 3 absorbance units at the certified values of 440, 465, 546.1, 590,
and 635 nm. These filters are available as CRMs that are traceable to NIST SRM 930e, 1930, and 2930. Other certified stand-
ard solutions or optical filters can be used if they are traceable to a national or international standard. Using gray glass filters,
the absorbance accuracy must be ±0.8%A (for values above 1.0A) or ±0.0080A (for values below 1.0A), whichever is larger.
The absorbance precision can be determined as the standard deviation of at least six replicate measurements at two or more
absorbance levels over the operational range. The standard deviation must not exceed ±0.5%A (for values above 1.0A) or
±0.005A (for values below 1.0A), whichever is larger.
Control of Stray Light
Although the measurement of absorbance or transmittance is a ratio measurement of intensities and therefore theoretically
is independent of monochromatic source intensity, practical measurements are affected by the presence of unwanted radiation
called “stray radiant energy” or “stray light”. In addition, the adverse effect of stray light increases with aging of optical com-
ponents and lamps in a spectrophotometer. The effects are greater at the extremes of detector and lamp operational ranges.
Analysts must monitor the level of stray light at appropriate wavelength(s) as part of PQ. Stray light can be detected at a given
wavelength with a suitable liquid filter. These solutions are available as CRMs or can be prepared at the concentrations shown
in Table 3 by using reagent-grade materials.
Table 3. Spectral Ranges of Selected Materials for Monitoring Stray Light
Spectral Range
(nm) Liquid or Solution
190–205 Aqueous potassium chloride (12 g/L)
210–259 Aqueous sodium iodide or potassium iodide (10 g/L)
250–320 Acetone
300–385 Aqueous sodium nitrite (50 g/L)
When using a 5-mm path length cell (filled with the same filter) as the reference cell, and then measuring the 10-mm cell
over the required spectral range, analysts can calculate the stray light value from the observed maximum absorbance using the
formula:
Sλ = 0.25 × 10^(−2Aλ)
Aλ = observed maximum absorbance
Acceptance criteria: Sλ is ≤0.01, with Aλ ≥0.7A.
This procedure simply requires the 10-mm cell measurement to be referenced against the 5-mm cell (filled with the same
filter) and therefore can be achieved by either chronological or spatial referencing in any type of spectrophotometer. Alterna-
tively, analysts can measure the absorbance of the filters specified in Table 3 against the appropriate reference, and record the
maximum absorbance value. (An Sλ value of 0.01 is produced by an Aλ value of 0.7A, which equates to a maximum absorb-
ance value of 2A measured by this alternate procedure.) [NOTE—For some instruments where absorbance values greater than
3A cannot be reported directly, this procedure may require a two-step process whereby the sample beam initially is attenuated
by a 1- to 2-A filter, the value of which is measured and recorded. After zeroing the instrument with this filter in place, meas-
ure the stray-light filter, and again record the absorbance value. The estimated stray light value is now the sum of these two
absorbance readings.]
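The stray-light estimate is a direct evaluation of the formula above. A minimal sketch (function names are assumptions, not part of the chapter):

```python
def stray_light(a_max):
    """Estimate the stray light value S from the observed maximum
    absorbance A of the 10-mm cell referenced against the 5-mm cell:
    S = 0.25 * 10 ** (-2 * A)."""
    return 0.25 * 10 ** (-2 * a_max)

def stray_light_ok(a_max):
    # Acceptance criteria: S <= 0.01, which corresponds to A >= 0.7A.
    return stray_light(a_max) <= 0.01
```

Note that at exactly A = 0.7, S evaluates to about 0.00995, just inside the 0.01 limit, which is why 0.7A is the threshold absorbance.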
Resolution
If accurate absorbance measurements must be made on benzenoid compounds or other compounds with sharp absorption
bands (natural half-bandwidths of less than 15 nm), the spectral bandwidth of the spectrophotometer used should not be
greater than 1/8th the natural half-bandwidth of the compound's absorption.
Determine the resolution of the spectrophotometer by using the following procedure. Measure the ratio of the absorbance
of a 0.020% (v/v) solution of toluene in hexane (UV grade) at the maximum and minimum at about 269 and 266 nm, respec-
tively, using hexane as the reference. The absorbance ratio obtained depends on the spectral bandwidth of the instrument. For
most pharmacopeial quantitative purposes, a spectral bandwidth of 2 nm is sufficient, and the acceptance criteria for the ratio
is NLT 1.3.
The effect of spectral bandwidth and measurement temperature on the ratio is shown in Table 4.
Table 4. Spectral Bandwidth and Measurement Temperature
Suitable CRMs may also be used for this measurement. Alternatively, a suitable atomic line can be scanned in single-beam
mode, and the peak width at half peak height can be determined. This peak width at half peak height equates to the band-
width of the spectrophotometer.
Performance Qualification
The purpose of PQ is to determine that the instrument is capable of meeting the user's requirements for all the parameters
that may affect the quality of the measurement and to ensure that it will function properly over extended periods of time.
PROCEDURE
With few exceptions, compendial spectrophotometric tests and assays call for comparison against a USP Reference Standard.
This helps ensure measurement under identical conditions for the test specimen and the reference substance. These conditions
could include wavelength setting, spectral bandwidth selection, cell placement and correction, and transmittance levels. Cells
that exhibit identical transmittance at a given wavelength may differ considerably in transmittance at other wavelengths. Ap-
propriate cell corrections should be established and used where required.
Comparisons of a test specimen with a reference standard are best made at a peak of spectral absorption for the compound
concerned. Assays that prescribe spectrophotometry give the commonly accepted wavelength for peak spectral absorption of
the substance in question. Different spectrophotometers may show minor variation in the apparent wavelength of this peak.
Good practice demands that comparisons be made at the wavelength at which peak absorption occurs. Should this differ by
more than ±1 nm (in the range 200–400 nm) or ±2 nm (in the range 400–800 nm) from the wavelength specified in the
individual monograph, recalibration of the instrument may be indicated.
The expressions “similar preparation” and “similar solution” as used in tests and assays involving spectrophotometry indicate
that the reference comparator, generally a USP Reference Standard, should be prepared and observed in an identical manner
for all practical purposes to that used for the test specimen. Usually when analysts make up the solution of the specified refer-
ence standard, they prepare a solution of about (i.e., within 10%) the desired concentration, and they calculate the absorptivi-
ty on the basis of the exact amount weighed out. If a previously dried specimen of the reference standard has not been used,
the absorptivity is calculated on the anhydrous basis. The expressions “concomitantly determine” and “concomitantly meas-
ure” as used in tests and assays involving spectrophotometry indicate that the absorbances of both the solution containing the
test specimen and the solution containing the reference specimen, relative to the specified test blank, must be measured in
immediate succession.
For determinations using UV or visible spectrophotometry, the specimen generally is dissolved in a solvent. Unless otherwise
directed in the monograph, analysts make determinations at room temperature using a path length of 1 cm. Many solvents
are suitable for these ranges, including water, alcohols, lower hydrocarbons, ethers, and dilute solutions of strong acids and
alkalis. Precautions should be taken to use solvents that are free from contaminants that absorb in the spectral region under
examination. For the solvent, analysts typically should use water-free methanol or alcohol or alcohol denatured by the addition
of methanol but without benzene or other interfering impurities. Solvents of special spectrophotometric quality, guaranteed to
be free from contaminants, are available commercially from several sources. Some other analytical reagent-grade organic sol-
vents may contain traces of impurities that absorb strongly in the UV region. New lots of these solvents should be checked for
their transparency, and analysts should take care to use the same lot of solvent for preparation of the test solution, the stand-
ard solution, and the blank. The best practice is to use solvents that have NLT 40% transmittance (39.9%T = 0.399A) at the
wavelength of interest.
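The relation between transmittance and absorbance used in the parenthetical above (39.9%T = 0.399A) is A = −log10(T). As a small illustrative helper (the function name is an assumption):

```python
import math

def absorbance_from_transmittance(t_percent):
    """Convert percent transmittance to absorbance: A = -log10(T),
    with T expressed as a fraction. E.g., 39.9 %T corresponds to
    about 0.399 A, the 40%-transmittance guideline for solvents."""
    return -math.log10(t_percent / 100.0)
```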
Assays in the visible region usually call for concomitantly comparing the absorbance produced by the assay preparation with
that produced by a standard preparation containing approximately an equal quantity of a USP Reference Standard. In some
situations, analysts can omit the use of a reference standard (e.g., when spectrophotometric assays are made with routine fre-
quency) when a suitable standard curve is available and is prepared with the appropriate USP Reference Standard, and when
the substance assayed conforms to the Beer–Lambert law within the range of about 75%–125% of the final concentration
used in the assay. Under these circumstances, the absorbance found in the assay may be interpolated on the standard curve,
and the assay result can be calculated. Such standard curves should be confirmed frequently and always when a new spectro-
photometer or new lots of reagents are put into use.
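When a suitable standard curve is used in place of a concomitant reference standard, the assay result is obtained by interpolation on that curve. A minimal sketch, assuming a simple linear curve (the function name and parameterization are illustrative, not compendial):

```python
def assay_by_standard_curve(absorbance, slope, intercept=0.0):
    """Interpolate an observed assay absorbance on a standard curve
    (absorbance = slope * concentration + intercept) prepared with the
    appropriate USP Reference Standard. Valid only within the region
    (about 75%-125% of the final assay concentration) where conformance
    to the Beer-Lambert law has been demonstrated."""
    return (absorbance - intercept) / slope
```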
Validation
Validation is required when a UV-Vis method is intended for use as an alternative to the official procedure for testing an
official article.
The objective of UV-Vis method validation is to demonstrate that the measurement is suitable for its intended purpose, in-
cluding quantitative determination of the main component in a drug substance or a drug product (Category I assays), quanti-
tative determination of impurities or limit tests (Category II), and identification tests (Category IV). Depending on the category
of the test (see Table 2 in Validation of Compendial Procedures á1225ñ), the analytical method validation process for UV-Vis re-
quires testing for linearity, range, accuracy, specificity, precision, detection limit, quantitation limit, and robustness. These ana-
lytical performance characteristics apply to externally standardized procedures and those that use standard additions.
Chapter á1225ñ provides definitions and general guidance on analytical procedures validation without indicating specific val-
idation criteria for each characteristic. The intention of the following sections is to provide the user with specific validation cri-
teria that represent the minimum expectations for this technology. For each particular application, tighter criteria may be nee-
ded in order to demonstrate suitability for the intended use.
ACCURACY
For Category I, II, and III procedures, accuracy can be determined by conducting recovery studies with the appropriate ma-
trix spiked with known concentrations of the analyte. Analysts also can compare assay results obtained using the UV-Vis proce-
dure under validation to those from an established analytical procedure.
Validation criteria: 98.0%–102.0% mean recovery for the drug substances, 95.0%–105.0% mean recovery for the drug
product assay, and 80.0%–120.0% mean recovery for the impurity analysis. These criteria are met throughout the intended
range.
Precision
REPEATABILITY
The repeatability of the analytical procedure is assessed by measuring the concentrations of six independently prepared sam-
ple solutions at 100% of the assay test concentration. Alternatively, it can be assessed by measuring the concentrations of
three replicates of three separate sample solutions at different concentrations. The three concentrations should be close
enough so that the repeatability is constant across the concentration range. If this is done, the repeatability at the three con-
centrations is pooled for comparison to the acceptance criteria.
Validation criteria: The relative standard deviation is NMT 1.0% for the drug substance, NMT 2.0% for the drug product
assay, and NMT 20.0% for the impurity analysis.
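The repeatability criterion is a percent relative standard deviation computed from the measured concentrations. An illustrative sketch (function names are assumptions):

```python
import statistics

def relative_standard_deviation(values):
    """Percent RSD: sample standard deviation divided by the mean, x 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

def repeatability_ok(concentrations, limit_pct):
    """Compare the %RSD of independently prepared sample solutions
    (six at 100% of the test concentration, or pooled 3 x 3 designs)
    against the criterion, e.g., 1.0 for a drug substance or 2.0 for a
    drug product assay."""
    return relative_standard_deviation(concentrations) <= limit_pct
```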
INTERMEDIATE PRECISION
The effect of random events on the analytical precision of the method must be established. Typical variables include per-
forming the analysis on different days, using different instrumentation, and/or having the method performed by two or more
analysts. At a minimum, any combination of at least two of these factors totaling six experiments will provide an estimation of
intermediate precision.
Validation criteria: The relative standard deviation is NMT 1.5% for the drug substance, NMT 3.0% for the drug product
assay, and NMT 25.0% for the impurity analysis.
SPECIFICITY
In UV-Vis measurements, specificity is ensured by the use of a reference standard wherever possible and is demonstrated by
the lack of interference from other components present in the matrix.
DETECTION LIMIT
The detection limit (DL) can be estimated by calculating the standard deviation of NLT 6 replicate measurements of a blank
solution and multiplying by 3.3. Alternatively, the standard deviation can be determined from the error of the intercept from a
calibration curve or by determining that the signal-to-noise ratio is >3.3. The estimated DL must be confirmed by analyzing
samples at the calculated concentration.
QUANTITATION LIMIT
The quantitation limit (QL) can be estimated by calculating the standard deviation of NLT 6 replicate measurements of a
blank solution and multiplying by 10. Alternatively, the standard deviation can be determined from the error of the intercept
from a calibration curve or by determining that the signal-to-noise ratio is >10.
Measurement of a test solution prepared from a representative sample matrix spiked at the required QL concentration must
be performed to confirm sufficient sensitivity and adequate precision. The observed signal-to-noise ratio at the required QL
should be >10. [NOTE—A suitable procedure for measuring the signal-to-noise ratio is given in ASTM E1657-98 (2006) Standard
Practice for the Testing of Variable-Wavelength Photometric Detectors Used in Liquid Chromatography.]
Validation criteria: For the estimated limit of quantitation to be considered valid, the measured concentration must be
accurate and precise at a level ≤50% of the specification.
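The blank-replicate estimates of DL and QL described above differ only in the multiplier (3.3 versus 10). As an illustrative sketch (function names and the sensitivity parameter, the calibration slope used to convert signal to concentration, are assumptions):

```python
import statistics

def detection_limit(blank_measurements, sensitivity):
    """DL estimate: 3.3 x the standard deviation of >= 6 blank replicate
    measurements, converted to concentration units via the calibration
    slope (sensitivity, signal per unit concentration)."""
    if len(blank_measurements) < 6:
        raise ValueError("at least six blank replicates are required")
    return 3.3 * statistics.stdev(blank_measurements) / sensitivity

def quantitation_limit(blank_measurements, sensitivity):
    """QL estimate: 10 x the standard deviation of the blank replicates,
    converted to concentration units in the same way."""
    if len(blank_measurements) < 6:
        raise ValueError("at least six blank replicates are required")
    return 10.0 * statistics.stdev(blank_measurements) / sensitivity
```

Either estimate must then be confirmed experimentally by analyzing samples at the calculated concentration.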
LINEARITY
A linear relationship between the analyte concentration and UV-Vis response must be demonstrated by preparation of NLT
five standard solutions at concentrations encompassing the anticipated concentration of the test solution. The standard curve
is then evaluated using appropriate statistical methods such as a least-squares regression. Deviation from linearity results from
either instrumental or sample factors, or both, and can be reduced to acceptable levels by reducing the analyte concentration
and thereby the associated absorbance values.
Validation criteria: The correlation coefficient (R) must be NLT 0.995 for Category I assays and NLT 0.99 for Category II
quantitative tests.
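The linearity criterion is evaluated from a least-squares fit of response versus concentration. A minimal sketch of the correlation-coefficient calculation (function names are illustrative):

```python
def correlation_coefficient(x, y):
    """Pearson correlation coefficient R for a standard curve of NLT
    five concentrations (x) versus instrument response (y)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def linearity_ok(x, y, category="I"):
    # NLT 0.995 for Category I assays; NLT 0.99 for Category II tests.
    minimum = 0.995 if category == "I" else 0.99
    return correlation_coefficient(x, y) >= minimum
```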
RANGE
The operational range of an analytical instrument (and the analytical procedure as a whole) is the interval between the up-
per and lower concentrations (amounts) of analyte in the sample (including these concentrations) for which it has been dem-
onstrated that the instrumental response function has a suitable level of precision, accuracy, and linearity.
Validation criteria: For Category I tests, the validation range for 100.0% centered acceptance criteria is 80.0%–120.0%. For
noncentered acceptance criteria, the validation range is 10.0% below the lower limit to 10.0% above the upper limit. For con-
tent uniformity, the validation range is 70.0%–130.0%. For Category II tests, the validation range covers 50.0%–120.0% of
the acceptance criteria.
ROBUSTNESS
The reliability of an analytical measurement is demonstrated by deliberate changes to experimental parameters. For UV-Vis
this can include measuring the stability of the analyte under specified storage conditions, varying pH, and adding possible in-
terfering species, to list a few examples. Robustness is determined concurrently using a suitable design for the experimental
procedure.
For certain UV-Vis procedures, chromogenic reactions are employed. Generally the requirements for the analytical perform-
ance characteristics are used. In some instances, the required accuracy and precision criteria for the direct measurements may
not be achievable. Under these circumstances, the accuracy and precision requirements can be widened by as much as 50%.
Any such widening must be justified on scientific grounds and with documented evidence. It may be necessary to increase the
amount of replication required to produce a scientifically sound reportable value.
Verification
Current US Good Manufacturing Practices regulations [21 CFR 211.194(a)(2)] indicate that users of analytical procedures
described in USP–NF are not required to validate these procedures if provided in a monograph. Instead, they simply must veri-
fy their suitability under actual conditions of use.
The objective of a UV-Vis procedure verification is to demonstrate the suitability of a test procedure under actual conditions
of use. Performance characteristics that verify the suitability of a UV-Vis procedure are similar to those required for any analyti-
cal procedure. A discussion of the applicable general principles is found in Verification of Compendial Procedures á1226ñ. Verifica-
tion is usually performed using a reference material and a well-defined matrix. Verification of compendial UV-Vis procedures
includes at minimum the execution of the validation parameters for specificity, accuracy, precision, and quantitation limit,
when appropriate, as indicated under Validation.
INTRODUCTION
Precisely determined thermodynamic events, such as a change of state, can indicate the identity and purity of drugs. Com-
pendial standards have long been established for the melting or boiling temperatures of substances. These transitions occur at
characteristic temperatures, and the compendial standards therefore contribute to the identification of the substances. Because
impurities affect these changes in predictable ways, the same compendial standards contribute to the control of the purity of
the substances.
Thermal analysis in the broadest sense is the measurement of physical–chemical properties of materials as a function of tem-
perature. Instrumental methods have largely supplanted older methods dependent on visual inspection and measurements un-
der fixed or arbitrary conditions, because they are objective, provide more information, afford permanent records, and are
generally more sensitive, precise, and accurate. Furthermore, they may provide information on desolvation, dehydration, de-
composition, crystal perfection, polymorphism, melting temperature, sublimation, glass transitions, evaporation, pyrolysis, sol-
id-solid interactions, and purity. Such data are useful in the characterization of substances with respect to compatibility, stabili-
ty, packaging, and quality control. The measurements used most often in thermal analysis, i.e., transition and melting point
temperatures by differential scanning calorimetry (DSC), thermogravimetric analysis, hot-stage microscopy, and eutectic im-
purity analysis, are described here.
As a specimen is heated, transitions can be observed using differential scanning calorimetry (DSC), differential thermal analy-
sis (DTA), or hot-stage microscopy. In heat-flux DSC, the heat differential between the sample and reference material is deter-
mined. In power compensation DSC, the sample and reference materials are maintained at the same temperature, using indi-
vidual heating elements, and the difference in power input to the two heaters is recorded. DTA monitors the difference in tem-
peratures between the sample and the reference. The transitions that may be observed include those shown in Table 1 below.
In the case of melting, both an “onset” and a “peak” temperature can be determined objectively and reproducibly, often to
within a few tenths of a degree. Although these temperatures are useful for characterizing substances, and the difference be-
tween the two temperatures is indicative of purity, the values cannot be directly compared to visual “melting-range” or “melt-
ing-point” values or with constants such as the triple point of the pure material.
Furthermore, caution should be used when comparing results obtained by different methods of analysis. Optical methods
may measure the melting point as the temperature where the last trace of solid coalesces. In contrast, melting points meas-
ured by DSC may refer to the onset temperature or the temperature where the maximum melting rate (vertex) was observed.
However, the vertex is sensitive to sample weight, heating rate, and other factors, whereas the onset temperature is less affec-
ted by these factors. With thermal techniques, it is necessary to consider the limitations of solid solution formation, insolubility
in the melt, polymorphism, and decomposition during the analysis.
Table 1
Transition         Process                    Heat Flow
Solid to liquid    Melting                    Endothermic
Liquid to gas      Vaporization               Endothermic
Liquid to solid    Freezing                   Exothermic
                   Crystallization            Exothermic
Solid to gas       Sublimation                Endothermic
Solid to solid     Glass transition           Second-order event
                   Desolvation                Endothermic
                   Amorphous to crystalline   Exothermic
                   Polymorphic                Endothermic or exothermic
Reporting Results of Instrumental Methods—A complete description of the conditions employed should accompany
each thermogram, including make and model of instrument; record of last calibration; specimen size and identification (in-
cluding previous thermal history); container; identity, flow rate, and pressure of gaseous atmosphere; direction and rate of
temperature change; and instrument and recorder sensitivity.
Apparatus—Use DTA or DSC instrumentation equipped with a temperature-programming device, thermal detector(s), and
a recording system that can be connected to a computer, unless otherwise prescribed by the specific monograph for which
this chapter is being used.
Calibration—Calibrate instrumentation for temperature and enthalpy changes, using indium or another suitable certified
material. Temperature calibration is performed by heating a standard through the melting transition and comparing the ex-
trapolated onset of melting point of the standard to the certified onset of melting point. The temperature calibration should
be conducted at the same heating rate as the experiment. Enthalpy calibration is performed by heating a standard through
the melting transition and comparing the calculated heat of fusion to the theoretical value.
Procedure—Accurately weigh an appropriate quantity of the substance to be examined in the sample pan, as described in
the specific monograph. Set initial temperature, heating rate, direction of temperature change, and final temperature as speci-
fied in the monograph. If not specified in the monograph, these parameters are determined as follows: make a preliminary
examination over a wide range of temperatures (typically, room temperature to decomposition temperature, or about 10° to
20° above the melting point) and over a wide range of heating rates (1° to 20° per minute) to reveal any unexpected effects.
Then determine a lower heating rate such that decomposition is minimized and the transition temperature is not compro-
mised. Determine a temperature range bracketing the transition of interest such that the baseline can be extended to intersect
with the tangent of the melt (see Figure 1).
Figure 1. Thermogram.
In examining pure crystalline materials, rates as low as 1° per minute may be appropriate, whereas rates of up to 20° per
minute are more appropriate for polymeric and other semicrystalline materials. Begin the analysis, and record the differential
thermal analysis curve with the temperature on the x-axis and the energy change on the y-axis. The melting temperature (melt
onset temperature) is the intersection (188.79°) of the extension of the baseline with the tangent at the point of greatest slope
(inflexion point) of the curve (see Figure 1). The vertex is the temperature at the peak of the curve (190.31°). The enthalpy of
the event is proportional to the area under the curve after application of a baseline correction.
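Geometrically, the onset construction described above is the intersection of two straight lines: the extrapolated baseline and the tangent at the inflection point. As an illustrative sketch (the function name and line parameterization are assumptions, not part of this chapter):

```python
def onset_temperature(baseline, tangent):
    """Melt onset temperature: the intersection of the extrapolated
    baseline with the tangent drawn at the point of greatest slope
    (inflection point) of the melting endotherm. Each line is given as
    (intercept, slope) in signal-versus-temperature coordinates."""
    (b0, b1), (t0, t1) = baseline, tangent
    if t1 == b1:
        raise ValueError("baseline and tangent are parallel")
    return (b0 - t0) / (t1 - b1)
```

For example, a flat baseline at zero signal and a tangent crossing zero at 188.79° with slope −2 reproduce the onset value quoted in Figure 1.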
THERMOGRAVIMETRIC ANALYSIS
Thermogravimetric analysis involves the determination of the mass of a specimen as a function of temperature, or time of
heating, or both. It is often used to investigate dehydration/desolvation processes and compound decomposition. When ther-
mogravimetry is properly applied, it provides more useful information than loss on drying at fixed temperature, often for a
fixed time and in what is usually an ill-defined atmosphere. Usually, loss of surface-absorbed solvent can be distinguished from
solvent in the crystal lattice and from degradation losses. The measurements can be carried out in atmospheres having control-
led humidity and oxygen concentration to reveal interactions with the drug substance, between drug substances, and be-
tween active substances and excipients or packaging materials.
Apparatus—While the details depend on the manufacturer, the essential features of the equipment are a recording balance
and a programmable heat source. Equipment differs in the ability to handle specimens of various sizes, the means of sensing
specimen temperature, and the range of atmosphere control.
Calibration—Calibration is required with all systems: that is, the mass scale is calibrated by the use of standard weights,
and calibration of the temperature scale involves the use of standard materials, because it is assumed that the specimen tem-
perature is the furnace temperature. Weight calibration is conducted by measuring the mass of a certified or standard weight
and comparing the measured mass with the certified value. Temperature calibration is performed by analyzing a high-purity
magnetic standard such as nickel for its curie temperature and comparing the measured value to the theoretical value.
Procedure—Apply the method to the sample, using the conditions described in the monograph, and calculate the mass
gain or loss, expressing the change in mass as percentage. Alternatively, place a suitable quantity of material in the sample
holder, and record the mass. Because the test atmosphere is critical, the pressure or flow rate and the composition of the gas
are specified. Set the initial temperature, heating rate, and final temperature according to the manufacturer’s instructions, and
initiate the temperature increase. Alternatively, conduct an examination of the thermogram over a wide range of temperatures
(typically, from room temperature to the decomposition temperature, or 10° to 20° above the melting point at a heating rate
of 1° to 20° per minute). Calculate the mass gain or loss, expressing the change in mass as percentage.
HOT-STAGE MICROSCOPY
Hot-stage microscopy is an analytical technique that involves monitoring the optical properties of the sample using a micro-
scope as a function of temperature. Hot-stage microscopy may be used as a complementary technique to other thermal analy-
sis techniques such as DSC, DTA, and variable temperature X-ray powder diffraction for the solid-state characterization of
pharmaceutical compounds. It is useful to confirm transitions such as melts, recrystallizations, and solid-state transformations
using a visual technique. The hot-stage microscope must be calibrated for temperature.
The basis of any calorimetric purity method is the relationship between the melting and freezing point depression and the
level of impurity. The melting of a compound is characterized by the absorption of latent heat of fusion, ΔHf, at a specific
temperature, To. In theory, a melting transition for an absolutely pure crystalline compound should occur within an infinitely
narrow range. A broadening of the melting range, due to impurities, provides a sensitive criterion of purity. The effect is apparent
visually by examination of thermograms of specimens differing by a few tenths percent in impurity content. A material that is
99% pure is about 20% molten at 3° below the melting point of the pure material (see Figure 2).
Figure 2. Superimposed thermograms illustrating the effect of impurities on DSC melting peak shape.
The parameters of melting (melting range, ΔHf, and calculated eutectic purity) are readily obtained from the thermogram of
a single melting event using a small test specimen, and the method does not require multiple, precise actual temperature
measurements. Thermogram units are directly convertible to heat transfer, millicalories per second.
The lowering of the freezing point in dilute solutions by molecules of nearly equal size is expressed by a modified van't Hoff
equation:

dT/dX2 = −(RT²/ΔHf) × 1/(1 − KD)

in which T = absolute temperature in kelvins; X2 = mole fraction of minor component (solute, impurity); ΔHf = molar heat of
fusion of the major component in joules per mole; R = gas constant in joules per mole per kelvin; and KD = distribution ratio of
solute between the solid and liquid phases.
Assuming that the temperature range is small and that no solid solutions are formed (KD = 0), integration of the van't Hoff
equation yields the following relationship between the mole fraction of impurity and the melting-point depression:

X2 = ΔHf(To − Tm)/(RTo²)   (2)

in which To = melting point of the pure compound, in kelvins, and Tm = melting point of the test specimen, in kelvins.
With no solid solution formation, the concentration of impurity in the liquid phase at any temperature during the melting is
inversely proportional to the fraction melted at that temperature, and the melting-point depression is directly proportional to
the mole fraction of impurity. A plot of the observed test specimen temperature, Ts, versus the reciprocal of the fraction
melted, 1/F, at temperature Ts, should yield a straight line with the slope equal to the melting-point depression (To − Tm). The
theoretical melting point of the pure compound is obtained by extrapolation to 1/F = 0:

Ts = To − (To − Tm)(1/F)
Substituting the experimentally obtained values for To − Tm, ΔHf, and To in equation (2) yields the mole fraction of the total
eutectic impurity, which, when multiplied by 100, gives the mole percentage of total eutectic impurities.
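The substitution into equation (2) is a one-line calculation. As an illustrative sketch (the function name and example values are assumptions; KD = 0 is required, as stated above):

```python
def mole_percent_impurity(t0, tm, dhf, r=8.314):
    """Total eutectic impurity, in mole percent, from the van't Hoff
    melting-point depression: X2 = dHf * (To - Tm) / (R * To**2), with
    To (pure-compound melting point) and Tm (specimen melting point) in
    kelvins and dHf in J/mol. Valid only when no solid solutions form
    (KD = 0)."""
    x2 = dhf * (t0 - tm) / (r * t0 ** 2)
    return 100.0 * x2
```

For example, a 0.3 K depression at To = 462 K with ΔHf = 25 kJ/mol corresponds to roughly 0.4 mol% total eutectic impurity.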
Deviations from the theoretical linear plot also may be due to solid solution formation (KD ≠ 0), so that care must be taken in
interpreting the data.
To observe the linear effect of the impurity concentration on the melting-point depression, the impurity must be soluble in
the liquid phase or melt of the compound, but insoluble in the solid phase, i.e., no solid solutions are formed. Some chemical
similarities are necessary for solubility in the melt. For example, the presence of ionic compounds in neutral organic com-
pounds and the occurrence of thermal decomposition may not be reflected in purity estimates. The extent of these theoretical
limitations has been only partially explored.
Impurities present from the synthetic route often are similar to the end product, hence there usually is no problem of solubil-
ity in the melt. Impurities consisting of molecules of the same shape, size, and character as those of the major component can
fit into the matrix of the major component without disruption of the lattice, forming solid solutions or inclusions; such impuri-
ties are not detectable by DSC. Purity estimates are too high in such cases. This is more common with less-ordered crystals as
indicated by low heats of fusion.
In addition, the method is reliable when the purity of the major component is greater than 98.5 mol% and the materials are
not decomposed during the melting phase.
Impurity levels calculated from thermograms are reproducible and generally reliable within 0.1% for ideal compounds.
Compounds that exist in polymorphic form cannot be used in purity determination unless the compound is completely con-
verted to one form. On the other hand, DSC and DTA are inherently useful for detecting, and therefore monitoring, polymor-
phism.
Procedure—The actual procedure and the calculations to be employed for eutectic impurity analysis are dependent on the
particular instrument used. Consult the manufacturer's literature and/or the thermal analysis literature for the most appropriate
technique for a given instrument. In any event, it is imperative to keep in mind the limitations of solid solution formation, in-
solubility in the melt, polymorphism, and decomposition during the analysis.
This general chapter is harmonized with the corresponding texts of the European Pharmacopoeia and the Japanese Pharmaco-
poeia. Portions of the general chapter text that are national USP text, and are not part of the harmonized text, are marked with
symbols (♦♦) to specify this fact.
♦NOTE—In this chapter, unit and dosage unit are synonymous.♦
To ensure the consistency of dosage units, each unit in a batch should have a drug substance content within a narrow range
around the label claim. Dosage units are defined as dosage forms containing a single dose or a part of a dose of drug sub-
stance in each unit. The uniformity of dosage units specification is not intended to apply to suspensions, emulsions, or gels in
unit-dose containers intended for external, cutaneous administration.
The term “uniformity of dosage unit” is defined as the degree of uniformity in the amount of the drug substance among
dosage units. Therefore, the requirements of this chapter apply to each drug substance being comprised in dosage units con-
taining one or more drug substances, unless otherwise specified elsewhere in this Pharmacopeia.
The uniformity of dosage units can be demonstrated by either of two methods, Content Uniformity or ♦Weight♦ Variation (see
Table 1). The test for Content Uniformity of preparations presented in dosage units is based on the assay of the individual con-
tent of drug substance(s) in a number of dosage units to determine whether the individual content is within the limits set. The
Content Uniformity method may be applied in all cases.
The test for ♦Weight♦ Variation is applicable for the following dosage forms:
The test for Content Uniformity is required for all dosage forms not meeting the above conditions for the ♦Weight♦ Variation
test.1
Table 1. Application of Content Uniformity (CU) and Weight Variation (WV) Tests for Dosage Forms

                                                                        Dose and Ratio of Drug Substance
Dosage Form          Type                 Subtype                       ≥25 mg and ≥25%    <25 mg or <25%
Tablets              Uncoated                                           WV                 CU
                     Coated               Film                          WV                 CU
                                          Others                        CU                 CU
Capsules             Hard                                               WV                 CU
                     Soft                 Suspension, emulsion,
                                          or gel                        CU                 CU
                                          Solutions                     WV                 WV
Solids in single-    Single component                                   WV                 WV
unit containers      Multiple components  Solution freeze-dried
                                          in final container            WV                 WV
                                          Others                        CU                 CU
Solutions in unit-dose containers ♦and into soft capsules♦              WV                 WV
Others                                                                  CU                 CU
CONTENT UNIFORMITY
Select not fewer than 30 units, and proceed as follows for the dosage form designated.
Where different procedures are used for assay of the preparation and for the Content Uniformity test, it may be necessary to
establish a correction factor to be applied to the results of the latter.
Assay 10 units individually using an appropriate analytical method. Calculate the acceptance value (see Table 2).
Assay 10 units individually using an appropriate analytical method. Carry out the assay on the amount of well-mixed materi-
al that is removed from an individual container in conditions of normal use, and express the results as delivered dose. Calculate
the acceptance value (see Table 2).
1 ♦European Pharmacopoeia and Japanese Pharmacopoeia text not accepted by the United States Pharmacopeia: Alternatively, products listed in item (4) above
that do not meet the 25 mg/25% threshold limit may be tested for uniformity of dosage units by Mass Variation instead of the Content Uniformity test if the
concentration relative standard deviation (RSD) of the drug substance in the final dosage units is not more than 2%, based on process validation data and develop-
ment data, and if there has been regulatory approval of such a change. The concentration RSD is the RSD of the concentration per dosage unit (w/w or w/v),
where concentration per dosage unit equals the assay result per dosage unit divided by the individual dosage unit weight. See the RSD formula in Table 2.♦
Official text. Reprinted from First Supplement to USP38-NF33.
DSC Physical Tests / á905ñ Uniformity of Dosage Units 359
Table 2

Variable         Definition                                             Conditions                 Value
X̄                Mean of individual contents (c1, c2, …, cn),
                 expressed as a percentage of the label claim
c1, c2, …, cn    Individual contents of the units tested,
                 expressed as a percentage of the label claim
n                Sample size (number of units in a sample)
k                Acceptability constant                                 If n = 10, then k = 2.4
                                                                        If n = 30, then k = 2.0
s                Sample standard deviation
♦WEIGHT♦ VARIATION
Carry out an assay for the drug substance(s) on a representative sample of the batch using an appropriate analytical meth-
od. This value is result A, expressed as percentage of label claim (see Calculation of Acceptance Value). Assume that the concen-
tration (weight of drug substance per weight of dosage unit) is uniform. Select not fewer than 30 dosage units, and proceed
as follows for the dosage form designated.
Accurately weigh 10 tablets individually. Calculate the content, expressed as percentage of label claim, of each tablet from
the ♦weight♦ of the individual tablet and the result of the Assay. Calculate the acceptance value.
Hard Capsules
Accurately weigh 10 capsules individually, taking care to preserve the identity of each capsule. Remove the contents of each
capsule by a suitable means. Accurately weigh the emptied shells individually, and calculate for each capsule the net ♦weight♦
of its contents by subtracting the ♦weight♦ of the shell from the respective gross ♦weight♦. Calculate the drug substance content
of each capsule from the ♦net weight♦ of the individual capsule ♦content♦ and the result of the Assay. Calculate the acceptance
value.
Soft Capsules
Accurately weigh 10 intact capsules individually to obtain their gross ♦weights♦, taking care to preserve the identity of each
capsule. Then cut open the capsules by means of a suitable clean, dry cutting instrument such as scissors or a sharp open
blade, and remove the contents by washing with a suitable solvent. Allow the occluded solvent to evaporate from the shells at
room temperature over a period of about 30 minutes, taking precautions to avoid uptake or loss of moisture. Weigh the indi-
vidual shells, and calculate the net contents. Calculate the drug substance content in each capsule from the ♦weight♦ of prod-
uct removed from the individual capsules and the result of the Assay. Calculate the acceptance value.
Proceed as directed for Hard Capsules, treating each unit as described therein. Calculate the acceptance value.
Accurately weigh the amount of liquid that is removed from each of 10 individual containers in conditions of normal use. If
necessary, compute the equivalent volume after determining the density. Calculate the drug substance content in each con-
tainer from the mass of product removed from the individual containers and the result of the Assay. Calculate the acceptance
value.
Calculate the acceptance value as shown in Content Uniformity, except that the individual contents of the units are replaced
with the individual estimated contents defined below.
c1, c2, …, cn = individual estimated contents of the units tested, where ci = wi × A/W
w1, w2, …, wn = individual ♦weights♦ of the units tested
A = content of drug substance (% of label claim) obtained using an appropriate analytical method
W = mean of individual ♦weights♦ (w1, w2, …, wn)
CRITERIA
The requirements for dosage uniformity are met if the acceptance value of the first 10 dosage units is less than or equal to
L1%. If the acceptance value is > L1%, test the next 20 units, and calculate the acceptance value. The requirements are met if
the final acceptance value of the 30 dosage units is ≤ L1%, and no individual content of ♦any♦ dosage unit is less than [1 −
(0.01)(L2)]M nor more than [1 + (0.01)(L2)]M ♦as specified♦ in the Calculation of Acceptance Value under Content Uniformity or
under ♦Weight♦ Variation. Unless otherwise specified, L1 is 15.0 and L2 is 25.0.
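As a worked illustration of these criteria, the sketch below computes a first-stage acceptance value in Python. It assumes the usual harmonized form AV = |M − X̄| + ks with M = X̄ whenever 98.5% ≤ X̄ ≤ 101.5% (the case of a 100% target content); the ten individual contents are hypothetical.

```python
import statistics

def acceptance_value(contents, k):
    """Sketch of AV = |M - Xbar| + k*s, assuming a target content T <= 101.5%."""
    xbar = statistics.mean(contents)   # mean content, % of label claim
    s = statistics.stdev(contents)     # sample standard deviation (n - 1 divisor)
    if xbar < 98.5:
        m = 98.5
    elif xbar > 101.5:
        m = 101.5
    else:
        m = xbar                       # AV reduces to k*s
    return abs(m - xbar) + k * s

# First stage: 10 units, k = 2.4; requirements met if AV <= L1 = 15.0
contents = [98.2, 101.1, 99.6, 100.4, 97.8, 102.0, 99.1, 100.9, 98.7, 101.3]
av = acceptance_value(contents, k=2.4)
print(round(av, 2))
```

If the first-stage value exceeded L1, the same function would be applied to the individual contents of all 30 units with k = 2.0.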
Other viscometers may be used provided that the accuracy and precision is NLT that obtained with the viscometers descri-
bed in this chapter.
Procedure: Fill the viscometer through tube (L) with a sufficient quantity of the sample liquid that is appropriate for the
viscometer being used or by following the manufacturer’s instructions. Carry out the experiment with the tube in a vertical
position. Fill bulb (A) with the liquid, and also ensure that the level of liquid in bulb (B) is below the exit to the ventilation
tube (M). Immerse the viscometer in a water or oil bath stabilized at the temperature specified in the individual mono-
graph, and control the temperature to ±0.1°, unless otherwise specified in the individual monograph. Maintain the viscom-
eter in a vertical position for a time period of NLT 30 min to allow the sample temperature to reach equilibrium. Close tube
(M), and raise the level of the liquid in tube (N) to a level about 8 mm above mark (E ≡ h1). Keep the liquid at this level by
closing tube (N) and opening tube (M). Open tube (N), and measure the time required for the level of the liquid to drop
from mark (E ≡ h1) to mark (F ≡ h2), using an appropriate accurate timing device. [NOTE—The minimum flow time should
be 200 s.]
Calibration: Calibrate each viscometer at the test temperature by using fluids of known viscosities of appropriate viscosity
standards to determine the viscometer constant, k. The viscosity values of the calibration standards should bracket the ex-
pected viscosity value of the sample liquid.
Calculate the viscometer constant, k, in mm²/s²:
k = η/(ρ × t)
η = known viscosity of the liquid (mPa · s)
ρ = density of the liquid (g/mL)
t = flow time for the liquid to pass from the upper mark to the lower mark (s)
Calculation of kinematic and Newtonian viscosities of sample fluid: A capillary viscometer is chosen so that the flow
time, t, is NLT 200 s, and the kinetic energy correction is typically less than 1%. If the viscometer constant, k, is known,
use the following equation to calculate the kinematic viscosity, ν, in mm²/s, from the flow time, t, in s.
ν = k × t
If the density of the fluid is known at the temperature of the viscosity measurement, then the Newtonian viscosity, η, in
mPa · s, is calculated:
η = ν × ρ
ρ = density of the fluid (g/mL)
The flow time of the fluid under examination is the mean of NLT three consecutive determinations. The result is valid if the
percentage of the relative standard deviation (%RSD) for the three readings is NMT 2.0%.
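The Method I arithmetic (calibration constant, kinematic and Newtonian viscosities, and the %RSD validity check on replicate flow times) can be sketched as follows; all fluid properties and flow times are hypothetical:

```python
import statistics

def viscometer_constant(eta_mpas, rho, t):
    """k in mm^2/s^2 from a calibration fluid: k = eta / (rho * t)."""
    return eta_mpas / (rho * t)

# Hypothetical calibration fluid: 9.8 mPa·s, 0.98 g/mL, flow time 500 s
k = viscometer_constant(9.8, 0.98, 500.0)

# Sample: three consecutive flow times (s); result valid if %RSD is NMT 2.0%
times = [402.1, 404.8, 403.5]
rsd = 100.0 * statistics.stdev(times) / statistics.mean(times)
assert rsd <= 2.0

nu = k * statistics.mean(times)   # kinematic viscosity, mm^2/s
eta = nu * 0.92                   # Newtonian viscosity, mPa·s (sample density 0.92 g/mL)
print(round(nu, 2), round(eta, 2))
```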
• METHOD II. SIMPLE U-TUBE (OR OSTWALD-TYPE) CAPILLARY VISCOMETER
Apparatus: The determination may be carried out with a simple U-tube capillary viscometer (Figure 2).
1 For example, the Cannon-Fenske capillary viscometer is one of the simple U-tube capillary viscometers and is also called a modified Ostwald-type capillary visc-
ometer.
Calibration and Calculation of kinematic and Newtonian viscosities of sample fluid: Proceed as directed in Method I.
For certain simple U-tube capillary viscometers, determine the viscometer constant at the same temperature as the sample
liquid under test.
Many Pharmacopeial articles either are hydrates or contain water in adsorbed form. As a result, the determination of the
water content is important in demonstrating compliance with the Pharmacopeial standards. Generally one of the methods giv-
en below is called for in the individual monograph, depending upon the nature of the article. In rare cases, a choice is allowed
between two methods. When the article contains water of hydration, Method I (Titrimetric), Method II (Azeotropic), or Method III
(Gravimetric) is employed, as directed in the individual monograph, and the requirement is given under the heading Water.
The heading Loss on Drying (see Loss on Drying á731ñ) is used in those cases where the loss sustained on heating may be not
entirely water.
METHOD I (TITRIMETRIC)
Determine the water by Method Ia, unless otherwise specified in the individual monograph.
Principle—The titrimetric determination of water is based upon the quantitative reaction of water with an anhydrous solu-
tion of sulfur dioxide and iodine in the presence of a buffer that reacts with hydrogen ions.
In the original titrimetric solution, known as Karl Fischer Reagent, the sulfur dioxide and iodine are dissolved in pyridine and
methanol. The test specimen may be titrated with the Reagent directly, or the analysis may be carried out by a residual titra-
tion procedure. The stoichiometry of the reaction is not exact, and the reproducibility of a determination depends upon such
factors as the relative concentrations of the Reagent ingredients, the nature of the inert solvent used to dissolve the test speci-
men, and the technique used in the particular determination. Therefore, an empirically standardized technique is used in order
to achieve the desired accuracy. Precision in the method is governed largely by the extent to which atmospheric moisture is
excluded from the system. The titration of water is usually carried out with the use of anhydrous methanol as the solvent for
the test specimen. In some cases, other suitable solvents may be used for special or unusual test specimens. In these cases, the
addition of at least 20% of methanol or other primary alcohol is recommended.
Apparatus—Any apparatus may be used that provides for adequate exclusion of atmospheric moisture and determination
of the endpoint. In the case of a colorless solution that is titrated directly, the endpoint may be observed visually as a change
in color from canary yellow to amber. The reverse is observed in the case of a test specimen that is titrated residually. More
commonly, however, the endpoint is determined electrometrically with an apparatus employing a simple electrical circuit that
serves to impress about 200 mV of applied potential between a pair of platinum electrodes immersed in the solution to be
titrated. At the endpoint of the titration a slight excess of the reagent increases the flow of current to between 50 and 150
microamperes for 30 s to 30 min, depending upon the solution being titrated. The time is shortest for substances that dissolve
in the reagent. With some automatic titrators, the abrupt change in current or potential at the endpoint serves to close a sole-
noid-operated valve that controls the buret delivering the titrant. Commercially available apparatus generally comprises a
closed system consisting of one or two automatic burets and a tightly covered titration vessel fitted with the necessary electro-
des and a magnetic stirrer. The air in the system is kept dry with a suitable desiccant, and the titration vessel may be purged
by means of a stream of dry nitrogen or current of dry air.
Reagent—Prepare the Karl Fischer Reagent as follows. Add 125 g of iodine to a solution containing 670 mL of methanol
and 170 mL of pyridine, and cool. Place 100 mL of pyridine in a 250-mL graduated cylinder, and, keeping the pyridine cold in
an ice bath, pass in dry sulfur dioxide until the volume reaches 200 mL. Slowly add this solution, with shaking, to the cooled
iodine mixture. Shake to dissolve the iodine, transfer the solution to the apparatus, and allow the solution to stand overnight
before standardizing. One mL of this solution when freshly prepared is equivalent to approximately 5 mg of water, but it dete-
riorates gradually; therefore, standardize it within 1 h before use, or daily if in continuous use. Protect from light while in use.
Store any bulk stock of the reagent in a suitably sealed, glass-stoppered container, fully protected from light, and under refrig-
eration. For determination of trace amounts of water (less than 1%), it is preferable to use a Reagent with a water equivalency
factor of not more than 2.0, which will lead to the consumption of a more significant volume of titrant.
A commercially available, stabilized solution of Karl Fischer type reagent may be used. Commercially available reagents con-
taining solvents or bases other than pyridine or alcohols other than methanol may be used also. These may be single solutions
or reagents formed in situ by combining the components of the reagents present in two discrete solutions. The diluted Re-
agent called for in some monographs should be diluted as directed by the manufacturer. Either methanol or other suitable
solvent, such as ethylene glycol monomethyl ether, may be used as the diluent.
Test Preparation—Unless otherwise specified in the individual monograph, use an accurately weighed or measured
amount of the specimen under test estimated to contain 2–250 mg of water. The amount of water depends on the water
equivalency factor of the Reagent and on the method of endpoint determination. In most cases, the minimum amount of
specimen, in mg, can be estimated using the formula:
FCV/KF
in which F is the water equivalency factor of the Reagent, in mg per mL; C is the used volume, in percent, of the capacity of the
buret; V is the buret volume, in mL; and KF is the limit or reasonable expected water content in the sample, in percent. C is
generally between 30% and 100% for manual titration, and between 10% and 100% for the instrumental method endpoint
determination. [NOTE—It is recommended that the product of FCV be greater than or equal to 200 for the calculation to en-
sure that the minimum amount of water titrated is greater than or equal to 2 mg.]
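The minimum-specimen estimate can be sketched numerically; the factor, buret, and water-content values below are hypothetical:

```python
def min_specimen_mg(f, c, v, kf):
    """Minimum specimen mass, in mg: FCV/KF.
    f = water equivalency factor of the Reagent (mg/mL); c = % of buret capacity
    used; v = buret volume (mL); kf = expected water content of the sample (%)."""
    assert f * c * v >= 200, "FCV should be >= 200 so that >= 2 mg of water is titrated"
    return f * c * v / kf

# Hypothetical: F = 5 mg/mL, 30% of a 5-mL buret, expected water content 0.5%
print(min_specimen_mg(5.0, 30.0, 5.0, 0.5))   # minimum specimen mass, mg
```

At this specimen mass, 0.5% water corresponds to 7.5 mg of water, comfortably above the 2-mg floor.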
Where the specimen under test is an aerosol with propellant, store it in a freezer for not less than 2 h, open the container,
and test 10.0 mL of the well-mixed specimen. In titrating the specimen, determine the endpoint at a temperature of 10° or
higher.
Where the specimen under test is capsules, use a portion of the mixed contents of not fewer than four capsules.
Where the specimen under test is tablets, use powder from not fewer than four tablets ground to a fine powder in an atmos-
phere of temperature and relative humidity known not to influence the results.
Where the monograph specifies that the specimen under test is hygroscopic, take an accurately weighed portion of the solid
into the titration vessel, proceeding as soon as possible and taking care to avoid moisture uptake from the atmosphere. If the
sample consists of a fixed amount of solid, such as a lyophilized product or a powder inside a vial, use a dry syringe to inject
an appropriate volume of methanol, or other suitable solvent, accurately measured, into a tared container, and shake to
dissolve the specimen. Using the same syringe, remove the solution from the container and transfer it to a titration vessel
prepared as directed for Procedure. Repeat the procedure with a second portion of methanol, or other suitable solvent, accurately
measured, add this washing to the titration vessel, and immediately titrate. Determine the water content, in mg, of a portion
of solvent of the same total volume as that used to dissolve the specimen and to wash the container and syringe, as directed
for Standardization of Water Solution for Residual Titration, and subtract this value from the water content, in mg, obtained in
the titration of the specimen under test. Dry the container and its closure at 100° for 3 h, allow to cool in a desiccator, and
weigh. Determine the weight of specimen tested from the difference in weight from the initial weight of the container.
When appropriate, the water may be desorbed or released from the sample by heat in an external oven connected with the
vessel, to where it is transferred with the aid of an inert and dried gas such as pure nitrogen. Any drift due to the transport gas
should be considered and corrected. Care should be taken in the selection of the heating conditions to avoid the formation of
water coming from dehydration due to decomposition of the sample constituents, which may invalidate this approach.
Standardization of the Reagent—Place enough methanol or other suitable solvent in the titration vessel to cover the elec-
trodes, and add sufficient Reagent to give the characteristic endpoint color, or 100 ± 50 microamperes of direct current at
about 200 mV of applied potential.
Purified Water, sodium tartrate dihydrate, a USP Reference Standard, or commercial standards with a certificate of analysis
traceable to a national standard may be used to standardize the Reagent. The reagent equivalency factor, the recommended
titration volume, buret size, and amount of standard to measure are factors to consider when deciding which standard and
how much to use.1 For Purified Water or water standards, quickly add the equivalent of between 2 and 250 mg of water. Cal-
culate the water equivalency factor, F, in mg of water per mL of reagent:
W/V
in which W is the weight, in mg, of the water contained in the aliquot of standard used; and V is the volume, in mL, of the
Reagent used in the titration. For sodium tartrate dihydrate, quickly add 20–125 mg of sodium tartrate dihydrate
(C4H4Na2O6 · 2H2O), accurately weighed by difference, and titrate to the endpoint. The water equivalence factor F, in mg of
water per mL of reagent, is given by the formula:
(W/V)(36.04/230.08)
in which 36.04 is two times the molecular weight of water and 230.08 is the molecular weight of sodium tartrate dihydrate; W
is the weight, in mg, of sodium tartrate dihydrate; and V is the volume, in mL, of the Reagent consumed in the second titra-
tion. Note that the solubility of sodium tartrate dihydrate in methanol is such that fresh methanol may be needed for addition-
al titrations of the sodium tartrate dihydrate standard.
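Both standardization formulas can be checked with a short sketch; the weights and titrant volumes below are hypothetical:

```python
MW_2H2O = 36.04       # two times the molecular weight of water
MW_TARTRATE = 230.08  # molecular weight of sodium tartrate dihydrate

def f_from_water(w_mg, v_ml):
    """F, mg water per mL Reagent, against Purified Water or a water standard: W/V."""
    return w_mg / v_ml

def f_from_tartrate(w_mg, v_ml):
    """F against sodium tartrate dihydrate: (W/V)(36.04/230.08)."""
    return (w_mg / v_ml) * (MW_2H2O / MW_TARTRATE)

# Hypothetical: 100 mg of the dihydrate consumes 3.15 mL of Reagent
print(round(f_from_tartrate(100.0, 3.15), 2))  # approximately 5 mg water per mL
```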
Procedure—Unless otherwise specified, transfer enough methanol or other suitable solvent to the titration vessel, ensuring
that the volume is sufficient to cover the electrodes (approximately 30–40 mL), and titrate with the Reagent to the electromet-
ric or visual endpoint to consume any moisture that may be present. (Disregard the volume consumed, because it does not
1 Consider a setup in which the reagent equivalency factor is 5 mg/mL, the buret volume is 5 mL, and the endpoint is determined instrumentally. Standard amounts equivalent
to between 2.5 mg and 22.5 mg of water (10%–90% of buret capacity) could be used based on the buret and the reagent equivalency factor. The upper end of
this range would involve an excessive amount of sodium tartrate dihydrate. If Purified Water or a standard is weighed, an analytical balance appropriate to the
amount weighed is required.
enter into the calculations.) Quickly add the Test Preparation, mix, and again titrate with the Reagent to the electrometric or
visual endpoint. Calculate the water content of the specimen taken, in mg:
SF
in which S is the volume, in mL, of the Reagent consumed in the second titration; and F is the water equivalence factor of the
Reagent.
Principle—See the information given in the section Principle under Method Ia. In the residual titration, excess Reagent is
added to the test specimen, sufficient time is allowed for the reaction to reach completion, and the unconsumed Reagent is
titrated with a standard solution of water in a solvent such as methanol. The residual titration procedure is applicable generally
and avoids the difficulties that may be encountered in the direct titration of substances from which the bound water is re-
leased slowly.
Apparatus, Reagent, and Test Preparation—Use Method Ia.
Standardization of Water Solution for Residual Titration—Prepare a Water Solution by diluting 2 mL of water with meth-
anol or other suitable solvent to 1000 mL. Standardize this solution by titrating 25.0 mL with the Reagent, previously standar-
dized as directed under Standardization of the Reagent. Calculate the water content, in mg per mL, of the Water Solution taken:
V′F/25
in which V′ is the volume of the Reagent consumed, and F is the water equivalence factor of the Reagent. Determine the water
content of the Water Solution weekly, and standardize the Reagent against it periodically as needed.
Procedure—Where the individual monograph specifies that the water content is to be determined by Method Ib, transfer
enough methanol or other suitable solvent to the titration vessel, ensuring that the volume is sufficient to cover the electrodes
(approximately 30–40 mL), and titrate with the Reagent to the electrometric or visual endpoint. Quickly add the Test Prepara-
tion, mix, and add an accurately measured excess of the Reagent. Allow sufficient time for the reaction to reach completion,
and titrate the unconsumed Reagent with standardized Water Solution to the electrometric or visual endpoint. Calculate the
water content of the specimen, in mg, taken:
F(X′ − XR)
in which F is the water equivalence factor of the Reagent; X′ is the volume, in mL, of the Reagent added after introduction of
the specimen; X is the volume, in mL, of standardized Water Solution required to neutralize the unconsumed Reagent; and R is
the ratio, V′/25 (mL Reagent/mL Water Solution), determined from the Standardization of Water Solution for Residual Titration.
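A numeric sketch of the residual-titration calculation, with hypothetical factor and volumes:

```python
def water_mg_residual(f, x_excess, x_back, v_std):
    """Water, mg, by residual titration: F(X' - X*R), where R = V'/25.
    f = water equivalence factor (mg/mL); x_excess = mL of Reagent added (X');
    x_back = mL of Water Solution used in back-titration (X); v_std = V' from
    the Water Solution standardization."""
    r = v_std / 25.0
    return f * (x_excess - x_back * r)

# Hypothetical: F = 5.0 mg/mL, 10.00 mL Reagent added, 6.00 mL Water Solution
# back-titrated, V' = 4.00 mL from standardization
print(round(water_mg_residual(5.0, 10.0, 6.0, 4.0), 2))
```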
Principle—The Karl Fischer reaction is used in the coulometric determination of water. Iodine, however, is not added in the
form of a volumetric solution but is produced in an iodide-containing solution by anodic oxidation. The reaction cell usually
consists of a large anode compartment and a small cathode compartment that are separated by a diaphragm. Other suitable
types of reaction cells (e.g., without diaphragms) may also be used. Each compartment has a platinum electrode that conducts
current through the cell. Iodine, which is produced at the anode electrode, immediately reacts with water present in the com-
partment. When all the water has been consumed, an excess of iodine occurs, which usually is detected electrometrically, thus
indicating the endpoint. Moisture is eliminated from the system by pre-electrolysis. Changing the Karl Fischer solution after
each determination is not necessary because individual determinations can be carried out in succession in the same reagent
solution. A requirement for this method is that each component of the test specimen is compatible with the other compo-
nents, and no side reactions take place. Samples are usually transferred into the vessel as solutions by means of injection
through a septum. Gases can be introduced into the cell by means of a suitable gas inlet tube. Precision in the method is pre-
dominantly governed by the extent to which atmospheric moisture is excluded from the system; thus, the introduction of sol-
ids into the cell may require precautions, such as working in a glove-box in an atmosphere of dry inert gas. Control of the
system may be monitored by measuring the amount of baseline drift, which does not preclude the need of any blank correc-
tion when used as a vehicle for sample introduction. This method is particularly suited to chemically inert substances like hy-
drocarbons, alcohols, and ethers. In comparison with the volumetric Karl Fischer titration, coulometry is a micro-method.
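Because the iodine is generated electrolytically, the instrument converts the charge passed at the endpoint directly into water found via Faraday's law (two electrons per mole of iodine, one mole of iodine per mole of water). The instrument performs this conversion internally; the sketch below merely illustrates the relation with a hypothetical charge:

```python
FARADAY = 96485.0  # C per mole of electrons
MW_WATER = 18.02   # g/mol

def water_mg_from_charge(q_coulombs):
    """Water, mg, from charge: one H2O per I2, two electrons per I2."""
    return q_coulombs * MW_WATER / (2.0 * FARADAY) * 1000.0

# Hypothetical endpoint charge of 21.4 C (about 10.72 C corresponds to 1 mg of water)
print(round(water_mg_from_charge(21.4), 2))
```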
When appropriate, the water may be desorbed or released from the sample by heat in an external oven connected with the
vessel, to where it is transferred with the aid of an inert and dried gas such as pure nitrogen. Any drift due to the transport gas
should be considered and corrected. Care should be taken in the selection of the heating conditions to avoid the formation of
water coming from dehydration due to decomposition of the sample constituents, which may invalidate this approach.
Apparatus—Any commercially available apparatus consisting of an absolutely tight system fitted with the necessary electro-
des and a magnetic stirrer is appropriate. The instrument's microprocessor controls the analytical procedure and displays the
results. Calibration of the instrument is not necessary, as the current consumed can be measured absolutely.
Reagent—See the manufacturer's recommendations.
Test Preparation—Where the specimen is a soluble solid, an appropriate quantity, accurately weighed, may be dissolved in
anhydrous methanol or other suitable solvents.
Where the specimen is an insoluble solid, an appropriate quantity, accurately weighed, may be extracted using a suitable
anhydrous solvent, and may be injected into the anolyte solution. Alternatively, an evaporation technique may be used in
which water is released and evaporated by heating the specimen in a tube in a stream of dry inert gas. The gas is then passed
into the cell.
Where the specimen is to be used directly without dissolving in a suitable anhydrous solvent, an appropriate quantity, accu-
rately weighed, may be introduced into the chamber directly.
Where the specimen is a liquid, and is miscible with anhydrous methanol or other suitable solvents, an appropriate quantity,
accurately weighed, may be added to anhydrous methanol or other suitable solvents.
Procedure—Using a dry device, inject or add directly an accurately measured amount of the sample or sample preparation
estimated to contain between 0.5 and 5 mg of water, or an amount recommended by the instrument manufacturer into the
anolyte, mix, and perform the coulometric titration to the electrometric endpoint. Read the water content of the liquid Test
Preparation directly from the instrument's display, and calculate the percentage that is present in the substance. Perform a
blank determination, as needed, and make any necessary corrections.
Apparatus—Use a 500-mL glass flask A connected by means of a trap B to a reflux condenser C by ground glass joints (see
Figure 1).
The critical dimensions of the parts of the apparatus are as follows. The connecting tube D is 9–11 mm in internal diameter.
The trap is 235–240 mm in length. The condenser, if of the straight-tube type, is approximately 400 mm in length and not
less than 8 mm in bore diameter. The receiving tube E has a 5-mL capacity, and its cylindrical portion, 146–156 mm in length,
is graduated in 0.1-mL subdivisions, so that the error of reading is not greater than 0.05 mL for any indicated volume. The
source of heat is preferably an electric heater with rheostat control or an oil bath. The upper portion of the flask and the con-
necting tube may be insulated.
Clean the receiving tube and the condenser with a suitable cleanser, thoroughly rinse with water, and dry in an oven. Pre-
pare the toluene to be used by first shaking with a small quantity of water, separating the excess water, and distilling the tol-
uene.
Procedure—Place in the dry flask a quantity of the substance, weighed accurately to the nearest centigram, which is expec-
ted to yield 2–4 mL of water. If the substance is of a pasty character, weigh it in a boat of metal foil of a size that will just pass
through the neck of the flask. If the substance is likely to cause bumping, add enough dry, washed sand to cover the bottom
of the flask, or a number of capillary melting-point tubes, about 100 mm in length, sealed at the upper end. Place about 200
mL of toluene in the flask, connect the apparatus, and fill the receiving tube E with toluene poured through the top of the
condenser. Heat the flask gently for 15 min and, when the toluene begins to boil, distill at the rate of about two drops per s
until most of the water has passed over, then increase the rate of distillation to about four drops per s. When the water has
apparently all distilled over, rinse the inside of the condenser tube with toluene while brushing down the tube with a tube
brush attached to a copper wire and saturated with toluene. Continue the distillation for five min, then remove the heat, and
allow the receiving tube to cool to room temperature. If any droplets of water adhere to the walls of the receiving tube, scrub
them down with a brush consisting of a rubber band wrapped around a copper wire and wetted with toluene. When the wa-
ter and toluene have separated completely, read the volume of water, and calculate the percentage that was present in the
substance.
Procedure for Chemicals—Proceed as directed in the individual monograph preparing the chemical as directed under Loss
on Drying á731ñ.
Procedure for Biologics—Proceed as directed in the individual monograph.
Procedure for Articles of Botanical Origin—Place about 10 g of the drug, prepared as directed (see Methods of Analysis
under Articles of Botanical Origin á561ñ) and accurately weighed, in a tared evaporating dish. Dry at 105° for 5 h, and weigh.
Continue the drying and weighing at 1-h intervals until the difference between two successive weighings corresponds to not
more than 0.25%.
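The constant-weight criterion above can be sketched numerically. The helper below is illustrative only, and it assumes the 0.25% difference is reckoned against the weight of sample taken:

```python
def loss_on_drying(initial_g, weighings_g, tolerance_pct=0.25):
    """Return the percentage loss once two successive weighings differ by
    no more than tolerance_pct of the sample weight taken; None if the
    weight is not yet constant."""
    for prev, curr in zip(weighings_g, weighings_g[1:]):
        if abs(prev - curr) / initial_g * 100.0 <= tolerance_pct:
            return (initial_g - curr) / initial_g * 100.0
    return None

# 10.000 g of drug: weighing after 5 h, then at 1-h intervals
print(round(loss_on_drying(10.000, [9.120, 9.105, 9.103]), 2))  # 8.95
```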
INTRODUCTION
Every crystalline phase of a given substance produces a characteristic X-ray diffraction pattern. Diffraction patterns can be
obtained from a randomly oriented crystalline powder composed of crystallites or crystal fragments of finite size. Essentially
three types of information can be derived from a powder diffraction pattern: the angular position of diffraction lines (depend-
ing on geometry and size of the unit cell), the intensities of diffraction lines (depending mainly on atom type and arrange-
ment, and particle orientation within the sample), and diffraction line profiles (depending on instrumental resolution, crystal-
lite size, strain, and specimen thickness).
Experiments giving angular positions and intensities of lines can be used for applications such as qualitative phase analysis
(e.g., identification of crystalline phases) and quantitative phase analysis of crystalline materials. An estimate of the amorphous
and crystalline fractions1 can also be made.
The X-ray powder diffraction (XRPD) method provides an advantage over other means of analysis in that it is usually nondes-
tructive in nature (to ensure a randomly oriented sample, specimen preparation is usually limited to grinding). XRPD investiga-
tions can also be carried out under in situ conditions on specimens exposed to nonambient conditions such as low or high
temperature and humidity.
PRINCIPLES
X-ray diffraction results from the interaction between X-rays and electron clouds of atoms. Depending on atomic arrange-
ment, interferences arise from the scattered X-rays. These interferences are constructive when the path difference between two
diffracted X-ray waves differs by an integral number of wavelengths. This selective condition is described by the Bragg equa-
tion, also called Bragg's law (see Figure 1).
2dhkl sin θhkl = nλ
1 There are many other applications of the X-ray powder diffraction technique that can be applied to crystalline pharmaceutical substances, such as determination
of crystal structures, refinement of crystal structures, determination of the crystallographic purity of crystalline phases, and characterization of crystallographic tex-
ture. These applications are not described in this chapter.
Official text. Reprinted from First Supplement to USP38-NF33.
368 á941ñ X-Ray Powder Diffraction / Physical Tests DSC
The wavelength, λ, of the X-rays is of the same order of magnitude as the distance between successive crystal lattice planes, or dhkl (also called d-spacings). θhkl is the angle between the incident ray and the family of lattice planes, and sin θhkl is inversely proportional to the distance between successive crystal planes or d-spacings.
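As a worked illustration of Bragg's law (a sketch only; the function name is ours, and a standard Cu Kα wavelength is assumed):

```python
import math

def d_spacing(two_theta_deg: float, wavelength_A: float, n: int = 1) -> float:
    # Bragg's law, 2 * d * sin(theta) = n * lambda, solved for d
    theta = math.radians(two_theta_deg / 2.0)
    return n * wavelength_A / (2.0 * math.sin(theta))

# Cu K-alpha radiation (~1.5406 A); a reflection observed at 2-theta = 20.0 deg
print(round(d_spacing(20.0, 1.5406), 3))  # 4.436
```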
The direction and spacing of the planes with reference to the unit cell axes are defined by the Miller indices {hkl}. These
indices are the reciprocals, reduced to the next-lower integer, of the intercepts that a plane makes with the unit cell axes. The
unit cell dimensions are given by the spacings a, b, and c, and the angles between them α, β, and γ.
The interplanar spacing for a specified set of parallel hkl planes is denoted by dhkl. Each such family of planes may show
higher orders of diffraction where the d values for the related families of planes nh, nk, nl are diminished by the factor 1/n (n
being an integer: 2, 3, 4, etc.).
Every set of planes throughout a crystal has a corresponding Bragg diffraction angle, θhkl, associated with it (for a specific λ). A powder specimen is assumed to be polycrystalline so that at any angle θhkl there are always crystallites in an orientation allowing diffraction according to Bragg's law.2 For a given X-ray wavelength, the positions of the diffraction peaks (also refer-
red to as “lines”, “reflections”, or “Bragg reflections”) are characteristic of the crystal lattice (d-spacings), their theoretical in-
tensities depend on the crystallographic unit cell content (nature and positions of atoms), and the line profiles depend on the
perfection and extent of the crystal lattice. Under these conditions, the diffraction peak has a finite intensity arising from atom-
ic arrangement, type of atoms, thermal motion, and structural imperfections, as well as from instrument characteristics.
The intensity is dependent upon many factors such as structure factor, temperature factor, crystallinity, polarization factor,
multiplicity, and Lorentz factor.
The main characteristics of diffraction line profiles are 2θ position, peak height, peak area, and shape (characterized by, e.g., peak width or asymmetry, analytical function, and empirical representation). An example of the type of powder patterns obtained for five different solid phases of a substance is shown in Figure 2.
2 An ideal powder for diffraction experiments consists of a large number of small, randomly oriented spherical crystallites (coherently diffracting crystalline do-
mains). If this number is sufficiently large, there are always enough crystallites in any diffracting orientation to give reproducible diffraction patterns.
Figure 2. X-ray powder diffraction patterns collected for five different solid phases of a substance (the intensities are normal-
ized).
In addition to the diffraction peaks, an X-ray diffraction experiment also generates a more or less uniform background, upon
which the peaks are superimposed. Besides specimen preparation, other factors contribute to the background—for example,
sample holder, diffuse scattering from air and equipment, and other instrumental parameters such as detector noise and gen-
eral radiation from the X-ray tube. The peak-to-background ratio can be increased by minimizing background and by choos-
ing prolonged exposure times.
INSTRUMENT
Instrument Setup
X-ray diffraction experiments are usually performed using powder diffractometers or powder cameras.
A powder diffractometer generally comprises five main parts: an X-ray source; the incident beam optics, which may perform
monochromatization, filtering, collimation, and/or focusing of the beam; a goniometer; the diffraction beam optics, which
may include monochromatization, filtering, collimation, and focusing or parallelizing of beam; and a detector. Data collection
and data processing systems are also required and are generally included in current diffraction measurement equipment.
Depending on the type of analysis to be performed (phase identification, quantitative analysis, lattice parameters determina-
tion, etc.), different XRPD instrument configurations and performance levels are required. The simplest instruments used to
measure powder patterns are powder cameras. Replacement of photographic film as the detection method by photon detec-
tors has led to the design of diffractometers in which the geometric arrangement of the optics is not truly focusing, but parafo-
cusing, such as in Bragg-Brentano geometry. The Bragg-Brentano parafocusing configuration is currently the most widely used
and is therefore briefly described here.
A given instrument may provide a horizontal or vertical θ/2θ geometry or a vertical θ/θ geometry. For both geometries, the incident X-ray beam forms an angle θ with the specimen surface plane, and the diffracted X-ray beam forms an angle 2θ with the direction of the incident X-ray beam (an angle θ with the specimen surface plane). The basic geometric arrangement is
represented in Figure 3. The divergent beam of radiation from the X-ray tube (the so-called primary beam) passes through the
parallel plate collimators and a divergence slit assembly and illuminates the flat surface of the specimen. All the rays diffracted
by suitably oriented crystallites in the specimen at an angle 2θ converge to a line at the receiving slit. A second set of parallel
plate collimators and a scatter slit may be placed either behind or before the receiving slit. The axes of the line focus and of the
receiving slit are at equal distances from the axis of the goniometer. The X-ray quanta are counted by a radiation detector,
usually a scintillation counter, a sealed-gas proportional counter, or a position-sensitive solid-state detector such as an imaging
plate or CCD detector. The receiving slit assembly and the detector are coupled together and move tangentially to the focus-
ing circle. For θ/2θ scans, the goniometer rotates the specimen around the same axis as that of the detector, but at half the rotational speed, in a θ/2θ motion. The surface of the specimen thus remains tangential to the focusing circle. The parallel
plate collimator limits the axial divergence of the beam and hence partially controls the shape of the diffracted line profile.
A diffractometer may also be used in transmission mode. The advantage with this technology is to lessen the effects due to
preferred orientation. A capillary of about 0.5- to 2-mm thickness can also be used for small sample amounts.
X-Ray Radiation
In the laboratory, X-rays are obtained by bombarding a metal anode with electrons emitted by the thermionic effect and
accelerated in a strong electric field (using a high-voltage generator). Most of the kinetic energy of the electrons is converted
to heat, which limits the power of the tubes and requires efficient anode cooling. A 20- to 30-fold increase in brilliance can be
obtained by using rotating anodes and by using X-ray optics. Alternatively, X-ray photons may be produced in a large-scale
facility (synchrotron).
The spectrum emitted by an X-ray tube operating at sufficient voltage consists of a continuous background of polychromatic
radiation and additional characteristic radiation that depends on the type of anode. Only this characteristic radiation is used in
X-ray diffraction experiments. The principal radiation sources used for X-ray diffraction are vacuum tubes using copper, molyb-
denum, iron, cobalt, or chromium as anodes; copper, molybdenum, or cobalt X-rays are employed most commonly for organ-
ic substances (the use of a cobalt anode can especially be preferred to separate distinct X-ray lines). The choice of radiation to
be used depends on the absorption characteristics of the specimen and possible fluorescence by atoms present in the speci-
men. The wavelengths used in powder diffraction generally correspond to the Kα radiation from the anode. Consequently, it is advantageous to make the X-ray beam “monochromatic” by eliminating all the other components of the emission spectrum. This can be partly achieved using Kβ filters—that is, metal filters selected as having an absorption edge between the Kα and Kβ wavelengths emitted by the tube. Such a filter is usually inserted between the X-ray tube and the specimen. Another more commonly used way to obtain a monochromatic X-ray beam is via a large monochromator crystal (usually referred to as a “monochromator”). This crystal is placed before or behind the specimen and diffracts the different characteristic peaks of the X-ray beam (i.e., Kα and Kβ) at different angles so that only one of them may be selected to enter into the detector. It is even possible to separate Kα1 and Kα2 radiations by using a specialized monochromator. Unfortunately, the gain in getting a monochromatic beam by using a filter or a monochromator is counteracted by a loss in intensity. Another way of separating Kα and Kβ wavelengths is by using curved X-ray mirrors that can simultaneously monochromate and focus or parallelize the X-ray beam.
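The Kβ-filter criterion (an absorption edge lying between the Kα and Kβ wavelengths) can be checked numerically. The nickel and copper values below are standard tabulated figures; the helper function itself is ours:

```python
# Tabulated wavelengths/edges, in angstroms
CU_K_ALPHA = 1.5406  # Cu K-alpha1 emission line
CU_K_BETA = 1.3922   # Cu K-beta emission line
NI_K_EDGE = 1.488    # Ni K absorption edge

def is_suitable_kbeta_filter(edge_A: float, k_alpha_A: float, k_beta_A: float) -> bool:
    # A filter suits the anode when K-beta < absorption edge < K-alpha
    return k_beta_A < edge_A < k_alpha_A

print(is_suitable_kbeta_filter(NI_K_EDGE, CU_K_ALPHA, CU_K_BETA))  # True
```

The check reproduces the classical pairing of a nickel filter with copper radiation.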
RADIATION PROTECTION
Exposure of any part of the human body to X-rays can be injurious to health. It is therefore essential that whenever X-ray
equipment is used, adequate precautions be taken to protect the operator and any other person in the vicinity. Recommended
practice for radiation protection as well as limits for the levels of X-radiation exposure are those established by national legisla-
tion in each country. If there are no official regulations or recommendations in a country, the latest recommendations of the
International Commission on Radiological Protection should be applied.
SPECIMEN PREPARATION AND MOUNTING
The preparation of the powdered material and the mounting of the specimen in a suitable holder are critical steps in many
analytical methods, particularly for X-ray powder diffraction analysis, since they can greatly affect the quality of the data to be
collected.3 The main sources of errors due to specimen preparation and mounting are briefly discussed in the following section
for instruments in Bragg-Brentano parafocusing geometry.
Specimen Preparation
In general, the morphology of many crystalline particles tends to give a specimen that exhibits some degree of preferred
orientation in the specimen holder. This is particularly evident for needle-like or platelike crystals when size reduction yields
finer needles or platelets. Preferred orientation in the specimen influences the intensities of various reflections so that some are
more intense and others less intense, compared to what would be expected from a completely random specimen. Several
techniques can be employed to improve randomness in the orientation of crystallites (and therefore to minimize preferred ori-
entation), but further reduction of particle size is often the best and simplest approach. The optimum number of crystallites
depends on the diffractometer geometry, the required resolution, and the specimen attenuation of the X-ray beam. In some
cases, particle sizes as large as 50 µm will provide satisfactory results in phase identification. However, excessive milling (crystallite sizes less than approximately 0.5 µm) may cause line broadening and significant changes to the sample itself, such as
• specimen contamination by particles abraded from the milling instruments (mortar, pestle, balls, etc.),
• reduced degree of crystallinity,
• solid-state transition to another polymorph,
• chemical decomposition,
• introduction of internal stress, and
• solid-state reactions.
Therefore, it is advisable to compare the diffraction pattern of the nonground specimen with that corresponding to a speci-
men of smaller particle size (e.g., a milled specimen). If the X-ray powder diffraction pattern obtained is of adequate quality
considering its intended use, then grinding may not be required.
It should be noted that if a sample contains more than one phase and if sieving is used to isolate particles to a specific size,
the initial composition may be altered.
Specimen Mounting
A specimen surface that is offset by D with reference to the diffractometer rotation axis causes systematic errors that are very difficult to avoid entirely; for the reflection mode, this results in absolute D · cosθ shifts4 in 2θ positions (typically of the order of 0.01° in 2θ at low angles for a displacement D = 15 µm) and asymmetric broadening of the profile toward low 2θ values. Use of an appropriate internal standard allows the detection and correction of this effect simultaneously with that arising from specimen transparency. This effect is by far the largest source of errors in data collected on well-aligned diffractometers.
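The magnitude of the displacement error can be sketched with the common approximation Δ2θ ≈ 2D·cosθ/R, where R is the goniometer radius (a parameter not stated in the text; the 200-mm value below is an assumption for illustration):

```python
import math

def displacement_shift_deg(offset_mm: float, theta_deg: float, radius_mm: float = 200.0) -> float:
    # Approximate 2-theta shift (degrees) from a specimen-height offset D:
    # delta_2theta ~ 2 * D * cos(theta) / R  (result of the division is in radians)
    shift_rad = 2.0 * offset_mm * math.cos(math.radians(theta_deg)) / radius_mm
    return math.degrees(shift_rad)

# A 15-micrometer (0.015-mm) offset near theta = 5 deg on a 200-mm goniometer
print(round(displacement_shift_deg(0.015, 5.0), 4))  # 0.0086
```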
When the XRPD method in reflection mode is applied, it is often preferable to work with specimens of “infinite thickness”.
To minimize the transparency effect, it is advisable to use a nondiffracting substrate (zero background holder)—for example, a
3 Similarly, changes in the specimen can occur during data collection in the case of a nonequilibrium specimen (temperature, humidity).
4 Note that a goniometer zero alignment shift would result in a constant shift on all observed 2θ-line positions; in other words, the whole diffraction pattern is, in this case, translated by an offset of Z° in 2θ.
plate of single crystalline silicon cut parallel to the 510 lattice planes.5 One advantage of the transmission mode is that prob-
lems with sample height and specimen transparency are less important.
The use of an appropriate internal standard allows the detection and correction of this effect simultaneously with that arising
from specimen displacement.
The goniometer and the corresponding incident and diffracted X-ray beam optics have many mechanical parts that need
adjustment. The degree of alignment or misalignment directly influences the quality of the results of an XRPD investigation.
Therefore, the different components of the diffractometer must be carefully adjusted (optical and mechanical systems, etc.) to
adequately minimize systematic errors, while optimizing the intensities received by the detector. The search for maximum in-
tensity and maximum resolution is always antagonistic when aligning a diffractometer. Hence, the best compromise must be
sought while performing the alignment procedure. There are many different configurations, and each supplier's equipment
requires specific alignment procedures. The overall diffractometer performance must be tested and monitored periodically, us-
ing suitable certified reference materials. Depending on the type of analysis, other well-defined reference materials may also be
employed, although the use of certified reference materials is preferred.
QUALITATIVE PHASE ANALYSIS (IDENTIFICATION OF PHASES)
The identification of the phase composition of an unknown sample by XRPD is usually based on the visual or computer-assisted comparison of a portion of its X-ray powder pattern to the experimental or calculated pattern of a reference material.
Ideally, these reference patterns are collected on well-characterized single-phase specimens. This approach makes it possible in
most cases to identify a crystalline substance by its 2q-diffraction angles or d-spacings and by its relative intensities. The com-
puter-aided comparison of the diffraction pattern of the unknown sample to the comparison data can be based on either a
more or less extended 2q range of the whole diffraction pattern or on a set of reduced data derived from the pattern. For
example, the list of d-spacings and normalized intensities, Inorm, a so-called (d, Inorm) list extracted from the pattern, is the crys-
tallographic fingerprint of the material and can be compared to (d, Inorm) lists of single-phase samples compiled in databases.
For most organic crystals, when using Cu Kα radiation, it is appropriate to record the diffraction pattern in a 2θ-range from as near 0° as possible to at least 40°. The agreement in the 2θ-diffraction angles between specimen and reference is within 0.2° for the same crystal form, while relative intensities between specimen and reference may vary considerably due to preferred orientation effects. By their very nature, variable hydrates and solvates are recognized to have varying unit cell dimensions, and as such, shifting occurs in peak positions of the measured XRPD patterns for these materials. In these unique materials, variance in 2θ positions of greater than 0.2° is not unexpected. As such, peak position variances such as 0.2° are not applicable to these materials. For other types of samples (e.g., inorganic salts), it may be necessary to extend the 2θ region scanned to well beyond 40°. It is generally sufficient to scan past the 10 strongest reflections identified in single-phase X-ray powder diffraction database files.
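A minimal sketch of the comparison step, using the 0.2° agreement criterion discussed above (the peak lists are invented for illustration; real identification works on full (d, Inorm) lists with database software):

```python
def match_phase(sample_peaks, reference_peaks, tol_two_theta=0.2):
    # Every reference 2-theta position must be matched by a sample peak
    # within the tolerance for the phase to be considered present.
    return all(
        any(abs(s - r) <= tol_two_theta for s in sample_peaks)
        for r in reference_peaks
    )

reference = [7.2, 12.1, 14.4, 19.8, 24.3]        # illustrative 2-theta values (deg)
sample = [7.25, 12.05, 14.45, 19.9, 24.25, 28.0]  # extra sample peaks are allowed
print(match_phase(sample, reference))  # True
```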
It is sometimes difficult or even impossible to identify phases in the following cases:
• noncrystallized or amorphous substances,
• the components to be identified are present in low mass fractions of the analyte amounts (generally less than 10% m/m),
• pronounced preferred orientation effects,
• the phase has not been filed in the database used,
• the formation of solid solutions,
• the presence of disordered structures that alter the unit cell,
• the specimen comprises too many phases,
• the presence of lattice deformations,
• the structural similarity of different phases.
QUANTITATIVE PHASE ANALYSIS
If the sample under investigation is a mixture of two or more known phases, of which not more than one is amorphous, the
percentage (by volume or by mass) of each crystalline phase and of the amorphous phase can in many cases be determined.
Quantitative phase analysis can be based on the integrated intensities, on the peak heights of several individual diffraction
lines,6 or on the full pattern. These integrated intensities, peak heights, or full-pattern data points are compared to the corre-
sponding values of reference materials. These reference materials must be single phase or a mixture of known phases. The diffi-
5 In the case of a thin specimen with low attenuation, accurate measurements of line positions can be made with focusing diffractometer configurations in either
transmission or reflection geometry. Accurate measurements of line positions on specimens with low attenuation are preferably made using diffractometers with
parallel beam optics. This helps to reduce the effects of specimen thickness.
6 If the crystal structures of all components are known, the Rietveld method can be used to quantify them with good accuracy. If the crystal structures of the
components are not known, the Pawley method or the partial least-squares (PLS) method can be used.
culties encountered during quantitative analysis are due to specimen preparation (the accuracy and precision of the results re-
quire, in particular, homogeneity of all phases and a suitable particle size distribution in each phase) and to matrix effects.
In favorable cases, amounts of crystalline phases as small as 10% may be determined in solid matrices.
Polymorphic Samples
For a sample composed of two polymorphic phases α and β, the following expression may be used to quantify the fraction Fα of phase α:
Fα = 1/[1 + K(Iβ/Iα)]
The fraction is derived by measuring the intensity ratio between the two phases, knowing the value of the constant K. K is the ratio of the absolute intensities of the two pure polymorphic phases, I0α/I0β. Its value can be determined by measuring standard samples.
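The expression above can be applied directly; the intensity and K values below are invented for illustration:

```python
def polymorph_fraction_alpha(i_alpha: float, i_beta: float, K: float) -> float:
    # F_alpha = 1 / (1 + K * I_beta / I_alpha); K is measured on standard samples
    return 1.0 / (1.0 + K * i_beta / i_alpha)

# Equal measured intensities with K = 1 correspond to a 50:50 mixture
print(polymorph_fraction_alpha(1000.0, 1000.0, 1.0))  # 0.5
```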
ESTIMATE OF THE AMORPHOUS AND CRYSTALLINE FRACTIONS
In a mixture of crystalline and amorphous phases, the crystalline and amorphous fractions can be estimated in several ways.
The choice of the method used depends on the nature of the sample:
• If the sample consists of crystalline fractions and an amorphous fraction of different chemical compositions, the amounts
of each of the individual crystalline phases may be estimated using appropriate standard substances, as described above.
The amorphous fraction is then deduced indirectly by subtraction.
• If the sample consists of one amorphous and one crystalline fraction, either as a 1-phase or a 2-phase mixture, with the
same elemental composition, the amount of the crystalline phase (the “degree of crystallinity”) can be estimated by
measuring three areas of the diffractogram:
A = total area of the peaks arising from diffraction from the crystalline fraction of the sample,
B = total area below area A,
C = background area (due to air scattering, fluorescence, equipment, etc.).
When these areas have been measured, the degree of crystallinity can be roughly estimated as:
% crystallinity = 100A/(A + B – C)
It is noteworthy that this method does not yield absolute degree-of-crystallinity values and hence is generally used for comparative purposes only. More sophisticated methods are also available, such as the Ruland method.
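The rough estimate above translates directly into code (the integrated areas are invented, arbitrary units; as noted, the result is comparative only):

```python
def percent_crystallinity(A: float, B: float, C: float) -> float:
    # A: total crystalline peak area; B: total area below A; C: background area
    return 100.0 * A / (A + B - C)

print(round(percent_crystallinity(650.0, 500.0, 150.0), 1))  # 65.0
```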
SINGLE CRYSTAL STRUCTURE
In general, the determination of crystal structures is performed from X-ray diffraction data obtained using single crystals.
However, crystal structure analysis of organic crystals is a challenging task, since the lattice parameters are comparatively large,
the symmetry is low, and the scattering properties are normally very low. For any given crystalline form of a substance, the
knowledge of the crystal structure allows for calculating the corresponding XRPD pattern, thereby providing a preferred orien-
tation-free reference XRPD pattern, which may be used for phase identification.