Encyclopedia of Pharmaceutical Technology, Optimization

Optimization Methods

Gareth A. Lewis
Sanofi-Synthelabo Research, Chilly Mazarin, France

INTRODUCTION

What is Optimization?

Optimization of a formulation or process is finding the best possible composition or operating conditions. Determining such a composition or set of conditions is an enormous task, probably impossible and certainly unnecessary; in practice, optimization may be considered as the search for a result that is satisfactory and at the same time the best possible within a limited field of search. Thus, the type and components of a formulation may be selected, according to previous experience, by expert knowledge (possibly using an expert system), or by systematic screening as described later. Then the relative and/or total proportions of the excipients are varied to obtain the best endpoint, or a process is chosen, and a study is carried out to determine the best operating conditions to obtain the desired formulation properties. Both of these are optimization studies. This article concentrates on statistical experimental design-based optimization.

Brief Historical Review

Statistical methods for screening, factor studies, and optimization have been available for a long time: factorial designs since 1926;[1] screening designs since 1946;[2] and the central composite design for response surface optimization, introduced by Box and Wilson in 1951.[3] Their use started to be described in the pharmaceutical literature from the early 1970s, but it was only from approximately 1988 that there was a sudden increase in the number of published articles, and the numbers have continued to rise. A preconception of the difficulty or complexity of experimental design had to be overcome. The change has been attributed, of course, to a great extent to the availability of computing power and of relatively inexpensive high-performance software that allows previously difficult or advanced methods to be applied. In particular, much attention is now being given to robust processes and formulations, and there are developments in treating non-linear and highly correlated responses.[4]

Screening, Factor Studies, and Optimization Methods for Optimization

Systematic screening and factor influence studies are There are four primary methods. First, there is the stat-
closely related to optimization, being often sequential istically designed experiment, in which experiments are
stages in the development process and involving statisti- set up in a (normally regular) matrix to estimate the
cal experimental design methods. Screening methods coefficients in a mathematical model that predicts
are used to identify important and critical effects, for responses within the limits of formulation or operating
example, in the manufacturing process. Factor studies conditions being studied. This is generally the most
are quantitative studies of the effects of changing poten- powerful method, provided the experimentation zone
tially critical process and formulation parameters. They has been correctly identified, and is the subject of most
involve factorial design and are also quite often referred of this article.
to as screening studies; however, the resulting relation- Second, the direct optimization method, the best
ships have just as often been used for optimization. known being the sequential simplex, is a rapid and
The type of study carried out will depend on the powerful method for determining an experimental
stage of the project. In particular, experimental design domain, best combined with experimental design for
may be carried out in stages, and the experiments of a the optimization itself.
factor study may be augmented by further experiments Third, there is the one-factor-at-a-time method in
to a design giving the detailed information needed which the experimenter varies first one factor to find the
for true optimization. It cannot be stressed to highly best value, then another. Its disadvantages are that it can-
that the quality of a statistically designed experiment not be used for multiple responses and that it will not
depends on the choice of experimental run with respect work when there are strong interactions between factors.
to an a priori model, and this quality can and must be Finally, the non-systematic approach in which the
assessed before starting the experiments. knowledge and intuition of the developer allow him
Encyclopedia of Pharmaceutical Technology DOI: 10.1081/E-EPT-100200031
Copyright © 2007 by Informa Healthcare USA, Inc. All rights reserved.

to improve results; changing a number of factors at the same time is often surprisingly successful in the hands of a skilled worker. Where he is less skilled or less lucky, he can waste a remarkable amount of time and resources.
The use of artificial intelligence and expert systems is treated elsewhere in this work.

SCREENING

Obtaining a Formulation Suitable for Optimization

Once the dosage form has been selected, the excipients must be identified, their choice often limited by practical considerations of time and resources, determined by patents, company practice, or according to expert knowledge. However, it may be possible or necessary to test a number of different excipients for each function, for example, several diluents, lubricants, and binders. This approach has proved useful in drug–excipient compatibility testing, in which protoformulations are set up according to a statistical screening design to assess stability and compatibility.
Here the factor is the excipient's function. This can be set at different levels, the level being the excipient itself. So the factor may be "binder," and the levels are, for example, HPMC, povidone, polyvinylacetate, and no binder present. A mathematical model relates the response (in this case, degradation) to composition. It includes variables corresponding to each factor with (qualitative) levels corresponding to each excipient. Plackett and Burman[2] described designs suitable for treating this kind of problem. Designs with the factors at only two levels are widely used. However, there are other designs at 3, 4, and 5 levels, as well as asymmetric designs derived from them in which the various factors take different numbers of levels.[5,6]
It is assumed that there are no interactions between factors; that is to say, the effect of a given excipient on stability does not depend on what other excipients are found in the formulation. (The same reasoning applies to other kinds of factors or responses.) This can only be an approximation; however, if it should be necessary to take interactions into account, many more experiments would be needed, and it would probably be necessary to limit the number of levels for each factor to two for the number of experiments to be manageable.
The choice of excipients may be considered a qualitative optimization, their quantitative compositions not having yet been optimized. This, and the fact that the process used will most likely be on a small laboratory scale, may affect the choice of excipients. However, it is in most circumstances an unavoidable limitation.
An example of such a qualitative screening is shown in Table 1. This is an experimental design for testing the compatibilities of an experimental drug (at two concentrations) with a number of excipients. The samples, which were wet granulated, were stored for 3 weeks at 50°C/50% relative humidity. The results are also given in Table 1. The mean degradation level was high, at 6.2%, indicating a fairly unstable drug. The effects of each excipient were calculated by linear regression or, because the design is orthogonal, by linear combinations of the responses, and plotted in Fig. 1. There, the degradation for each excipient is calculated within each excipient type (e.g., disintegrant), setting the excipients of the remaining types to a hypothetical mean value. Thus, the value for magnesium stearate is the mean response for all mixtures containing magnesium stearate, and the effect of magnesium stearate on the response is the difference between this figure and the global mean.
Inspecting the results shows that the disintegrant and binder have major effects, and mixtures containing

Table 1 Experimental design and plan for granulated protoformulations

Number  Diluent        Disintegrant  Lubricant          Binder    Dose (%)  Degradation (%)
1       Lactose        CCNa (a)      Mg stearate        Povidone  0.25      12.26
2       Cellulose (b)  CCNa          Mg stearate        HPMC      1.0        7.27
3       Phosphate (c)  CCNa          Glyceryl behenate  Povidone  1.0       11.43
4       Mannitol       CCNa          Glyceryl behenate  HPMC      0.25       4.94
5       Lactose        NaSG (d)      Glyceryl behenate  HPMC      1.0        1.63
6       Cellulose      NaSG          Glyceryl behenate  Povidone  0.25       4.56
7       Phosphate      NaSG          Mg stearate        HPMC      0.25       2.49
8       Mannitol       NaSG          Mg stearate        Povidone  1.0        4.79

(a) Croscarmellose sodium. (b) Microcrystalline cellulose. (c) Calcium hydrogen phosphate. (d) Sodium starch glycolate.
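Because the design in Table 1 is orthogonal, the effects plotted in Fig. 1 reduce to simple linear combinations of the responses: the mean over all runs containing a given excipient, minus the global mean. A minimal sketch in Python (the run data are transcribed from Table 1; the helper names are my own):

```python
# Degradation results transcribed from Table 1; helper names are my own.
runs = [
    # (diluent, disintegrant, lubricant, binder, dose %, degradation %)
    ("lactose",   "CCNa", "Mg stearate",       "povidone", 0.25, 12.26),
    ("cellulose", "CCNa", "Mg stearate",       "HPMC",     1.0,   7.27),
    ("phosphate", "CCNa", "glyceryl behenate", "povidone", 1.0,  11.43),
    ("mannitol",  "CCNa", "glyceryl behenate", "HPMC",     0.25,  4.94),
    ("lactose",   "NaSG", "glyceryl behenate", "HPMC",     1.0,   1.63),
    ("cellulose", "NaSG", "glyceryl behenate", "povidone", 0.25,  4.56),
    ("phosphate", "NaSG", "Mg stearate",       "HPMC",     0.25,  2.49),
    ("mannitol",  "NaSG", "Mg stearate",       "povidone", 1.0,   4.79),
]

grand_mean = sum(r[-1] for r in runs) / len(runs)

def level_mean(column, excipient):
    """Mean degradation over the runs containing the given excipient."""
    values = [r[-1] for r in runs if r[column] == excipient]
    return sum(values) / len(values)

# The mean degradation is the 6.2% quoted in the text ...
assert round(grand_mean, 1) == 6.2
# ... and each excipient's effect is its level mean minus the grand mean;
# mixtures with sodium starch glycolate degrade less than those with CCNa.
assert level_mean(1, "NaSG") < grand_mean < level_mean(1, "CCNa")
```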

sodium starch glycolate and HPMC are more stable than those containing croscarmellose sodium and povidone, respectively. Diluents had only small effects here; however, these were much greater in the mixtures stored at low humidity, where mixtures containing microcrystalline cellulose or, especially, calcium phosphate were less stable than those containing lactose or mannitol. Thus, a capsule based on lactose, sodium starch glycolate, HPMC, and magnesium stearate (the last being selected for reasons of feasibility, there being no difference in stability between it and glyceryl behenate) was formulated and gave satisfactory stability.

Fig. 1 Compatibility study—effects of excipients (diluents, disintegrants, lubricants, binders, and dose) on degradation (%), calculated from the data of Table 1, relative to a hypothetical ("mean") reference state.

Before Optimizing a Process

The major choice to be made here is that of equipment, and that will depend on what is available in the laboratory and also in the factory. There may be a very large number of factors to be studied, and it will probably be necessary to identify the critical factors before optimizing the process. This stage will probably be at the laboratory scale, whereas the optimization proper is carried out at pilot scale.
Because process factors are usually quantitative and continuous, often only two levels, at the minimum and maximum values, are tested in screening and factor influence studies. Thus, the highly efficient two-level Plackett–Burman designs and two-level factorial designs may be used for screening. For example, in screening (assuming no interactions), up to 11 factors (continuous, discrete, or qualitative with two levels) may be tested by means of 12 experimental runs (Table 2). The difference between minimum and maximum for each factor is generally quite large. Such a test clears the ground for the optimization process.

Methods for screening factors

Because a large number of factors may need to be screened, the postulated model must be simple. It is usually assumed that the response(s) y depends only on the level (value or state) of each factor xi separately and not on combinations of levels. The model is thus first-order, for example:

y = b0 + b1x1 + b2x2 + b3x3 + b4x4 + b5x5 + e

If the factors are quantitative, they are set at their extreme values. Thus, if the factor is granulation time, and the possible range is 1.5–7 min, the normal values tested are 1.5 min and 7 min. They are expressed in terms of dimensionless coded variables, normally taking the values -1 and +1. Thus, on transformation to the coded variable x1, 1.5 min corresponds to x1 = -1, and 7 min corresponds to x1 = +1.

Table 2 A Plackett–Burman design of 12 experiments

Experiment  X1  X2  X3  X4  X5  X6  X7  X8  X9  X10  X11
1           +1  +1  -1  +1  +1  +1  -1  -1  -1  +1   -1
2           -1  +1  +1  -1  +1  +1  +1  -1  -1  -1   +1
3           +1  -1  +1  +1  -1  +1  +1  +1  -1  -1   -1
4           -1  +1  -1  +1  +1  -1  +1  +1  +1  -1   -1
5           -1  -1  +1  -1  +1  +1  -1  +1  +1  +1   -1
6           -1  -1  -1  +1  -1  +1  +1  -1  +1  +1   +1
7           +1  -1  -1  -1  +1  -1  +1  +1  -1  +1   +1
8           +1  +1  -1  -1  -1  +1  -1  +1  +1  -1   +1
9           +1  +1  +1  -1  -1  -1  +1  -1  +1  +1   -1
10          -1  +1  +1  +1  -1  -1  -1  +1  -1  +1   +1
11          +1  -1  +1  +1  +1  -1  -1  -1  +1  -1   +1
12          -1  -1  -1  -1  -1  -1  -1  -1  -1  -1   -1
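The design in Table 2 can be generated rather than typed in. One standard construction (assumed here; the article itself simply tabulates the design) cyclically shifts the first row to obtain the next ten runs and appends a final run with every factor at -1. A sketch in Python, with checks of the balance and orthogonality that make the effect calculation for these designs so simple:

```python
# Generate the 12-run Plackett-Burman design of Table 2: ten cyclic shifts
# of the generator (first) row, followed by a final run with every factor
# at -1.  This cyclic construction is a standard one and is assumed here.
first_row = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

design = []
row = first_row[:]
for _ in range(11):
    design.append(row)
    row = [row[-1]] + row[:-1]          # shift one position to the right
design.append([-1] * 11)

assert len(design) == 12
assert design[1] == [-1, +1, +1, -1, +1, +1, +1, -1, -1, -1, +1]  # run 2 of Table 2

# Balance and orthogonality: six runs at +1 and six at -1 in every column,
# and a zero inner product between any two columns, which is what lets each
# effect be estimated independently by a simple difference of sums.
cols = list(zip(*design))
assert all(sum(col) == 0 for col in cols)
assert all(sum(a * b for a, b in zip(cols[i], cols[j])) == 0
           for i in range(11) for j in range(i + 1, 11))
```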

If the factors are qualitative, they may take any number of levels; only two-level designs are described here. Qualitative levels are set arbitrarily at the coded levels. If, for example, the screening method was one of the factors tested, wet screening could be set at -1 and dry screening at +1 (or vice versa).
Quite wide limits are generally chosen for screening quantitative factors. They are then often narrowed for the more detailed quantitative study of the influence of factors, where interactions between factors are taken into account and a predictive model is determined for optimization.
The designs proposed by Plackett and Burman in 1946[2] comprise experiments in multiples of four. They allow screening of up to one less factor than the number of experiments. Those with 2^n experiments (4, 8, 16, 32, ...) are also fractional factorial designs. The non-factorial designs have particular properties and complex aliasing, which has been held to make their interpretation difficult but also gives them certain advantages over the fractional factorial designs. The 12-experiment design, shown in coded variables (Table 2), is such a design and is useful for about 7–11 factors.
The structure of the design is shown clearly in the table because the experiments are in their standard order. However, they should be carried out in a random order, as should all the designs described here, as much as is practicable.
The coefficient bi is the effect of the factor Xi and is equal to half the average change in the response y when the level of the factor is changed from xi = -1 to xi = +1. It is estimated in the Plackett–Burman design by subtracting the sum of the responses for the experiments in which xi = -1 from the sum for those in which xi = +1 and dividing by the number of experiments. Important and unimportant effects can then be identified according to their absolute values. (Determining active factors from the results of a factorial design is shown later.)

Use of results of a screening design

Estimation of the effects allows influential or possibly influential factors to be identified. Non-influential factors (small effects) will not require further study. They may be set at their midpoints, at their most economical values (e.g., a short mixing time), or at their apparently best value, even if the measured effect is apparently non-significant.
After elimination of these non-influential factors, there may still be too many factors to optimize in terms of the resources available (time, raw material, operators, availability of equipment, etc.). Generally, the less influential of these factors are kept constant, equal to their best level, and the remainder optimized. In more complex situations, it is advisable to carry out a more detailed study between the screening and optimization (response surface) studies. This could be a completion of the screening study by means of a complementary foldover design[3,7] or a separate quantitative study to allow the individual effects of the factors and/or their binary interactions to be calculated separately (shown in factorial designs, later).
All these studies on the process are generally done after the optimization of the formulation. However, because the effects of formulation and process changes are not generally independent, it may become necessary to carry out some sort of process study at the same time as the formulation optimization.

QUANTITATIVE PROCESS STUDIES USING FACTORIAL DESIGNS

Purpose

Whereas the purpose of a screening study is to determine which of a large number of factors have an influence on the formulation or process, that of a factor study is to determine quantitatively the influence of the different factors together on the response variables. The number of levels is usually again limited to two, but sufficient experiments are carried out to allow for interactions between factors.

Two-level full factorial designs

The simplest such designs are the 2^k full factorial designs, in which the experiments are all the 2^k possible combinations of the two levels of k factor variables. Therefore, they consist of 4, 8, 16, 32, 64, ... experiments for 2, 3, 4, 5, 6, ... factors. Examples of the 2^2, 2^3, and 2^4 designs are nested within Table 3: lines 1–4 of columns 1 and 2 form the 2^2 design for two factors, lines 1–8 of columns 1–3 the 2^3 design, and lines 1–16 of columns 1–4 the 2^4 design for four factors.
The design is transformed into an experimental plan with the natural or experimental values of the factor variable at each coded level -1/+1. The mathematical model associated with the design consists of the main effects of each variable plus all the possible interaction effects: interactions between two variables, but also between three and four factors and, in fact, between as many as there are in the model. However, although two-factor interactions are important, three-factor interactions are normally far less so. Higher-order interactions are invariably ignored, and the values determined for them are attributed to the random variation of the experimental system.
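A two-level full factorial, and the half fraction derived from it, can be sketched in a few lines of Python (the product-column construction is the one assumed for the 2^(5-1) design of Table 3; the function names are my own):

```python
from itertools import product

def full_factorial(k):
    """All 2**k combinations of the coded levels -1/+1, in standard order
    (first factor varying fastest)."""
    return [list(run[::-1]) for run in product([-1, +1], repeat=k)]

design = full_factorial(4)          # the 2^4 design: 16 runs for 4 factors
assert len(design) == 16

# Half fraction for five factors, 2^(5-1): the fifth column is the product
# of the first four.
half_fraction = [run + [run[0] * run[1] * run[2] * run[3]] for run in design]

# Sixteen runs instead of 32, yet every column is still balanced:
assert len(half_fraction) == 16
assert all(sum(col) == 0 for col in zip(*half_fraction))
```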

Table 3 Some full and fractional factorial designs for two to five factors (a)

Number  X1  X2  X3  X4  X5  Response
1       -1  -1  -1  -1  +1  189
2       +1  -1  -1  -1  -1   56
3       -1  +1  -1  -1  -1   94
4       +1  +1  -1  -1  +1   80
5       -1  -1  +1  -1  -1  212
6       +1  -1  +1  -1  +1  212
7       -1  +1  +1  -1  +1   76
8       +1  +1  +1  -1  -1  125
9       -1  -1  -1  +1  -1  351
10      +1  -1  -1  +1  +1  534
11      -1  +1  -1  +1  +1  275
12      +1  +1  -1  +1  -1  219
13      -1  -1  +1  +1  +1  154
14      +1  -1  +1  +1  -1  752
15      -1  +1  +1  +1  -1  374
16      +1  +1  +1  +1  +1  478

(a) The response is particle size (µm) for the 2^(5-1) fractional factorial design. (From Ref.[8].)

Determining Active Factors from the Results of a Factorial Design

We take the four-factor model as an example. The complete synergistic mathematical model consists of the constant term; four main variables (b1x1, ..., b4x4); six interactions between two factors (b12x1x2, ...); four interactions between three factors (b123x1x2x3, ...); and one between four factors. The last five of these are not generally expected to be important. The model is thus:

y = b0 + b1x1 + ... + b12x1x2 + ... + b123x1x2x3 + ... + b1234x1x2x3x4 + e

The effects (coefficients) bi in the model are estimated, usually by multilinear regression. The values obtained are estimates because of the random experimental error (represented by e in the equation). The next step is to decide which of the 15 effects calculated are active or important.
There are a number of ways of doing this. If the experiments have been replicated, ANOVA will reveal which effects are statistically significant. Otherwise, we rely on the fact that most of the effects are probably small and distributed randomly about zero. Thus, we look for the effects with the largest absolute values that stand out from the others.[6] Making a normal probability plot of the distribution of their values is a widely used method.
The responses are usually treated separately; however, when a number of more or less correlated responses are being studied, appropriate combinations (principal components) may be analyzed instead of the original responses.[9]
Once the important effects have been identified, a simplified model can be written. If an interaction term has been identified, the corresponding main effects should also be included in the model, even if they are not all found active. Thus, if the interaction between the factors X1 and X2 and the main effect of the factor X1 are active, b2x2 should be included in the model as well as b1x1 and b12x1x2.

Two-Level Fractional Factorial Designs

The number of experiments needed to study five or more factors in a full factorial design is large, and to determine the main effects and their interactions, a fraction of the full design is often sufficient. These are the 2^(k-r) fractional factorial designs, where r = 1, 2, ... for the half, quarter, etc. fractions. An example of a half-factorial design for five factors (2^(5-1)) is given in Table 3 (the entire table). Note that the first four columns are the same as the four-factor full factorial design, and the column for the fifth factor is constructed by multiplying the first four columns together. Methods for constructing such designs and their limitations are described in many textbooks.[5-7]
Evidently, for the 2^(5-1) design, the 16 triple and higher interactions are not determined. In fact, they are confounded with the calculated effects. Thus, the estimate of the interaction between factors one and two includes the triple interaction between the other three factors. Because the latter is assumed negligible, this does not usually matter.
Menon et al. studied the formation of pellets by fluid-bed granulation using this design.[8] The five factors investigated were the binder concentration (X1); the method of introducing it, dry or in solution (X2); the atomization pressure (X3); the spray rate (X4); and the inlet temperature (X5). Particle sizes of the resulting pellets are shown in Table 3.
The coefficients of the model were calculated by linear regression (the logarithm of the particle size was used here) and then plotted as a cumulative distribution on a normal plot (Fig. 2). The important coefficients are those that are strongly positive or negative, for example, that of the spray rate (b4) and that of the interaction between atomization pressure and inlet temperature (b35). Others, not identified on the diagram, are not considered significant and could well be representative mainly of experimental error. The equation can thus be simplified to include only the important terms. However, if interactions are included, their main

effects should be included also, even if they are small. Here, we have:

y = b0 + b1x1 + b2x2 + b3x3 + b4x4 + b5x5 + b13x1x3 + b14x1x4 + b15x1x5 + b35x3x5

Fig. 2 Calculated effects from a two-level factorial design, plotted as normal % probability against effect on log10(particle size) (A: Povidone; B: Binder; C: Atomisation; D: Spray rate; E: Temperature). Those to the left and right of the line are considered active. (From Ref.[8].)

Information that Can Be Obtained

The significant main effects are identified and also quantified. Thus, increasing the spray rate over the range studied will give an increase in log10(particle size) of twice 0.24, representing a more than threefold increase. However, it can be seen that there is an interaction with the binder concentration; that is, the effect of spray rate depends on the amount of binder in the formulation. The effects of increasing spray rate are shown in Fig. 3 for both high and low levels of binder; the effect of spray rate is much greater at high levels of binder.

Fig. 3 Calculated effects from a two-level factorial design. Interaction diagram for spray rate and % povidone on particle size (logarithmic scale): log10(particle size) against % PVP (coded values), at high and low spray rate.

However, the effect of binder also interacts with two other factors, the atomization pressure and the inlet temperature. Thus, the individual variables cannot be considered separately.
Note also that there is a great deal of information often hidden in large designs (16 or more experiments); in particular, indications on factors affecting the robustness of a process may sometimes be extracted (see the last section).

Use of Center Points

In both screening and factor-influence studies in which the factors are quantitative, it is tempting to interpolate between the upper and lower limits. This is useful if only to find a more restricted zone for further study. However, in the case of a screening study, the limits studied are often so wide that it would be most unlikely for the estimated model to be accurate enough for prediction, and there is also likely to be curvature of the response surface over the experimental domain. Such attempts are less risky for the more detailed factorial studies, but even then, they should be used with caution.
Adding center points (experiments at the center of the domain, coded co-ordinates 0, 0, ..., 0) is useful in factorial and screening experiments, even though they do not enter into the calculation of the model equation, because:

1. They are often a priori at or near the most interesting conditions;
2. They allow identification of curvature in the responses (by comparing calculated with measured responses);
3. If they are replicated, the experimental reproducibility may be assessed; and
4. They may allow extension of the experiment at a subsequent stage to a central composite design for modeling of response surfaces (shown in the following sections).

EXPERIMENTAL DESIGNS FOR PROCESS OPTIMIZATION (INDEPENDENT VARIABLES)

In this section, we look at methods of obtaining a mathematical model that can be used for quantitative predictions of a response over the whole of the experimental domain. If the model depends on two factors, the response may be considered a topographical surface, drawn as contours or in 3D (Fig. 5). For more factors, we can visualize the surface by taking "slices" at constant values of all but two factors. These methods allow both process and formulation optimization.

Mathematical Models

The design used is a function of the model proposed. Thus, if it is expected that the important responses vary relatively little over the domain, a first-order polynomial will be selected. This will also be the case if the experimenter wishes to perform rather few experiments at first to check initial assumptions. He may then change to a second-order (quadratic) polynomial model. Second-order polynomials are those most commonly used for response surface modeling and process optimization for up to five variables.
Examples of polynomial models are a first-order model for five factors:

y = b0 + b1x1 + b2x2 + b3x3 + b4x4 + b5x5 + e

and a second-order model for two factors:

y = b0 + b1x1 + b2x2 + b12x1x2 + b11x1^2 + b22x2^2 + e

The coefficients in the models are estimated by multilinear least-squares regression of the data.
Third-order models are very rarely used in the case of process studies and, in any case, third-order terms are only added for those variables where they can be shown to be necessary (i.e., by augmentation of a second-order model and the corresponding design). This does not mean that second-order designs are always sufficient, and other methods of constructing response surfaces may sometimes be useful.

Statistical Experimental Designs for First-Order Models

The design must enable estimation of the first-order effects, preferably free from interference by the interactions between the factors. It should also allow testing for the fit of the model and, in particular, for the existence of curvature of the response surface (center points). Two-level factorial designs may be used for this (shown earlier).
Important points to note when using a first-order model, with or without interactions, are that:

1. Maximum and minimum values of responses are of necessity predicted at the edge of the experimental domain;
2. The first-order model should normally be used only in the absence of curvature of the response surface. If the experimental values of the center points are different from the calculated values (i.e., there is lack of fit), then the response surface is curved and a second-order design and model should be used; and
3. The experimenter should test for interaction terms between two factors in the model. If interactions seem to be important, he should make sure that they are properly identified.

Statistical Experimental Designs for Second-Order Models

The central composite design (Box–Wilson design)

This is the design most often used for response surfaces. It is a combination of a factorial design with an axial design,[3,10] with experiments at a distance of ±α along each axis (thus the name).

Fig. 4 Central composite design for three factors. The factorial points are shaded, the axial points unshaded, and the center point(s) filled.

It requires a relatively large

number of experimental runs, which can be a disadvantage if resources are limited. However, it can be carried out in two stages: the factorial design first, then the axial design if the results are satisfactory.
If we wish to study the system by varying the parameters around a point of interest, the domain is a sphere, and the coordinates of the axial experiments lie outside those of the factorial ones. α is chosen to give the best statistical properties (e.g., constant prediction precision) and lies between √2 and approximately 2.4. The design for three factors, where α is set at 1.682, is shown in Fig. 4 and Table 4.
Center-point experiments must be done as part of both stages. Another advantage is that each factor is at five levels, thus allowing testing for lack of fit and for the possible need for cubic terms in the model.
Fig. 5 shows response surfaces calculated from the data of Senderak, Bonsignore, and Mungan,[11] obtained using such a design, at a constant value of the third factor.

Other standard designs

The central composite design is the most often used; however, there are others whose particular properties make them particularly useful. One of these is the Doehlert design, which is part of a continuous hexagonal network.[12] It requires slightly fewer experiments than does the central composite design but cannot be set up by augmenting a factorial design.
The design for three factors is shown in Table 5. It can be seen that the hexagonal design for two factors is the first seven rows and the first two columns. Thus, it is possible to add a factor to a design. Another advantage is that, because it is part of a continuous network, it allows the experimental domain to be shifted in any direction by adding experiments at one side of the domain and eliminating them at the other (Fig. 6). Vojnovic et al.[13] give an example of its use in granulation.
Hybrid designs are saturated or almost saturated designs; that is, they have only enough experimental runs to calculate the coefficients of the quadratic model (10 runs for 3 factors, 16 runs for 4 factors, and 28 runs for 6 factors). They are useful when the responses are not expected to vary enormously but where the quadratic model is esteemed necessary and resources (in possible numbers of experiments) are low.[6,14]
If the experimental region is defined by maximum and minimum values of each factor, then the domain is "cubic." The central composite design can be applied to such a situation, the axial points then being set at ±1, the coded values corresponding to the minimum and maximum allowed values. Other designs for the cubic domain are reviewed in Ref.[6].

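The axial spacing described above can be checked in a few lines. This sketch (illustrative, not from the article) computes the rotatable value α = (2^k)^(1/4) for a central composite design built on a full 2^k factorial, reproducing the 1.682 quoted for three factors and the roughly 1.41–2.38 range for two to five factors:

```python
# Rotatable axial spacing for a central composite design on a full 2^k
# factorial: alpha = (number of factorial runs) ** (1/4).
for k in range(2, 6):
    alpha = (2 ** k) ** 0.25
    print(f"{k} factors: alpha = {alpha:.3f}")
```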
Fig. 5 Contour diagrams of (A) turbidity and (B) cloud point as a function of % propylene glycol and sucrose invert medium (slice taken at constant value of 4.3% polysorbate 80). (From Ref.[11].)
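Contour plots such as those in Fig. 5 are produced by evaluating the fitted second-order polynomial over a grid of factor levels. The sketch below uses hypothetical coefficients for two coded factors, not the actual fit from Ref.[11]:

```python
# Evaluate a second-order (quadratic) response surface model over a grid of
# coded factor levels. The coefficient values are hypothetical placeholders.
b0, b1, b2, b11, b22, b12 = 55.0, 2.0, -1.5, 0.8, 0.6, -0.4

def y_hat(x1, x2):
    # Second-order model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    return b0 + b1 * x1 + b2 * x2 + b11 * x1 ** 2 + b22 * x2 ** 2 + b12 * x1 * x2

grid = [-1.0, -0.5, 0.0, 0.5, 1.0]   # coded levels
surface = {(x1, x2): y_hat(x1, x2) for x1 in grid for x2 in grid}
# A contouring routine would then trace level curves through `surface`.
```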
2460 Optimization Methods

Table 4 A central composite design for three factors

Number    X1        X2        X3
1         −1        −1        −1        Factorial design 2^3
2         +1        −1        −1
3         −1        +1        −1
4         +1        +1        −1
5         −1        −1        +1
6         +1        −1        +1
7         −1        +1        +1
8         +1        +1        +1
9         −1.682    0         0         Axial design
10        +1.682    0         0
11        0         −1.682    0
12        0         +1.682    0
13        0         0         −1.682
14        0         0         +1.682
15        0         0         0         Center points^a (number of replicates flexible)
16        0         0         0
17        0         0         0

^a To be included with both axial and factorial designs if carried out separately.
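The design in Table 4 can be generated programmatically. A minimal sketch (run counts and level check only, no statistics):

```python
# Build the central composite design of Table 4: a 2^3 factorial, 2k axial
# points at +/- alpha, and replicated center points.
from itertools import product

k, alpha, n_center = 3, 1.682, 3
factorial = list(product([-1, 1], repeat=k))
axial = [tuple(a if j == i else 0 for j in range(k))
         for i in range(k) for a in (-alpha, alpha)]
center = [(0,) * k] * n_center
design = factorial + axial + center

print(len(design))             # 8 + 6 + 3 = 17 runs
levels_x1 = sorted({row[0] for row in design})
print(levels_x1)               # each factor takes five levels
```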
Mixed and Irregular Domains—D-Optimal Designs

If the experimental domain is cubic or spherical, the standard experimental designs can normally be used. However, the domain may be irregular in shape: certain combinations of variable values may be excluded a priori for technical reasons or may even have been tried and failed to give a result, or certain factors may be forced to take fixed or discrete (but numerical) values, or may even be qualitative in nature.

No classic experimental designs exist for such circumstances, and a purely empirical approach is required: 1) to postulate a mathematical model that is expected to describe the response and 2) to then select from among the many possible experiments a design that will determine the model coefficients with maximum efficiency.

There are various ways of obtaining such a design, by far the most common being based on the exchange algorithm of Fedorov. There are also a number of criteria for describing how good the design is, the

Table 5 Doehlert design for three factors

k     X1      X2       X3
1     0       0        0
2     1       0        0
3     0.5     0.866    0
4     −0.5    0.866    0
5     0.5     −0.866   0
6     −0.5    −0.866   0
7     −1      0        0
8     0.5     0.289    0.816
9     −0.5    0.289    0.816
10    0       −0.577   0.816
11    0.5     −0.289   −0.816
12    −0.5    −0.289   −0.816
13    0       0.577    −0.816

Fig. 6 Doehlert design in three dimensions (factors) showing extension to a new experimental domain.
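The Doehlert coordinates of Table 5 (with signs restored from the standard uniform-shell construction, since the printed minus signs were lost in this copy) can be checked numerically: apart from the center point, every run lies on a shell of radius 1, which is what allows the network to be translated as in Fig. 6.

```python
# Three-factor Doehlert design: one center point plus 12 shell points.
# Coordinates follow the standard uniform-shell construction.
from math import isclose, sqrt

doehlert = [
    (0, 0, 0),
    (1, 0, 0), (0.5, 0.866, 0), (-0.5, 0.866, 0),
    (0.5, -0.866, 0), (-0.5, -0.866, 0), (-1, 0, 0),
    (0.5, 0.289, 0.816), (-0.5, 0.289, 0.816), (0, -0.577, 0.816),
    (0.5, -0.289, -0.816), (-0.5, -0.289, -0.816), (0, 0.577, -0.816),
]
for point in doehlert[1:]:
    radius = sqrt(sum(c * c for c in point))
    assert isclose(radius, 1.0, abs_tol=0.01)   # on the unit shell (rounded coords)
```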

D-optimal criterion being the most usual, based on the optimization of the overall precision of estimation of the coefficients of the model.[6,15,16] This method and type of design is extremely flexible because:

1. It allows experiments within irregular experimental domains;
2. Previous experiments carried out within the experimental domain may be taken into account;
3. Classical designs in which experiments have failed to give a result may be repaired by redefining the domain and finding the best experiments (according to the D-optimal criterion) to replace the experiment(s) which failed;
4. The models may be polynomials with missing coefficients, or even non-polynomials;
5. The experiments may be carried out in two or more stages, with models of increasing complexity;
6. Further experiments may be added to a D-optimal design to validate the model (lack of fit); and
7. They can be used for mixture models with constraints (see below).

In conclusion, a wide variety of experimental designs is available, allowing the design to be selected according to the problem in question, rather than adapting the experiment to the design.

EXPERIMENTAL DESIGNS FOR FORMULATION OPTIMIZATION (MIXTURE DESIGNS)

Formulations almost invariably consist of mixtures of a drug substance and excipients. Their properties usually depend not so much on the quantity of each substance present as on their proportions. The total comes to 100%, so the number of independent variables is one less than the number of components. This has the effect that the models and the designs have particular properties, and the designs described above (screening, factor studies, and response surfaces) normally cannot be used. The entire topic of mixture designs is fully described by Cornell.[17]

Mathematical Models for Mixtures

Because there is one less independent variable than the number of components, the polynomials take a particular form. For example, for three components, where the response y has a first-order dependence on the fractions x1, x2, x3, because x1 + x2 + x3 = 1,

y = a0 + a1x1 + a2x2 + a3x3 + e

becomes

y = b1x1 + b2x2 + b3x3 + e

The variables cannot be varied independently. If there are no upper and lower restraints on the proportions of the components, the domain for three factors can be described as an equilateral triangle whose apices represent the pure components. A four-component mixture is described by a regular tetrahedron. For five components, the equivalent 4D figure must be imagined.

Just as the first-order mixture model has a different form from that for independent variables, so does the second-order model:

y = b1x1 + b2x2 + b3x3 + b12x1x2 + b13x1x3 + b23x2x3 + e

The special cubic model describes a certain third-order curvature in the response surface:

y = b1x1 + b2x2 + b3x3 + b12x1x2 + b13x1x3 + b23x2x3 + b123x1x2x3 + e

Mixture Designs and the Simplex Experimental Domain

The equilateral triangle and regular tetrahedron described above, as the domain of a mixture where all possible compositions of the components are allowed for, are regular simplexes. (In the remainder of the section, they are referred to as simplexes.) Such circumstances in which there are no composition restraints are rare in formulation. However, if each component is present at a minimum level, and no other constraints are imposed, then the domain is also a simplex. Designs in this case, primarily attributed to Scheffé,[18] are derived very simply. That shown in Fig. 7 for three components is suitable for first-, second-, and partial third-order models. The latter is the central composite design and is quite commonly used. Test points for checking model fit are also shown.

Constrained Systems and Pseudocomponents

Simplex designs are quite rarely used because such circumstances in which there are no composition
restraints are rare in formulation. However, if each component is present at a minimum level, and no other constraints are imposed, then the domain is also a simplex. An example could be of the solubility of a drug being tested in ternary or quaternary mixtures of pharmaceutically acceptable solvents. The single constraint might be that a minimum percentage of water is required. In any case, the experimental domain would be a regular simplex, and standard designs may be used. In the case of solid dosage forms, simplex domains are rarer still. A possible example might be a study of the optimum composition of a diluent in a tablet formulation, the proportions of the active substance and other excipients being held constant. The diluent might consist of a mixture of lactose, microcrystalline cellulose, and starch, and its composition might then be adjusted to obtain optimum tableting properties as well as rapid disintegration and dissolution (for rapid action of the drug after the patient swallows the tablet). Again, standard experimental designs such as the simplex–centroid design may be used.

Fig. 7 Scheffé central composite design for three factors. Open squares are test points.

Constrained Systems and Non-Simplex Designs

Limits in the amounts of excipients present normally lead to the domain taking on an irregular shape. Each component must be present within a given concentration range to fulfill its function. For example, lactose or cellulose may make up most of the amount of a tablet or capsule, whereas magnesium stearate is limited to between 0.5 and 2%. In particular, when there are both upper and lower limits, the space is almost invariably non-simplex.

Mixture models (such as those of Scheffé) are still useful, especially when there are three or more such excipients with fairly large ranges of variation. In solid formulations, this is often the case for diluents (or fillers) and also for the polymers or waxes incorporated into controlled-release tablets to form a matrix through which the drug diffuses slowly out when immersed in aqueous fluid, i.e., in the gastrointestinal tract.

The experimental designs for non-simplex experimental regions are D-optimal for the selected model, obtained by an exchange algorithm.[19]

Thus, we have the example of the optimization of a sustained-release tablet for which the release rate of a highly water-soluble drug was limited by its diffusion through a matrix. The matrix-forming substance is a cellulose derivative swelling in water (hydroxypropylmethylcellulose), but the diluents microcrystalline cellulose, lactose, and calcium phosphate also have a role. These four components were varied, as well as the percentage of drug substance (to have two doses at constant tablet mass), and the experimental domain defined. A D-optimal design was then obtained for a second-order mixture model (using an exchange algorithm), the experiments performed, and the results analyzed by multilinear regression to give response surfaces as contour plots (Fig. 8). The formulation could thus be optimized to give the required drug release profile.[6]

It is interesting to note that the work was done in two stages. Initially, experiments were chosen for a first-order mathematical model from the projected second-order design. These were carried out first, to check that there was no problem and that the experimental domain was adequate, before doing the remaining experiments for a predictive model that could be used for optimizing.

Conditions for Independent Variable Designs

If one of the components (for example, a diluent or solvent) is in considerable excess, and the limits for all other components are narrow in comparison, then it can be eliminated from the analysis because its concentration changes little. The concentrations of the remaining components can then be treated as independent variables, and the methods described previously can be applied without using the special considerations for mixtures.

Fig. 8 D-optimal mixture design. (A) Definition of the design space. (B) Contour plot of mean dissolution time at 25% polymer content. (From Ref.[6].)
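The D-optimal criterion behind designs like that of Fig. 8 can be illustrated with a toy example: among a set of candidate experiments, choose the subset that maximizes det(X'X) for the assumed model. The brute-force search below stands in for the Fedorov exchange algorithm mentioned earlier, and the model (first-order in two independent factors) is deliberately simpler than the second-order mixture model of the example:

```python
# Toy D-optimal selection: pick the 4-run design maximizing det(X'X) for the
# first-order model y = b0 + b1*x1 + b2*x2.
from itertools import combinations

def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def xtx_det(points):
    rows = [(1.0, x1, x2) for x1, x2 in points]   # model matrix rows (1, x1, x2)
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    return det3(xtx)

candidates = [(-1, -1), (1, -1), (-1, 1), (1, 1), (0, 0), (0.5, 0.5)]
best = max(combinations(candidates, 4), key=xtx_det)
# The search recovers the four factorial corners, the most informative subset.
```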

OPTIMIZATION METHODS USING RESPONSE SURFACE METHODOLOGY

Graphical Methods

It is usually relatively simple to find the optimum conditions for a single response that does not depend on more than four factors once the coefficients of the model equations have been estimated, provided, of course, that the model is correct. Real problems are usually more complex. In the case of pellet formation, it is not only the yield of pellets that is important but also their shape (how near to spherical), friability, smoothness, and ease of production. The optimum is a combination of all these.

One possible approach is to select the most important response, the one that should be optimized, such as the yield of pellets. For the remaining responses, we can choose acceptable upper and lower limits. Response surfaces are plotted with only these limits, with unacceptable values shaded. The unshaded area is the acceptable zone. Within that acceptable zone, we may either select the center for maximum ruggedness of formulation or process or look for a maximum (or minimum or target value) of the key response.

Graphical Optimization of Two Opposing Responses

When there are only two independent factors (including the case of three mixture components), the responses may be plotted on a single graph. Graphics programs that allow plotting of upper and lower allowed limits of the responses, with portions of the diagram where the responses are outside the limits shaded, are useful because they allow an acceptable zone to be identified very rapidly. An example of graphical analysis for formulation of an oral solution[11] is shown in Fig. 9. The objective was to reduce the turbidity as much as possible and to obtain a solution with a cloud point less than 70°C. A level of invert sucrose as high as possible was preferred (in spite of its deleterious effect on the cloud point). Slices were taken in the propylene glycol, sucrose plane (X2, X3) at different levels of polysorbate, that at 4.3% being shown in Fig. 9. This can be compared with the response surface in Fig. 5. An optimum compromise formulation is found at approximately 58% sucrose medium, 4.3% polysorbate 80, and 23% propylene glycol.

The method becomes difficult with four independent continuous factors, and for five or more variables, the method is totally impracticable despite its

simplicity. The number of ''slices'' to be examined is simply too high—up to 125 diagrams to be displayed or plotted. Under such circumstances, the desirability method must be used.

Fig. 9 Superposition of contour plots for turbidity <3 ppm and cloud point <70°C to determine an optimum region (''slice'' at 4.3% polysorbate 80). Compare with Fig. 5. (From Ref.[11].)

Desirability

Derringer and Suich[6,20] described a way of overcoming the difficulty of multiple, sometimes opposing, responses. Each response is associated with its own partial desirability function. If the value of the response is optimum, its desirability equals 1, and if it is totally unacceptable, its value is zero. Thus, the desirability for each response can be calculated at a given point in the experimental domain. An overall desirability function can be calculated by multiplying all of the r partial functions together and taking the rth root. Evidently, if the desirability for any response is zero at a point, the overall desirability there is also zero. The optimum is the point with the highest value for the desirability. The experimenter should study the contour plot of the desirability surface around the optimum and combine this with contour plots of the most important responses. A large area or volume of high desirability will indicate a robust formulation or set of processing conditions.

A number of different forms, linear, convex, concave, unilateral, bilateral, are available for the dependence of the partial desirability on the value of the response. Weighting of responses is also possible. The method requires appropriate computer software, but it is a very powerful method of optimization, and with practice, it is relatively easy. It is especially appropriate for four or more factors. McLeod et al. give an example.[21]

Limitations of Response Surface Methodology

The approach of using a mathematical model to map responses predictively and then to use these models to optimize is limited to cases in which the relatively simple, normally quadratic, model describes the phenomenon in the optimum region with sufficient accuracy. When this is not the case, one possibility is to reduce the size of the domain. Another is to use a more complex model or a non-polynomial model better suited to the phenomenon in question. The D-optimal designs and exchange algorithms are useful here, as in all cases of change of experimental zone or mathematical model. In any case, response surface methodology in optimization is only applicable to continuous functions.

Lately, there has been a great deal of interest in the use of artificial neural networks in many fields, including that of prediction and expert systems, and they are of interest here for the description of response surfaces that have a non-linear relation to the factor variables.[22,23] In such cases, the response surface may well fit the data better than that calculated from the model estimated by least-squares regression.[24] However, the choice of experiments is still important for the artificial neural network approach, and it is best selected in a regular pattern. The central composite design, in which each factor takes five levels, is generally a good compromise.[24] Great care must be taken not to ''overfit,'' and, in general, more experiments are required than for the classic RSM approach.

SEARCHING FOR A NEW DOMAIN

The Steepest Ascent Method and Optimum Path Methods

Screening and factor studies will sometimes indicate whether, and if so, where we should search for an optimum within the domain being studied. However, if the optimum (we are considering a single ''key'' variable here) lies outside the present experiment, then the steepest ascent method comes into its own. The direction

of steepest increase of the response in terms of the coded variables is determined, and then experiments are carried out along this line. If a maximum or minimum value (according to the target) is found along this line, the point at which it is found could be the center of a new experimental design for optimization.[7] The optimum path method[6,13] is similar and is used for extrapolating from a second-order design along a curved trajectory.

Sequential Simplex Optimization

Introduction

Unlike the other optimization methods described here, the sequential simplex method for optimization neither assumes nor determines a mathematical model for the phenomena studied.

A simplex is a convex geometric figure of k + 1 non-planar vertices in k-dimensional space, the number of dimensions corresponding to the number of independent factors. Thus, for two factors, it is a triangle, and for three factors, it is a tetrahedron. The method is sequential because the experiments are analyzed one by one as each is carried out. The basic method used a constant step size,[25] allowing the region of experimentation to move at a constant rate toward the optimum. However, a modification that allows the simplex to expand and contract, proposed by Nelder and Mead[26] in 1965, is more generally used. It has been reviewed recently by Walters.[27]

Optimization by the extended simplex method

Assume that we wish to optimize a response depending on three to five factors without assuming any model for the dependence other than the domain being continuous. We choose an initial domain and place a regular simplex in it. The experiments for the initial simplex are then carried out and the response measured. In the basic simplex method, an experiment is done outside the simplex in a direction directly opposite to the ''worst'' point of the simplex. The worst point is discarded, and a new simplex is obtained, the process being repeated. The simplex therefore moves away from the ''poor'' regions toward the optimum. In the extended simplex method, if the optimum is outside the initial experimental domain, we may leave it rapidly while expanding the simplex, for a region with an improved response. As the simplex approaches the optimum, it is contracted rapidly.

Of the experiments of a given simplex, let W, N, and B be the ''worst'' (W), ''next worst'' (N), and ''best'' (B) points of the initial simplex. A new experiment R is carried out opposite point W to give a new simplex reflecting the original one. Depending on the value of the response at R relative to that at W, N, and B, the step size may be expanded to arrive quickly at the region of the optimum, and then be contracted around the optimum. The various possibilities are shown in Fig. 10 for two factors; ''R > W'' means that point R is better than point W, etc.

Fig. 10 Summary of the expanded simplex method of Nelder and Mead.

R replaces W if: N ≤ R ≤ B (Reflection)
             or: R > B and E ≤ B
E replaces W if: R > B and E > B (Expansion)
CR replaces W if: W < R ≤ N (Contraction, exterior)
CW replaces W if: W > R (Contraction, interior)

At the end of the sequential simplex, if more detailed information is needed, the experimenter may carry out a response surface study around the supposed optimum.

DESIGNING ROBUST PROCESSES AND FORMULATIONS

Until now, optimization and improvement have been taken as being equal, or closer, to what is considered most desirable with respect to the mean responses. However, it is also necessary that all units of all batches manufactured fall within the specifications. Apart from variation in the measurement method, all variation is attributed to the manufacturing process and the manufacturing and storage environment.

Taking the traditional quality control approach, any product that is within the specifications will pass and is considered equally good. However, one might still normally consider that the nearer the response to

the target, the better the product. Therefore, the key is to choose a formulation and/or condition that gives a product not only as close as possible to the target, but with as little variability as possible.

The basic concepts and seminal work in this field are from Taguchi,[28] who stated that any product whose performance characteristics are different from the target values suffers a loss in quality, which he quantified by a parabolic function. He then classified factors as: 1) control factors, which can be controlled under normal operating conditions, and 2) noise factors, which are difficult, impossible, or very expensive to control. The effects and interactions of control and noise factors could be measured by means of an experimental design, and then settings of the control factors would be determined that would minimize the effects of the noise factors.

One problem in such an approach, apart from the difficulty of controlling noise factors, is to know what they are. Examples of possible noise factors are the drug substance and excipient batches, the ambient temperature and humidity, the machine used, the exact granulation time, and the rate at which liquid is added.

The simplest approach in many cases would be to set up an experimental design in the control factors and repeat each experiment many times, hoping for enough natural variation in the noise factors to be able to find conditions to minimize variation. This requires a very large number of experiments, but it is sometimes the only possible way.

Taguchi's solution was to vary the noise factors artificially.[28] A design is set up in the control factors, another (factorial) design in the noise factors, and the two multiplied together. The effect of changes in the noise factors can thus be assessed at each point and the variability minimized. This method is preferred to the previous one, but nonetheless, the number of experiments required using Taguchi's orthogonal networks is extremely high. Now it is more usual to set up designs in which the number of experiments, although still high, is minimized[29–31] and to find regions where the response is equal to the target value and is at the same time highly insensitive to the noise factors. The design must allow interactions among noise factors, and not only the control factors themselves, but preferably all the terms in the control factor model.

It should be noted that there is a great deal of information ''hidden'' in large factorial designs (n = 16). When analysis shows that only a few factors are significant, the residuals (differences between calculated and measured values) may be analyzed.[32] A small spread of residuals under certain conditions as opposed to others may indicate better reproducibility of the process or formulation under these conditions. An example of its pharmaceutical use is presented by Menon et al.[8]

REFERENCES

1. Fisher, R.A. The Design of Experiments; Oliver and Boyd: London, 1926.
2. Plackett, R.L.; Burman, J.P. The design of optimum multifactorial experiments. Biometrika 1946, 33, 305–325.
3. Box, G.E.P.; Wilson, K.B. On the experimental attainment of optimum conditions. J. Royal Stat. Soc. Ser. B 1951, 13, 1–45.
4. Myers, R.H. Response surface methodology—current status and future directions. J. Qual. Technol. 1999, 31 (1), 30–44.
5. Montgomery, D.C. Design and Analysis of Experiments, 2nd Ed.; Wiley: New York, 1984.
6. Lewis, G.A.; Mathieu, D.; Phan-Tan-Luu, R. Pharmaceutical Experimental Design; Marcel Dekker, Inc.: New York, 1999.
7. Box, G.E.P.; Hunter, W.G.; Hunter, J.S. Statistics for Experimenters; Wiley: New York, 1978.
8. Menon, A.; Dhodi, N.; Mandella, W.; Chakrabarti, S. Identifying fluid-bed parameters affecting product variability. Int. J. Pharm. 1996, 140 (2), 207–218.
9. Carlson, R.; Nordahl, A.; Barth, T.; Myklebust, R. An approach to evaluating screening experiments when several responses are measured. Chemom. Intell. Lab. Syst. 1992, 12, 237–255.
10. Box, G.E.P.; Draper, N.R. Empirical Model-Building and Response Surface Analysis; Wiley: New York, 1987.
11. Senderak, E.; Bonsignore, H.; Mungan, D. Response surface methodology as an approach to the optimization of an oral solution. Drug Dev. Ind. Pharm. 1993, 19, 405–424.
12. Doehlert, D.H. Uniform shell designs. Appl. Stat. 1970, 19, 231–239.
13. Vojnovic, D.; Rupena, P.; Moneghini, M.; Rubessa, F.; Coslovich, S.; Phan-Tan-Luu, R.; Sergent, M. Experimental research methodology applied to wet pelletization in a high-shear mixer. Part 1. S.T.P. Pharma Sci. 1993, 3, 130–135.
14. Roquemore, K.G. Hybrid designs for quadratic response surfaces. Technometrics 1976, 18, 419–424.
15. de Aguiar, P.F.; Bourguignon, B.; Khots, M.S.; Massart, D.L.; Phan-Tan-Luu, R. D-optimal designs. Chemom. Intell. Lab. Syst. 1995, 30, 199–210.
16. Atkinson, A.C. The usefulness of optimum experimental designs. J. Royal Stat. Soc. Ser. B 1996, 58 (1), 58–76.
17. Cornell, J.A. Experiments with Mixtures, 2nd Ed.; Wiley: New York, 1990.
18. Scheffé, H. Experiments with mixtures. J. Royal Stat. Soc. Ser. B 1958, 20, 344–360.
19. Snee, R.D. Computer-aided design of experiments—some practical examples. J. Qual. Technol. 1985, 17 (4), 222–236.
20. Derringer, G.; Suich, R. Simultaneous optimization of several response variables. J. Qual. Technol. 1980, 12, 214–219.
21. McLeod, A.D.; Lam, F.C.; Gupta, P.K.; Hung, C.T. Optimized synthesis of polyglutaraldehyde nanoparticles using central composite design. J. Pharm. Sci. 1988, 77, 704–710.
22. Bourquin, J.; Schmidli, H.; van Hoogevest, P.; Leuenberger, H. Basic concepts of artificial neural networks (ANN) modeling in the application to pharmaceutical development. Pharm. Dev. Technol. 1997, 2 (2), 95–109.
23. Bourquin, J.; Schmidli, H.; van Hoogevest, P.; Leuenberger, H. Application of artificial neural networks (ANN) in the development of solid dosage forms. Pharm. Dev. Technol. 1997, 2 (2), 111–121.
24. Takahara, J.; Takayama, K.; Nagai, T. Multi-objective simultaneous optimization technique based on an artificial neural network in sustained release formulations. J. Control. Rel. 1997, 49 (1), 11–20.
25. Spendley, W.; Hext, G.R.; Himsworth, F.R. Sequential application of simplex designs in optimization and evolutionary operation. Technometrics 1962, 4, 441.

26. Nelder, J.A.; Mead, R. A simplex method for function minimization. Comput. J. 1965, 7, 308.
27. Walters, F. Sequential simplex optimization—an update. Anal. Lett. 1999, 32 (2), 193.
28. Taguchi, G. System of Experimental Design: Engineering Methods to Optimize Quality and Minimize Cost; UNIPUB/Kraus International: White Plains, 1987.
29. Nair, V. Taguchi's parameter design: a panel discussion. Technometrics 1992, 34, 127–161.
30. Shoemaker, A.C.; Kwok, L.; Wu, C.F.J. Economical experimentation methods for robust design. Technometrics 1991, 33 (4), 415–427.
31. Montgomery, D.C. Using fractional factorial designs for robust process development. Qual. Eng. 1990, 3, 193–205.
32. Box, G.E.P.; Meyer, R.D. Dispersion effects from fractional designs. Technometrics 1986, 28, 19–27.

BIBLIOGRAPHY

Anderson, V.L.; McLean, R.A. Design of Experiments: A Realistic Approach; Marcel Dekker, Inc.: New York, 1974.
Grove, D.M.; Davis, T.P. Engineering Quality and Experimental Design; Longman Scientific and Technical: Harlow, UK, 1992.
Haaland, P.D. Experimental Design in Biotechnology; Marcel Dekker, Inc.: New York, 1989.
Myers, R.H.; Montgomery, D.C. Response Surface Methodology; Wiley-Interscience: New York, 1995.
Schmidt, S.R.; Launsby, R.L. Understanding Industrial Designed Experiments, 3rd Ed.; Air Academic Press: Colorado Springs, 1993.
