Optimization Algorithm

Research Paper
Article history:
Received 6 October 2008
Accepted 16 February 2010

Keywords:
polymer injection
global optimization
parallel computation

Abstract

One of the major difficulties in applying optimization to reservoir engineering problems is that each function evaluation requires a complete simulation, which is computationally expensive. Moreover, some problems are known to be multimodal with several local minima. A common approach to tackle these problems is to construct cheap global approximation models of the responses, often called metamodels or surrogates. These are based on simulation results obtained for a limited number of designs using data fitting. The optimization algorithm is coupled to the cheap metamodel. In this study a two-stage approach is employed based on the efficient global optimization algorithm, EGO, due to Jones. First an initial sample of designs is obtained using Latin hypercube sampling. Parallel simulation runs for the initial sample are used to construct a Kriging metamodel. In the second stage the metamodel is used to guide the search for promising designs, which are added to the sample in order to update the model until a suitable termination criterion is fulfilled. The selection of designs that are adaptively added to the sample is based on the maximization of the expected improvement merit function, which balances the need for improving the value of the objective function with that of improving the quality of the metamodel prediction. In this study the original EGO algorithm is modified to exploit parallelism. The modified algorithm is applied to a polymer injection optimization problem. This eight-variable problem maximizes economic return by controlling the starting time and slug duration in each injector well. In the presented example a parametric study was conducted varying oil price. It is concluded that polymer flooding is feasible for oil prices above US$20.00/STB and gains increase with oil price.
© 2010 Elsevier B.V. All rights reserved.
doi:10.1016/j.petrol.2010.02.002
196 B. Horowitz et al. / Journal of Petroleum Science and Engineering 71 (2010) 195–204
balance the need of improving the value of the objective function with that of improving the quality of the prediction, so that one does not get trapped in a local minimum. Considering the computational cost of the simulation and the availability of parallel computing, it is highly desirable for the ISC to include in each iteration multiple promising designs whose simulation may be performed concurrently. In this study the ISC proposed by Jones et al. (1998), which uses the expected improvement merit function, is modified to exploit parallelism.

Initially the polymer injection problem is described and then the optimization formulation is given. The optimization strategy is described starting with the construction of the DACE surrogate model, followed by the original EGO algorithm and the proposed parallel ISC. Finally, an example application is presented to demonstrate the potential of the proposed methodology.

2. The polymer injection problem

Polymer flooding uses high molecular-weight polymer to improve waterflooding performance by increasing the viscosity of the injected water, thereby resulting in more favorable mobility ratios. It has been demonstrated to accelerate oil production while delaying water breakthrough, resulting in a higher recovery without affecting the residual oil saturation. The major aspects to take into account in numerical modeling are mobility control and polymer retention (Kaminsky et al., 2007). The main beneficial effect is the increase in water viscosity, which is a function of polymer concentration in water. Another important aspect of mobility control is the reduction in absolute permeability due to the mechanism of polymer retention in reservoir rock. In the model, this reduction is a function of the concentration of polymer retained by the rock. The polymer retention effects are due to two different mechanisms: adsorption by the rock surface, and blocking of smaller caliber pores by the polymer molecules, resulting in a porous volume that is inaccessible to the fluids. From the experimental point of view it is difficult to quantify these two mechanisms separately. Therefore the retention effects are modeled by a single non-linear adsorption isotherm. The mass conservation equations of the problem are those of the blackoil model, modified to include polymer mass transport. The polymer reduced porosity is a fraction of the original rock porosity modeled by the inaccessible pore volume constant, IPV: ϕ̄ = (1 − IPV)ϕ. The modified absolute permeability tensor is a fraction of the original tensor which depends on the residual resistance factor, RRF, and the ratio between the actual adsorbed polymer concentration and the maximum adsorptive capacity of the rock, AdMAX. A linear model is adopted for water viscosity, which is a function of polymer concentration in the water phase.

3. Problem definition

In this work the chemical choice and concentration, as well as injection water rates, are considered fixed. The design variables for each injector are the starting time and slug duration:

x2i−1 = starting time for well i;  x2i = slug duration for well i;  i = 1…niw   (1)

where niw = number of injector wells. In order to define the objective function, let:

Base Case:  COP0 = cumulative oil production;  CWI0 = cumulative water injection.   (2)

The Base Case is the reference where the water injection rates as well as well constraints are kept the same as for the optimization problem simulation, except that no polymer is injected. A simple objective function used in this study is given by:

RI(x) = (COP(x) − COP0)·op − (CWI(x) − CWI0)·wic − CPI(x)·pc   (3)

where: RI(x) = relative improvement of injection schedule; COP(x), CWI(x) = cumulative oil production and cumulative water injection from simulation; CPI(x) = cumulative polymer injection; op = oil price; wic = water injection cost; and pc = polymer cost. An alternative objective function taking into account discounted cash flow values is given by:

NPVRI(x) = Σ_{τ=0…T} Fτ(x) / (1 + d)^τ,  with Fτ(x) = (OPτ(x) − OPτ0)·op − (WIτ(x) − WIτ0)·wic − PIτ(x)·pc   (4)

where: Fτ(x) = cash flow; OPτ, WIτ, PIτ = oil production, water injection, and polymer injection in time interval τ; and d = discount rate.

The optimization problem can be formulated as:

Maximize OF(x)
subject to:  x2i−1 + x2i ≤ cp,  i = 1…niw;  x ≥ 0   (5)

where: OF(x) = objective function in use, RI(x) or NPVRI(x); and cp = concession period. Well constraints such as maximum/minimum bottom hole pressure (BHP) and maximum/minimum fluid rates are handled by the reservoir simulator and become hidden constraints for the optimizer. This may result in objective functions that are not continuously differentiable with respect to the design variables. This is not necessarily a problem for the proposed optimization solver, since it is not gradient-based and uses smooth surrogate data fitting models.

4. Optimization strategy

As mentioned above, the optimization strategy adopted in this study is based on the efficient global optimization algorithm, EGO (Jones et al., 1998), modified by a proposed parallel infill sampling criterion. The strategy is described in the following sections, starting with the construction of the DACE metamodels, followed by the original EGO infill sampling criterion. The EGO algorithm is then briefly described, and finally the motivation and implementation of the proposed parallel infill sampling criterion are presented.

4.1. DACE metamodels

Metamodel construction typically involves one of the following strategies: data fitting schemes (Giunta and Watson, 1998; Simpson et al., 2001; Keane and Nair, 2005), Taylor series expansions (Giunta and Watson, 1998; Giunta and Eldred, 2000), and reduced basis (Afonso and Patera, 2003). Data fit type surrogates typically involve interpolation or regression (polynomial) of a set of data generated from the high-fidelity model. Regression models present two major drawbacks for metamodel construction: (1) the difficulty of specifying the regression terms, as the functional form of the high-fidelity function is unknown a priori; and (2) the assumption of independent errors (they do not consider the correlation between the points).

Interpolation models commonly used as surrogates are based on techniques known as Kriging, a well-known stochastic process model in the field of statistics and geostatistics, which by the late eighties started to be used as an approximation technique for outputs obtained from deterministic computer simulations. Kriging models (Jones et al., 1998; Simpson et al., 2001; Gano and Renaud, 2004) differ from regression models in the sense that they in general give a global approximation to the response and can also capture oscillatory response trends. Moreover, the sample values are assumed to exhibit spatial correlation, with response values modeled via a Gaussian process around each sample location. After the unknowns are estimated, some validation checks need to be conducted to judge the quality of the generated substitute model to
be used. This work will focus on the Kriging type of approximation. The main aspects of this methodology are described next.

4.1.1. Design of experiments (DOE)

The first step in the construction of a data fitting based metamodel is to generate the sampling points. These are unique locations in the design space determined by a design of experiments (DOE) approach (Giunta and Watson, 1998; Keane and Nair, 2005). At these locations the response values of the high-fidelity model are obtained to construct the approximated model (for instance, Kriging interpolation is very much influenced by the sampling locations). The selection of samples is a very important stage in building a reliable metamodel. Specifically, for high computational cost function evaluations one must seek an effective sampling plan, which means the minimum number of points that ensures a metamodel with good accuracy.

Commonly considered approaches are Monte Carlo, Quasi Monte Carlo, Latin hypercube sampling (LHS), orthogonal arrays, and centroidal Voronoi tessellation (CVT) (Giunta and Watson, 1998; Keane and Nair, 2005). In this work LHS will be used throughout. To obtain an LHS sampling, the range of each variable is divided into p "bins" (subintervals) of equal probability. For n design variables this partitioning yields a total of p^n subintervals in the parameter space. Next, p samples are randomly selected in the design domain under certain restrictions: each sample is randomly placed inside a domain partition, and for each unidimensional projection (xi) of the samples and partitions there will be one and only one sample in each partition (Giunta et al., 2003). The above explanation is easily perceived in Fig. 1, in which four samples are to be placed in a 2D (x1, x2) domain. For this particular case p = 4; consequently four partitions are placed on both the x1 and x2 axes. This gives a total of 16 bins, of which four will be chosen satisfying both restrictions. Bullets in Fig. 1 represent the four sample sites chosen randomly in each bin.

The randomness inherent in the procedure means that there is more than one possible arrangement of samples that meets the LHS criteria. As LHS is stochastic in nature, it is advisable to run the scheme several times and select the best sampling for usage. This can be done automatically following the suggestion given by Keane and Nair (2005), in which for each LHS a quantity Δ is obtained as:

Δ = Σ_{i=1…m−1} Σ_{j=i+1…m} 1 / √[(xj − xi)² + (yj − yi)²]   (6)

4.1.2. Kriging formulation

In this technique the following model is considered for the true unknown function:

f(x) = Σ_{j=1…k} βj Nj(x) + ε(x)   (7)

In the above equation the first part is a linear regression of the data with k regressors, in which βj (j = 1…k) are the unknowns and ε(x), the error, is responsible for creating a 'localized' deviation from the global model. As previously mentioned, polynomials are generally used to construct Nj(x). A traditional approach is called ordinary Kriging, in which a zero order (constant) function is employed.

In the Kriging process, a correlation between errors related to the distance between the corresponding points is assumed. Different forms of correlation functions may be employed (Giunta and Eldred, 2000). In this work the following Gaussian correlation form is assumed:

Corr[ε(x(i)), ε(x(j))] = exp[−d(x(i), x(j))]   (8)

in which d(x(i), x(j)) is a special normalized distance given by:

d(x(i), x(j)) = Σ_{k=1…n} θk |xk(i) − xk(j)|^pk   (θk ≥ 0, pk ∈ [1, 2])   (9)

where n is the total number of variables; θk are the unknown correlation parameters used to fit the model, which measure the importance or activity of xk; and pk relates to the smoothness of the function in terms of variable xk (Jones et al., 1998). In this work pk = 2 is considered. As already pointed out, this correlation form is so powerful that a simple constant term for the regression part of Eq. (7) (ordinary Kriging) can be used in substitution of the model previously presented, such as:

f(x(i)) = μ + ε(x(i)),  i = 1, …, m   (10)

where μ is the mean of the stochastic process and ε(x(i)) is Normal(0, σ²).

Fig. 1. Latin hypercube four samples example in a 2D (x1, x2) domain.
Fig. 2. Initial DACE model.
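The LHS scheme and the best-of-several selection by the spread measure of Eq. (6) can be sketched as follows. This is a sketch over the unit hypercube; the pairwise-distance measure is generalized here from the 2D form of Eq. (6) to n dimensions, and the function names are illustrative, not from the paper.

```python
import numpy as np

def lhs(p, n, rng):
    """One Latin hypercube sample: p points in n dimensions, each
    variable's range split into p equal-probability bins with exactly
    one point placed randomly inside each bin (unit hypercube)."""
    u = (np.arange(p)[:, None] + rng.random((p, n))) / p  # one point per bin
    for k in range(n):                                    # shuffle bins per axis
        u[:, k] = u[rng.permutation(p), k]
    return u

def best_lhs(p, n, trials=20, seed=0):
    """Run the stochastic LHS scheme several times and keep the plan
    with the smallest sum of inverse pairwise distances (the spread
    measure Delta of Eq. (6)); smaller Delta means better-spread points."""
    rng = np.random.default_rng(seed)
    best, best_delta = None, np.inf
    for _ in range(trials):
        x = lhs(p, n, rng)
        d = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(-1))
        delta = (1.0 / d[np.triu_indices(p, k=1)]).sum()
        if delta < best_delta:
            best, best_delta = x, delta
    return best
```

By construction, each one-dimensional projection of the returned plan has one and only one point per bin, which is the defining LHS restriction illustrated in Fig. 1.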
The maximization of the lf function (Eq. (11)) leads to (Jones et al., 1998):

μ̂ = l′R⁻¹f / l′R⁻¹l  and  σ̂² = (f − lμ̂)′R⁻¹(f − lμ̂) / m   (12)

Under the application of the above estimates into Eq. (11) and the maximization of the lf function, the remaining unknowns (the correlation parameters θ) are obtained. After that, the best linear unbiased predictor (BLUP) at any point of the design domain can be obtained as:

f̂(x) = μ̂ + rᵀR⁻¹(f − lμ̂)   (13)

in which f = (f(1), …, f(m))ᵀ is the true function at the samples; l = (1, …, 1)ᵀ; r is the m-vector of correlations between x and the sampled points; and R is an m × m correlation matrix with unity values along the diagonal, whose (i, j) entry is Corr[ε(x(i)), ε(x(j))] between any two of the m sampled data points x(i) and x(j).

The second stage of the metamodel based approach used in this study searches at each iteration for promising design points to enter the training sample set. The simplest approach would be to choose the minimizer of the predictor itself. This strategy would put too much emphasis on the local behavior of the objective function, which would force convergence to the local minimum closest to the predictor minimizer. This can be easily demonstrated in the example shown in Fig. 2, where the exact, true function is depicted with a dashed line while

Fig. 4. Functions after addition of first local maximum.
Fig. 5. Functions after addition of first promising point to temporary sample.
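The estimates of Eq. (12) and the BLUP of Eq. (13) can be sketched numerically as below. The correlation parameters θ are assumed already given (the paper obtains them by maximizing the likelihood function, a step omitted here), pk = 2 is used as in the paper, and a tiny nugget is added to R purely for numerical conditioning; the function name is illustrative.

```python
import numpy as np

def fit_ordinary_kriging(X, f, theta):
    """Ordinary Kriging with the Gaussian correlation of Eqs. (8)-(9)
    (pk = 2, theta given). Returns a predictor implementing the BLUP
    of Eq. (13): fhat(x) = mu + r' R^{-1} (f - l mu)."""
    X = np.asarray(X, float)
    f = np.asarray(f, float)
    theta = np.asarray(theta, float)
    m = len(f)
    # Correlation matrix R, Eq. (8) with distance of Eq. (9)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2 * theta).sum(-1)
    R = np.exp(-d2) + 1e-10 * np.eye(m)        # small nugget for conditioning
    Rinv = np.linalg.inv(R)
    ones = np.ones(m)
    mu = ones @ Rinv @ f / (ones @ Rinv @ ones)        # Eq. (12)
    sigma2 = (f - mu) @ Rinv @ (f - mu) / m            # Eq. (12)
    w = Rinv @ (f - mu)
    def predict(x):
        r = np.exp(-(((x - X) ** 2) * theta).sum(-1))  # correlations to samples
        return float(mu + r @ w)                       # Eq. (13)
    return predict, mu, sigma2
```

As expected of an interpolating model, the predictor reproduces the training values at the sampled points, which is why the standard error (and hence the expected improvement) vanishes there.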
the Kriging approximation based on five samples is plotted with a continuous line. There is a larger concentration of samples on the left, which can easily happen in the case of a larger number of variables, especially when some new designs are added to the sample. Also plotted on the bottom of the figure is the value of the RMSE. The standard error is zero at the sampled points and larger in the poorly sampled right half of the plot. If one chooses the predictor minimizer to enter the sample, it is clear that the process will eventually converge to the local minimum on the left, missing the global minimum of the objective function.

The example shows that one should not concentrate only on improving the value of the objective function but also on improving the accuracy of the predictor. In the latter case, designs with large prediction uncertainty should also be included in the training sample. In order to have a balanced approach one should also search for points where the probability of improvement is higher (Jones et al., 1998). Let fmin be the current best value of the objective function. The expected improvement merit function balances the need for improving on fmin with that of exploring regions that are undersampled, thus having greater uncertainty (Bichon et al., 2007).

A plot of the expected improvement function is shown on the bottom of Fig. 3. It can be seen that it has two local maxima: one in the region of expected function decrease and another in the poorly sampled area. The optimization of the expected improvement will add to the sample a point very close to the local minimum of the true objective function. But once this new point enters the training sample set, the resulting expected improvement function attains a maximum close to the global minimum, as can be seen from Fig. 4. Now it can be readily perceived that the addition of this new design to the sample will ensure final convergence to the true global minimum.

As can be observed from Figs. 3 and 4, the expected improvement function vanishes at sampled points and tends to be highly multimodal (Jones et al., 1998). The local maxima of the expected improvement function are at designs with the highest probability of function decrease or predictor uncertainty. Therefore those designs are promising points to enter the sample in order to better update the metamodel (Sobester et al., 2005).

Table 1. Reservoir characteristics.
Fig. 8. Polymer adsorption isotherm for a rock with permeability of 10 mD.

4.3. The EGO algorithm

The algorithm proposed by Jones et al. (1998) is schematically described below:

(1) Generate a small number of samples from the objective function:
    a) An initial sample of m = 10n, where m = initial sample size and n = number of variables, has been suggested (Jones et al., 1998). Others propose m = (n + 1)(n + 2)/2, the number of points necessary to fit a full quadratic polynomial (Bichon et al., 2007).
    b) Latin hypercube or another sampling technique is used to generate the initial sample (Giunta et al., 2003).
(2) Construct an ordinary Kriging based metamodel from the initial sample:
    a) Other techniques, such as radial basis functions, may also be used (Sobester et al., 2005).
    b) The metamodel is crossvalidated by leaving one observation out at a time and then predicting it based on the remaining sample points. If crossvalidation fails, a log or inverse transformation is tried.
(3) Find the design that maximizes the expected improvement function.
(4) If the maximum improvement is less than TOL · fmin, stop. The suggested value for TOL is 1%, but this value may have to be reviewed if a log or inverse transformation was applied (Jones et al., 1998).
(5) Add the new design to the sample and update the metamodel. Go to step [3].

As the computational cost of reservoir simulation is high and the availability of parallel computation has increased dramatically in the oil industry setting, it is highly desirable to include multiple designs in the ISC. In fact, as has been suggested earlier, local maxima of the expected improvement function are generally promising points whose addition to the sample may increase the efficiency of the algorithm.

4.4. Parallel infill sampling criterion

We propose below a parallel infill sampling criterion. It is motivated by the following observations:

(1) Local maxima of the expected improvement function are designs where either there is a high probability of objective function decrease or high predictor uncertainty. These are promising designs to improve the objective function as well as metamodel predictive capability.
(2) As one includes a local maximum of the expected improvement in the sample, the value of the updated expected improvement drastically reduces in the neighborhood of the point and another local maximum becomes dominant, as observed in Fig. 4.

Let np be the number of available processors and nd ≤ np be the number of promising designs to enter the sample. The proposed change to step [3] of the EGO algorithm is given below:

Copy current sample set to temporary working sample set.
For i = 1…nd
    Maximize the expected improvement function, obtaining x*i.
    Append the pair (x*i, f̂(x*i)) to the temporary working sample.
    Temporarily update the Kriging metamodel.
Next i.

The following remarks further detail the proposed implementation:

(1) The DIRECT algorithm (Finkel, 2003) is used to optimize the expected improvement function. It is a derivative-free global optimizer that reaches a solution by selecting and subdividing, at each iteration, the hyper-cubes that are most likely to contain the global optimum.
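The modified step [3] loop above can be sketched as follows. This is a sketch, not the paper's implementation: the expected improvement expression is the standard one from Jones et al. (1998), written here for minimization; `predict(x)` (returning the Kriging prediction and its standard error) and `update(X, f)` (refitting the metamodel on the working sample) are assumed given; and a simple random-candidate search stands in for the DIRECT optimizer used in the paper.

```python
import math
import numpy as np

def expected_improvement(fhat, s, fmin):
    """EI for minimization (Jones et al., 1998): balances the probable
    decrease (fmin - fhat) against the predictor uncertainty s."""
    if s <= 0.0:
        return 0.0                      # EI vanishes at sampled points
    z = (fmin - fhat) / s
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (fmin - fhat) * cdf + s * pdf

def parallel_infill(predict, update, X, f, nd, bounds, rng, ncand=2000):
    """Pick nd promising designs using only the cheap metamodel.
    Each EI maximizer is appended with its PREDICTED value and the
    model is temporarily updated, so the next EI maximization moves to
    another local maximum.  The true function is evaluated on the
    returned designs concurrently, outside this routine."""
    Xw, fw = list(X), list(f)           # temporary working sample
    picks = []
    lo, hi = np.asarray(bounds, float).T
    for _ in range(nd):
        cands = lo + (hi - lo) * rng.random((ncand, len(lo)))
        ei = [expected_improvement(*predict(c), min(fw)) for c in cands]
        x_star = cands[int(np.argmax(ei))]
        fhat_star, _ = predict(x_star)
        Xw.append(x_star)
        fw.append(fhat_star)            # predicted value only, remark (2)
        predict = update(Xw, fw)        # temporarily update metamodel
        picks.append(x_star)
    return picks
```

Because only predicted values enter the temporary sample, no simulator call is made inside the loop, matching remark (2) below.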
Table 2. Oil and water properties.

Property                           Oil             Water
Density (lbm/ft3)                  46.244          62.238
Compressibility (1/psia)           1.3687 × 10⁻⁵   3.04 × 10⁻⁶
Formation volume factor (RB/STB)   1.50            1.04
Viscosity (cp)                     1.04            0.31

Table 3. Permeability related polymer constants.

Permeability (mD)   AdMAX (lb/bbl)   IPV    RRF
10.0                0.30             0.05   1.20
1000.0              0.20             0      1.20
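The example description notes that rock/polymer parameters for intermediate permeabilities may be obtained by linear interpolation of Table 3. A minimal sketch of that lookup, interpolating directly in permeability (the paper does not state the interpolation variable, so this is an assumption):

```python
import numpy as np

# Table 3 data: permeability (mD) vs. AdMAX (lb/bbl), IPV, RRF
PERM  = np.array([10.0, 1000.0])
ADMAX = np.array([0.30, 0.20])
IPV   = np.array([0.05, 0.0])
RRF   = np.array([1.20, 1.20])

def polymer_constants(k_md):
    """Linearly interpolate the rock/polymer constants of Table 3 for a
    rock permeability k_md (mD); outside the tabulated range, np.interp
    clamps to the nearest tabulated entry."""
    return (float(np.interp(k_md, PERM, ADMAX)),
            float(np.interp(k_md, PERM, IPV)),
            float(np.interp(k_md, PERM, RRF)))
```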
(2) Observe that the true function is never invoked to compute functional values at new sample points. The cheap metamodel is always used instead.
(3) As the Kriging predictor is used to compute f̂(x*i), there is no need to recompute the maximum likelihood parameters for metamodel updating.
(4) In Fig. 5 the maximizer of the expected improvement function, evaluated using the Kriging predictor, is added to the temporary working sample and the metamodel is updated. Note that the added point lies on the continuous line representing the approximating metamodel. Note also that the resulting function filters out the added solution, so that the global optimizer may find the next local maximum.
(5) The correlation matrix R becomes ill-conditioned when samples get clustered around a given design. This happens because rows and columns of R become almost identical (Sasena, 2002). Therefore we use the following safeguard when adding a point to the sample: if the minimum distance of the new point to the existing sample is less than tol, then perturb the new point in a random direction so that its distance from the nearest point found varies randomly from one to ten times tol. The value used for tol is:

tol = 10³ √εM   (20)

where: εM = machine epsilon.
(6) In step [5] of the EGO algorithm we add the nd additional design points to the sample, where the true functions are concurrently evaluated to update the metamodel for the next iteration.

5. Example application

Consider the field shown in Fig. 6, containing four injector and nine producer wells, with property data based on IMEX template MXSPR005 (CMG, 2007) and geometry similar to Zerpa et al. (2005),
Fig. 11. Relative improvement and mass of injected polymer as functions of oil price.
modeled with 1083 cells. Polymer is injected at a constant concentration of 0.7 lb/STB, while the maximum water injection rate is 10,000 STB/day at a maximum BHP of 9000 psi during the whole simulation period of ten years. Producers operate at a maximum fluids rate of 2500 STB/day and a minimum BHP of 1500 psi.

In all studied cases the reservoir is initially under-saturated (pressure above the bubble point), with an initial oil saturation So = 0.8 (Sw = 0.2). The adopted capillary curve and relative permeabilities are shown in Fig. 7.

Tables 1 and 2 detail the rock and fluid parameters adopted in the simulations. The adsorption of polymer by the rock as a function of polymer concentration in water is shown in Fig. 8 as a non-linear adsorption isotherm for a rock with a permeability of 10 mD. For different rock permeabilities, linear interpolation of the rock/polymer parameters from Table 3 may be used.

We initially consider three cases: base, non optimal and optimal. In the base case only water is injected. In the non optimal case, polymer is injected for the first three years at each injector.

The optimization problem formulated as described above has eight variables. The RI function is computed using op = US$70/STB, wic = US$0.29/STB and pc = US$15.15/STB. The starting time and slug duration for the non optimal case and for the obtained solution are shown in Fig. 9. The distribution of injected polymer in the reservoir at different times is shown in Fig. 10 for the optimal solution. The different injection starting times for the injectors and the fast spreading of polymer in the reservoir can be recognized. Seven additional samples were used in the proposed parallel infill sampling criterion. The algorithm converged in 21 iterations, requiring 248 simulation runs, with an optimal RI value of US$115.2 × 10⁶, corresponding to an increase of 2.76 MMSTB in oil production and a decrease in water production of 2.95 MMSTB, relative to the non optimal case. Using the original EGO algorithm, without additional samples, the number of iterations increases by a factor of 2.33, doubling the required clock time to obtain the solution. The non optimal case results in RI = US$26.3 × 10⁶, which is 23% of the optimal value.

In order to assess the sensitivity of the optimal polymer injection solution to oil price, a parametric study for nine oil prices from op = US$20/STB to US$100/STB was conducted. The variations of relative improvement and total mass of injected polymer with oil price are shown in Fig. 11. The sharp increase of both values with increasing oil price is readily appreciated. Some insight can be gained from Fig. 12, which shows the variations of oil and water cumulative productions as functions of oil price. It is clear that the polymer injection method becomes increasingly feasible with higher oil prices. This is corroborated by the increase in size of the polymer slugs with oil price, as depicted in Fig. 13 for the cases of op = US$40/STB, US$70/STB and US$100/STB. It is also worth noting that the optimal injection strategies start out injecting water only, followed by polymer injection after the second year of exploitation. Also, polymer injection concentrates in the first years of the concession period rather than at the end. When the objective function RI(x) of Eq. (3) is exchanged for NPVRI(x) of Eq. (4), with a mean annual rate d = 9.3%, the optimum results change slightly, with an average reduction of 0.3% in the final cumulative oil production and 14.7% in the total mass of injected polymer.

6. Conclusions

In reservoir engineering problems, each functional evaluation requires a complete model simulation, which is computationally expensive. Some of these problems are also multimodal with several
Fig. 12. Oil and water cumulative productions as functions of oil price.
Fig. 13. Initial injection time and slug duration for different oil prices.
local minima. A common approach to tackle these problems is to construct cheap global approximation models of the responses, often called metamodels or surrogates.

In this study a two-stage approach is employed based on the efficient global optimization algorithm, EGO, due to Jones. First an initial sample of designs is obtained using a design of experiments technique. Parallel simulation runs for the initial sample are used to construct a Kriging metamodel. In the second stage the metamodel is used to guide the search for promising designs, which are added to the sample in order to update the model until a suitable termination criterion is fulfilled. The selection of designs that are adaptively added to the sample should balance the need for improving the value of the objective function with that of improving the quality of the prediction, so that one does not get trapped in a local minimum. In the EGO algorithm this balance is achieved through the use of the expected improvement merit function. The original EGO algorithm is modified to exploit parallelism by selecting some local maxima of the merit function to enter the sample at each iteration. The modified algorithm is applied successfully to a polymer injection optimization problem. The advantages of adopting the proposed parallel infill sampling criterion are noticeable: the total number of iterations is decreased; often the additional samples turned out to be the best design, thereby improving the quality of the surrogate; and the quality of the final solution is also improved. Since the evaluation of the additional sampling points is done concurrently, the total CPU time to solve the problem is significantly decreased.

As demonstrated by the simple case study analyzed herein, scheduling of polymer injection is in general non trivial and the application of optimization techniques is advisable. It was observed that the optimal solution started out injecting water in the first two years, after which polymer injection began. For this example it can be concluded that polymer injection should start in the first years of the concession period rather than at the end. It was also observed that the optimal solution is sensitive to oil price, with the method becoming increasingly feasible with higher oil prices.

Nomenclature

Acronyms
BHP bottom hole pressure
BLUP best linear unbiased predictor
DACE Design and Analysis of Computer Experiments
DOE design of experiments
EGO efficient global optimization
EOR enhanced oil recovery
ISC infill sampling criterion
IPV inaccessible pore volume constant
PRESS prediction residual error sum of squares
RMSE root mean squared error
RRF residual resistance factor

Symbols
AdMAX maximum adsorptive capacity of the rock
CWI cumulative water injection
COP cumulative oil production
Corr correlation term
cp concession period
CPI cumulative polymer injection
d(x(i), x(j)) normalized distance
EI(x) expected improvement function
f true function
f̂ approximate function
f true functions at samples (vector)
I(x) improvement function
lf likelihood function
m total number of samples in a DOE scheme
niw number of injector wells
n total number of design variables
nd number of promising designs for the parallel ISC scheme
np number of computer processors
Nj regressors
NPVRI alternative objective function
OF objective function
op oil price
pc polymer cost
R correlation matrix
RI relative improvement of injection schedule (objective function)
So initial oil saturation
S² mean squared error of the predictor
TOL, tol prescribed tolerances
wic water injection cost
x design variables

Greek symbols
βj, θj Kriging unknown parameters
ε error in the Kriging model
εM machine epsilon
φ standard normal density function
ϕ reservoir rock porosity
ϕ̄ polymer affected porosity
Φ standard normal cumulative distribution function
µ mean of the stochastic process
σ standard deviation

Acknowledgments

The authors acknowledge the financial support for this research given by CNPq (National Research Council, Brazil) and PETROBRAS.

References

Afonso, S.M.B., Patera, T., 2003. Structural optimization in the framework of the reduced basis method. Proceedings of CILAMCE 2003, XXIV Iberian Latin American Congress on Computational Methods, Ouro Preto, Brazil.
Bichon, B.J., Eldred, M.S., Swiler, L.P., Mahedavan, S., McFarland, J.M., 2007. Multimodal reliability assessment for complex engineering applications using efficient global optimization. Proceedings of the 48th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Honolulu.
CMG, 2007. IMEX User's Guide. Computer Modeling Group.
Eldred, M.S., Dunlavy, D.M., 2006. Formulations for surrogate-based optimization with data fit, multifidelity, and reduced order methods. Proceedings of the 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Portsmouth.
Finkel, D.E., 2003. DIRECT optimization algorithm users guide. Center for Research and Scientific Computation, CRSC-TR03-11. North Carolina State University, Raleigh.
Forester, A.I.J., Keane, A.J., Bresloff, N.W., 2006. Design and analysis of noisy computer experiments. AIAA J. 44 (10), 2331–2339.
Gano, S.E., Renaud, J.E., 2004. Variable fidelity optimization using a kriging based scaling function. Proceedings of the 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Albany.
Giunta, A.A., Eldred, M.S., 2000. Implementation of a trust region model management strategy in the DAKOTA optimization toolkit. Proceedings of the 8th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Long Beach.
Giunta, A.A., Watson, L.T., 1998. A comparison of approximation modeling techniques: polynomial versus interpolating models. Proceedings of the 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Design, St. Louis.
Giunta, A.A., Wojtkiewicz, S.F., Eldred, M.S., 2003. Overview of modern design of experiments methods for computational simulations. Proceedings of the 41st Aerospace Sciences Meeting, Reno.
Jones, D.R., 2001. A taxonomy of global optimization methods based on response surfaces. J. Glob. Optim. 21, 345–383.
Jones, D.R., Schonlau, M., Welch, W.J., 1998. Efficient global optimization of expensive black-box functions. J. Glob. Optim. 13 (4), 455–492.
Kaminsky, R.D., Wattenbarger, R.C., Szfranski, R.C., Coutee, A.S., 2007. Guidelines for polymer flooding evaluation and development. Paper IPTC 11200, International Petroleum Technology Conference, Dubai.
Keane, A.J., Nair, P.B., 2005. Computational Approaches for Aerospace Design: The Pursuit of Excellence. Wiley.
Queipo, N.V., Pintos, S., Contreras, N., Rincon, N., Colmenares, J., 2000. Surrogate modeling-based optimization for the integration of static and dynamic data into a reservoir description. SPE 63065, SPE Annual Technical Conference, Dallas.
Queipo, N.V., Goicochea, J.V., Pintos, S., 2002. Surrogate modeling-based optimization of SAGD processes. J. Pet. Sci. Eng. 35, 83–93.
Sacks, J., Welch, W.J., Mitchell, W.J., Wynn, H.P., 1989. Design and analysis of computer experiments. Stat. Sci. 4 (4), 409–435.
Sasena, M.J., 2002. Flexibility and efficiency enhancements for constrained global design optimization with kriging approximations. Ph.D. Dissertation, Mechanical Engineering Dept., University of Michigan, Ann Arbor.
Schonlau, M., 1997. Computer experiments and global optimization. Ph.D. Dissertation, Dept. of Statistics and Actuarial Science, University of Waterloo, Ontario, Canada.
Simpson, T.W., Peplinski, J.D., Koch, P.N., Allen, J.K., 2001. Metamodels for computer based engineering design: survey and recommendations. Eng. Comput. 17, 129–150.
Sobester, A., Leary, S.J., Keane, A.J., 2005. On the design of optimization strategies based on global response surface approximation models. J. Glob. Optim. 33 (1), 31–59.
Zerpa, L.E., Queipo, N.V., Pintos, S., Salager, J.L., 2005. An optimization methodology of alkaline-surfactant-polymer flooding processes using field scale numerical simulation and multiple surrogates. J. Pet. Sci. Eng. 47, 197–208.