Stochastic Simplex Approximation Gradient For Reservoir Production Optimization Algorithm Testing and Parameter Analysis
ARTICLE INFO

Keywords:
Stochastic simplex approximation gradient
Production optimization
Algorithm testing
Computational cost
Reservoir numerical simulation

ABSTRACT

Production optimization is an effective technique to maximize the oil recovery or the net present value in reservoir development. Recently, the stochastic simplex approximation gradient (StoSAG) optimization algorithm has drawn significant attention in the optimization algorithm family. It shows high searching quality in large-scale engineering problems. However, its optimization performance and features are not fully understood. This study evaluated and analyzed the influence of key parameters related to the optimization process of StoSAG, including the ensemble size used to estimate the approximation gradient, the step size, the cut number, the perturbation size, and the initial position, using 47 mathematical benchmark functions. Statistical analysis was employed to diminish the randomness of the algorithm. The quality of the optimization results, the convergence, and the computational time consumption were analyzed and compared, and a parameter selection strategy was presented. The results showed that a larger ensemble size was not always favorable for obtaining better optimization results. An increase of the search step size was favorable for escaping from local optima, but a large step size needed to be matched with a large cut number. An increase of the cut number was beneficial to the local search ability, but also made the algorithm fall into local optima more easily. A random initial position was beneficial for finding the global optimal point. Moreover, the effectiveness of the parameter selection strategy was tested on a classical reservoir production optimization example. The final net present value (NPV) for waterflooding reservoir production optimization substantially increased, which indicated the excellent performance of StoSAG with adjusted key parameters.
1. Introduction

It is a broad consensus that the chance to discover new oilfields remarkably decreases and further development of mature fields is becoming increasingly attractive. Mature fields are defined as the oil fields which reach the production peak or are in their declining mode after a certain production period (Babadagli, 2007). In order to maximize the economic benefit or reservoir recovery over the production life of mature fields, reservoir engineers must plan optimally for production parameters such as well location, well type, flowing bottom hole pressure (BHP) and interval control valves (ICV) (Yang et al., 2017). Reservoir production optimization is influenced by multiple factors such as geology, engineering and economics. It involves multiple decision variables and constraints. Therefore, it is a complex and challenging problem (Nasrabadi et al., 2012; Al Dossary and Nasrabadi, 2016; Wang et al., 2016; Lerlertpakdee et al., 2014; Van Essen et al., 2011; Liu and Forouzanfar, 2018; Liu et al., 2018; Liu and Reynolds, 2020; Liu and Reynolds, 2021).

The optimization process, with uncertainty and unconstrained decision variables, can lead to a very complex and large solution space, resulting in many possible solutions. Due to the large number of possible combinations of variables, it is not sufficient to determine the optimal set of variables based on intuitive engineering judgement alone (Al Dossary and Nasrabadi, 2016). Optimization algorithms are usually combined with reservoir simulators to solve these optimization problems. The optimization target is usually to obtain the maximum expected net present value (NPV) or maximum recovery ratio (Liu and Reynolds, 2016). Numerical reservoir simulation is used to calculate objective functions such as cumulative oil production and recovery ratio (Chen et al., 2017b). Optimization algorithms are used to implement the
* Corresponding author. Key Laboratory of Unconventional Oil & Gas Development (China University of Petroleum (East China)), Ministry of Education, Qingdao
266580, PR China.
E-mail addresses: [email protected], [email protected] (J. Xu).
https://doi.org/10.1016/j.petrol.2021.109755
Received 11 August 2021; Received in revised form 7 October 2021; Accepted 29 October 2021
Available online 3 November 2021
0920-4105/© 2021 Elsevier B.V. All rights reserved.
J. Xu et al. Journal of Petroleum Science and Engineering 209 (2022) 109755
optimization process. In order to achieve efficient optimization, a great deal of work has been carried out by previous researchers, and many optimization algorithms have been proposed. These algorithms fall into two broad categories: derivative-free optimization algorithms and gradient-based optimization algorithms. The derivative-free optimization algorithms used in the field of reservoir development optimization include: genetic algorithms (GA) (Tabatabaei Nejad et al., 2007), particle swarm algorithms (PSO) (Lee and Stephen, 2019), simulated annealing algorithms (Tukur et al., 2019), neural network algorithms (Ali et al., 2015), covariance matrix adaptive-evolutionary strategy algorithms (Forouzanfar et al., 2016), imperialist competition algorithms (Hosseini-Moghari et al., 2015), etc. These methods have better global searching ability. However, they require extensive reservoir simulation runs and are not suitable for optimization of large reservoir models. The gradient-based optimization algorithms include: the adjoint gradient algorithm (Zhang et al., 2010), ensemble optimization (Leeuwenburgh et al., 2010), and the steepest descent method (Liu and Reynolds, 2021), etc. Gradient-based methods converge faster, but have poor global searching ability and often fall into local optima.

The adjoint method, which belongs to the gradient-based family, is widely used for model-based life-cycle optimization problems in which the gradients are obtained through adjoint techniques. The algorithm is highly accurate and computationally efficient, but it is difficult to apply in practice because it is an invasive method that requires access to the reservoir simulator source code and also requires running a large number of simulations (Fonseca et al., 2015a). To address this problem, researchers have worked to find alternative algorithms, one of which is a non-invasive approximate gradient method known as ensemble-based optimization (EnOpt). It was first proposed by Lorentzen et al. (2006) and Nwaozo (2006). Then, Chen et al. (2009) used EnOpt for robust optimization. This approach treats the reservoir numerical simulator as a black box and calculates the approximate gradient by simulating the response of a single (deterministic) reservoir model to a randomly generated well control vector, where each control vector contains the settings of all wells in all time steps. As the research with the EnOpt algorithm progressed, Fonseca et al. (2017) proposed an improved EnOpt algorithm with better performance, namely the stochastic simplex approximation gradient formulation (StoSAG). Subsequently, the StoSAG algorithm was widely used in reservoir production optimization processes such as well location optimization. Hanea et al. (2017) used multiple model realizations to represent uncertainties in reservoir structure and phase distribution to account for geological uncertainty constraints, and used the StoSAG algorithm to optimize the target well and borehole trajectory. Chen et al. (2017a) developed a framework based on the lexicographic method. They used the stochastic-simplex-approximation-gradient algorithm to maximize the expected NPV and minimize the associated risk or uncertainty for robust life-cycle production optimization. The channelized reservoir model and the Brugge reservoir model indicated the effectiveness of risk measure selection. Lu et al. (2017a) proposed an efficient robust optimization algorithm using the steepest ascent method with StoSAG, where a large number of representative realizations were considered. The results showed that the algorithm not only improved the speed of convergence, but also achieved a higher optimal NPV. Lu et al. (2017b) used a slightly modified version of StoSAG for the bi-objective optimization of optimal well trajectories and optimal control settings for injection and production wells, to maximize the expected value of life-cycle NPV and minimize risk. They also presented the first graphical solution for the joint well location and well control optimization of the StoSAG method considering the minimum well spacing constraint, and discussed the implementation of the computationally efficient decoupled StoSAG
Table 1
Many Local Minima functions (No.; Function; Dim; Range; fmin).

1. f(x) = −a exp(−b √((1/d) Σ_{i=1}^{d} x_i²)) − exp((1/d) Σ_{i=1}^{d} cos(c x_i)) + a + exp(1); Dim 2; x_i ∈ [−32.768, 32.768]; fmin 0
d = 2, m = 5, c = (1, 2, 5, 2, 3)^T and A = [3 5 2 1 7; 5 2 1 4 9]
10. f(x) = sin²(πw_1) + Σ_{i=1}^{d−1} (w_i − 1)²[1 + 10 sin²(πw_i + 1)] + (w_d − 1)²[1 + sin²(2πw_d)], w_i = 1 + (x_i − 1)/4 for i = 1, …, d; Dim 2; x_i ∈ [−10, 10]; fmin 0
11. f(x) = sin²(3πx_1) + (x_1 − 1)²[1 + sin²(3πx_2)] + (x_2 − 1)²[1 + sin²(2πx_2)]; Dim 2; x_i ∈ [−10, 10]; fmin 0
12. f(x) = 10d + Σ_{i=1}^{d} [x_i² − 10 cos(2πx_i)], d = 2; Dim 2; x_i ∈ [−5.12, 5.12]; fmin 0
13. f(x) = 0.5 + (sin²(x_1² − x_2²) − 0.5)/[1 + 0.001(x_1² + x_2²)]²; Dim 2; x_i ∈ [−100, 100]; fmin 0
14. f(x) = 0.5 + (cos²(sin(|x_1² − x_2²|)) − 0.5)/[1 + 0.001(x_1² + x_2²)]²; Dim 2; x_i ∈ [−100, 100]; fmin 0.29258
15. f(x) = 418.9829d − Σ_{i=1}^{d} x_i sin(√|x_i|), d = 2; Dim 2; x_i ∈ [−500, 500]; fmin 0
16. f(x) = (Σ_{i=1}^{5} i cos((i + 1)x_1 + i))(Σ_{i=1}^{5} i cos((i + 1)x_2 + i)); Dim 2; x_i ∈ [−5.12, 5.12]; fmin −186.7309
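For reference, two of the rows above (Nos. 1 and 12) can be written directly as code. This is an illustrative Python transcription of the standard formulas, not part of the authors' original Matlab test suite:

```python
import math

def ackley(x, a=20.0, b=0.2, c=2.0 * math.pi):
    """Function No. 1: many shallow local minima, global minimum 0 at the origin."""
    d = len(x)
    s1 = sum(xi * xi for xi in x) / d
    s2 = sum(math.cos(c * xi) for xi in x) / d
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.e

def rastrigin(x):
    """Function No. 12: f(x) = 10d + sum(x_i^2 - 10 cos(2 pi x_i)), minimum 0 at the origin."""
    d = len(x)
    return 10.0 * d + sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) for xi in x)

print(ackley([0.0, 0.0]))     # 0.0
print(rastrigin([0.0, 0.0]))  # 0.0
```

Both functions evaluate to their listed fmin at the origin, which makes them convenient checks for an optimizer.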
method for solving bi-objective optimization problems. Chen and Reynolds (2018) proposed an optimization framework based on the augmented Lagrangian method and the newly developed StoSAG algorithm, and applied it to the simultaneous optimization of well controls and WAG half-cycle lengths during CO2 water-alternating-gas injection for the enhanced oil recovery process. Chen and Xu (2019) provided a theoretical demonstration of the superiority of StoSAG over EnOpt, together with real-case numerical examples. The optimization results illustrated the advantages of the StoSAG algorithm over the EnOpt algorithm.

Based on our previous research (Chen and Xu, 2019), the main work of this study was to test the influence of key parameters on the optimization performance of the StoSAG algorithm. Based on the analysis obtained from the work above, suitable parameters were selected and applied to the optimization problem of a real reservoir case, and the optimization results were discussed. In this paper, the basic StoSAG algorithm is shown in Section 2, the algorithm testing results are presented in Section 3, and a reservoir production optimization example is shown in Section 4 to indicate how to choose reasonable parameters to find a better global solution.

Table 2
Bowl-Shaped functions (No.; Function; Dim; Range; fmin).

17. f(x) = x_1² + 2x_2² − 0.3 cos(3πx_1) − 0.4 cos(4πx_2) + 0.7; Dim 2; x_i ∈ [−100, 100]; fmin 0
18. f(x) = Σ_{i=1}^{d} (Σ_{j=1}^{d} (j + β)(x_j^i − 1/j^i))², d = 2, β = 10; Dim 2; x_i ∈ [−2, 2]; fmin 0
19. f(x) = Σ_{i=1}^{d} Σ_{j=1}^{i} x_j², d = 2; Dim 2; x_i ∈ [−65.536, 65.536]; fmin 0
20. f(x) = Σ_{i=1}^{d} x_i², d = 2; Dim 2; x_i ∈ [−5.12, 5.12]; fmin 0
21. f(x) = Σ_{i=1}^{d} |x_i|^{i+1}, d = 2; Dim 2; x_i ∈ [−1, 1]; fmin 0
22. f(x) = Σ_{i=1}^{d} i x_i², d = 2; Dim 2; x_i ∈ [−10, 10]; fmin 0
23. f(x) = Σ_{i=1}^{d} (x_i − 1)² − Σ_{i=2}^{d} x_i x_{i−1}, d = 2; Dim 2; x_i ∈ [−4, 4]; fmin −2
covariance matrix which is defined as:

C_U = diag(C_U^1, C_U^2, …, C_U^n)   (2)

where C_U^w, w = 1, 2, …, n, is the covariance matrix which is used to

No.; Function; Dim; Range; fmin

29. f(x) = 2x_1² − 1.05x_1⁴ + x_1⁶/6 + x_1 x_2 + x_2²; Dim 2; x_i ∈ [−5, 5]; fmin 0
30. f(x) = (4 − 2.1x_1² + x_1⁴/3)x_1² + x_1 x_2 + (−4 + 4x_2²)x_2²; Dim 2; x_1 ∈ [−3, 3], x_2 ∈ [−2, 2]; fmin −1.0316
31. f(x) = (x_1 − 1)² + Σ_{i=2}^{d} i(2x_i² − x_{i−1})², d = 2; Dim 2; x_i ∈ [−10, 10]; fmin 0
Table 5
Steep Ridges/Drops functions (No.; Function; Dim; Range; fmin).

33. f(x) = (0.002 + Σ_{i=1}^{25} 1/(i + (x_1 − a_{1i})⁶ + (x_2 − a_{2i})⁶))^{−1}, a = [−32 −16 0 16 32 −32 … 0 16 32; −32 −32 −32 −32 −32 −16 … 32 32 32]; Dim 2; x_i ∈ [−65.536, 65.536]; fmin 0.9980
34. f(x) = −cos(x_1)cos(x_2)exp(−(x_1 − π)² − (x_2 − π)²); Dim 2; x_i ∈ [−100, 100]; fmin −1
35. f(x) = −Σ_{i=1}^{d} sin(x_i) sin^{2m}(i x_i²/π), m = 10, d = 2; Dim 2; x_i ∈ [0, π]; fmin −1.8013

Table 6
Other functions (No.; Function; Dim; Range; fmin).
For the sensitivity analysis of each parameter, statistical analysis was employed. We held the remaining parameters constant and performed 50 repetitions of the optimization. The optimal value, average value, and computational time are recorded. All the algorithms are programmed in Matlab (2012a). Simulations are performed on a Core i5 PC with a 3 GHz CPU and 16 GB of RAM. We set Ne = 10, cut number = 5, step size = 1, Cx = 0.1 as the basic values. The following principle for the selection of the initial search position is adopted: if the global optimum is not located at the midpoint of the function search domain, we fix the initial search position at the midpoint; otherwise, the initial search position is set at the midpoint of the first quadrant of the coordinate system in the function search domain. In addition, a logarithm transformation is applied to each variable (Chen and Reynolds, 2016). The variables will be truncated to the interval [−7, 7] in the optimization process.
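The testing loop described above can be reproduced with a minimal StoSAG-style optimizer. The following is a sketch under our reading of the algorithm, not the authors' Matlab code: a single deterministic objective, an isotropic Cx²·I in place of the covariance matrix C_U, and a backtracking line search in which the step is "cut" (halved) when a trial point does not improve the objective:

```python
import numpy as np

def stosag_gradient(J, u, ne=10, cx=0.1, rng=None):
    """Stochastic simplex gradient estimate at u (a sketch).

    Each of the ne perturbed controls u_j = u + cx * z_j, z_j ~ N(0, I),
    is evaluated once; the simplex differences (J(u_j) - J(u)) * (u_j - u)
    are averaged and rescaled by cx^2.
    """
    rng = np.random.default_rng() if rng is None else rng
    base = J(u)
    g = np.zeros_like(u)
    for _ in range(ne):
        du = cx * rng.standard_normal(u.shape)
        g += (J(u + du) - base) * du
    return g / (ne * cx ** 2)

def stosag_minimize(J, u0, ne=10, step=1.0, cut=5, iters=50, cx=0.1, seed=0):
    """Steepest descent with backtracking: the trial step is halved up to
    `cut` times whenever it does not improve the objective."""
    rng = np.random.default_rng(seed)
    u = np.asarray(u0, dtype=float)
    best = J(u)
    for _ in range(iters):
        g = stosag_gradient(J, u, ne, cx, rng)
        norm = np.linalg.norm(g)
        if norm == 0.0:
            break
        d = g / norm                 # normalized search direction
        s = step
        for _ in range(cut + 1):     # the "cuts" of the step size
            trial = u - s * d
            f = J(trial)
            if f < best:
                u, best = trial, f
                break
            s *= 0.5
    return u, best

# Sphere function (test function No. 20): global minimum 0 at the origin.
u_opt, f_opt = stosag_minimize(lambda x: float(np.sum(x ** 2)), [3.0, -2.0])
print(f_opt)  # a small value near the global minimum of 0
```

With this skeleton, the parameters studied below (Ne, step size, cut number, Cx, initial position u0) map directly onto the keyword arguments.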
Fig. 1. Optimization curves of functions falling into different local optimums in algorithm tests.
Fig. 2. Optimization curves for cases that converge to the global optimum by different paths in algorithm tests.
Table 7
Ne sensitivity-partial test data I (best, ave and time(s) at each Ne, for F3, F4, F23, F24, F30, F33, F39 and F40).
3.1. Sensitivity analysis of the ensemble size (Ne)

A previous study addressed the effect of the ensemble size (number of perturbations) on optimization results through a reservoir production optimization problem. Fonseca et al. (2015b) used the principle of statistical hypothesis testing to quantify the ensemble size required for an overall gradient of comparable quality to the exact gradient. They showed that a larger ensemble size could improve the quality of the approximate gradient. The effect of the ensemble size on the optimization results was also discussed by Chen and Reynolds (2016). The results showed that a large ensemble size was not favorable for obtaining better optimization results (a higher NPV at the final time). On the basis of these investigations, this section focuses on the verification of the findings above with test function examples. We evaluate each of the 47 test functions and list the test results in Tables 7–8. Note that the time given in the tables is the total time required to perform 50 trials in the optimization process.

Table 7 gives partial data for the first case of the test results. The complete data are detailed in Table B1 in Appendix B. These functions present similar characteristics at different ensemble sizes (Ne). It could be observed that Ne had no significant effect on the optimization results. The optimal and average values in the tests fluctuated as Ne increased. From Table 7 and Table B1, we can see that except for F39, which fluctuated significantly with the change of Ne, the results of all other test functions fluctuate slightly around the optimization value obtained at Ne = 5. We take function F39 as an example for analysis. The function graph and test data curve of F39 are shown in Fig. 3. When Ne = 5 or Ne = 15, the algorithm partially converges to the global optimum. When Ne = 10 or Ne = 20, the algorithm always converges to the same local optimum. The fluctuation happens due to the nature of the test function, which will be specifically analyzed in Section 3.3.

Table 8
Ne sensitivity-partial test data II.

Ne   F6 best    F6 ave     time(s)    Ne   F22 best   F22 ave    time(s)
5    −0.52336   −0.01885   8.855      5    3.27E-03   2.09E-02   12.743
10   −0.52336   0.014048   14.522     10   7.11E-04   2.21E-02   20.879
15   0.025015   0.025015   20.573     15   1.63E-04   2.23E-02   29.099
20   0.025015   0.025015   26.786     20   1.98E-03   2.26E-02   37.19

Ne   F25 best   F25 ave    time(s)    Ne   F29 best   F29 ave    time(s)
5    1.07E-04   9.28E-04   10.662     5    1.37E-03   1.07E-01   11.188
10   1.74E-05   8.53E-04   17.901     10   4.42E-04   1.07E-01   18.444
15   4.10E-05   7.92E-04   25.176     15   5.52E-04   1.07E-01   25.996
20   5.80E-05   7.89E-04   32.442     20   7.35E-05   9.51E-02   33.412

Ne   F34 best   F34 ave    time(s)    Ne   F36 best   F36 ave    time(s)
5    −0.99908   −0.32614   10.674     5    7.89E-06   2.52E-04   10.831
10   −0.99949   −0.35067   18.019     10   1.60E-05   2.52E-04   18.28
15   −0.98902   −0.35469   25.314     15   3.26E-05   3.17E-04   25.767
20   −0.98748   −0.29531   32.683     20   5.30E-05   3.09E-04   33.04

Table 8 gives the data for the second case of the test results. The complete data are detailed in Table B2 in Appendix B. The optimization results of these functions show that the global optimization ability decreases as Ne increases. In addition, over 60% of the listed functions are valley functions. Here we take function F6 as an example. Fig. 4 shows the function graph and the test curve of F6. From the figure we can see that the algorithm still finds the global optimum when Ne equals 10, while the average value increases compared to the results of Ne = 5. When Ne is equal to 15 and 20, the optimal value and the average value overlap and the optimal value is much larger than the global optimum. This indicates that the results of all 50 repetitions of the optimization converge on the same local optimum. It seems that the increase of Ne in this case enhances the local optimum-seeking ability of StoSAG while weakening its global search capability. Meanwhile, the running time greatly increases with Ne.

As can be seen, the impact of ensemble size (Ne) on optimization mainly includes two aspects. First, under the same test conditions, the size of Ne has a great impact on running time: the larger the Ne, the more time the optimization consumes. Second, a large Ne does not promote the global optimization ability of the algorithm; it might enhance the local search ability and increase the possibility of falling into a local optimum.

3.2. Sensitivity analysis of the step size

In this section, the impact of the step size on the optimization performance of the StoSAG algorithm is analyzed. Similar to the sensitivity analysis of the ensemble size (Ne) in Section 3.1, we present partial data for two cases of the test results in Tables 9–10, respectively. The complete data are detailed in Tables B3–B4 in Appendix B. The functions in Table 9 and Table B3 show the first case, i.e., the optimal value fluctuates with the increase of the step size. As a result, the average value increases or fluctuates. As the step size increases, the probability of falling into a local optimum increases, and the global optimization capability cannot be guaranteed.

From the test data in Table 9 and Table B3, it could be observed that in this case the algorithm has better performance in seeking the optimum when the step size is 1. That is, when the step size is 1, the algorithm obtains the smallest optimal value, or the average value of the result is smaller when the optimal values are close. For other step sizes, the optimal value is difficult to find. The reason is that the cut number does not appropriately increase with the step size. The minimum search step length increases as the step size increases, which could lead to a significant decrease in the local search ability of the algorithm. Therefore, it is necessary to set a suitable cut number for large step sizes in practical optimization applications. Meanwhile, a benefit of increasing the step size is found in another case, shown in Table 10 and Table B4. If the step size is greater than 1, the results may
Fig. 3. The function graph of test function F39 and its test results about Ne.
Fig. 4. The function graph of test function F6 and its test results about Ne.
Table 9
Step size sensitivity-partial test data I (best, ave and time(s) at each step size, for F1 and F18).

Table 10
Step size sensitivity-partial test data II (best, ave and time(s) at each step size, for F5 and F27).
be better than those obtained when the step size is equal to 1. Test function F5 is the most obvious: all four tests with step size larger than 1 have significantly smaller optimal values than those with step size equal to 1. The mean values for other step sizes are significantly lower compared to the test results for step size 1. This indicates that the increase of step size is beneficial for escaping from the local optimum near the current search region.

To verify the conjecture that the matching relationship between the cut number and the step size affects the optimization results, we select some functions in Table B3 for testing. The initial search position and Ne are kept constant, while the cut number is allowed to increase with the step size to ensure that the minimum search step size stays the same. Note that with this method of increasing the cut number, the cut number corresponding to step sizes 3 and 5 is a decimal. We therefore set the cut numbers for step sizes 3 and 5 to be the same as for step sizes 2 and 4, respectively. Table 11 gives the cut number corresponding to each step size. Table 12 gives the test results before and after the improvement. The symbol * indicates the optimized result after improvement.

Table 11
Step length and cut number.

Step size     1   2   3   4   5
Cut number    5   6   6   7   7

Comparing the optimization results before and after the algorithm improvement in Table 12, it is easy to see that the optimization results of most functions become better. Moreover, in most cases with the improved strategy, the mean value for step sizes 3 and 5 increases compared to step sizes 2 and 4, respectively; the relatively large step size implies a large minimum search step size, which weakens the local search ability. In addition, for F17* and F24*, the results in Table 12 are better for step sizes greater than 1 than when the step size is equal to 1.

3.3. Sensitivity analysis of cut number
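The matching rule implied by Table 11 keeps the minimum search step length, step size/2^cut, constant across step sizes. A small sketch of that rule follows; this is our interpretation of the procedure, taking the paper's baseline of step size 1 with cut number 5:

```python
import math

def matched_cut(step_size, ref_step=1.0, ref_cut=5):
    """Cut number that keeps the minimum search step length
    step_size / 2**cut equal to ref_step / 2**ref_cut.
    For step sizes 3 and 5 the result is a decimal, which the paper
    rounds down to match step sizes 2 and 4, respectively."""
    return ref_cut + math.log2(step_size / ref_step)

for s in (1, 2, 3, 4, 5):
    print(s, matched_cut(s))  # 3 and 5 give non-integer cut numbers
```

For step sizes 1, 2 and 4 this reproduces the cut numbers 5, 6 and 7 of Table 11 exactly.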
When testing the cut number, we continue to organize the results as in Section 3.1. The partial test results for the first case are shown in Table 13, and the complete test data are detailed in Table B5 in Appendix B. In this case, the optimal and average values decrease further as the cut number increases.

Table 12
Data of function test before and after algorithm improvement (best, ave and time(s) at each step size; * marks the improved strategy; shown for F1 and F1*).

Table 13
Cut number sensitivity analysis-partial test data I (best, ave and time(s) at each cut number, for F2 and F17).

Table 14
Cut number sensitivity analysis-partial test data II (best, ave and time(s) at each cut number, for F1 and F12).

As the cut number increased, the optimal and average values of most test functions decreased. However, the results of test function F39 show a significant decrease in the optimal value when the cut number increased from 15 to 20. From the average values corresponding to the different cut numbers, we could see that all the optimization processes fell into the local optimum except for the cut number equal to 20. Combined with the function images shown in Fig. 3, we believe that this is mainly due to the enhanced local search capability of the algorithm as the cut number increased. The global optimum of function F39 is very close to the search starting point x = 0.5. The local optimum is also close to the starting point, which easily induces the algorithm to fall into the local optimum. When the cut number is small, it is easy to skip the region close to the global optimal point in the searching process, which easily leads the algorithm to fall into the local optimum. Therefore, we can see that the increase of cut number will improve the local search performance of the algorithm. It is beneficial for finding the global optimum of some functions with small peak ranges but high peaks, although it takes more time.

The partial test results for the second case are given in Table 14, and the complete test data are detailed in Table B6 in Appendix B. In this case, the algorithm may find the global optimum, but the possibility of falling into a local optimum increases.

The optimal values for most test cases listed in Table 14 and Table B6 are obtained before the cut number equals 20. Some functions decrease continuously as the cut number increases, while the average value fluctuates in the process. Take function F6 as an example. From the function image of F6 shown in Fig. 4, we can see that the function is a multi-peaked function with a dense peak distribution. During the testing, the optimal value decreases when the cut number increases from 5 to 10. The mean value of the results shows a fluctuating trend. The algorithm does not find the global optimum during the test on the cut number, which may be caused by the effect of the step size, as described in Section 3.2. The increase in the cut number increases the chance of falling into a local optimum due to the dense distribution of peaks. As each local optimum is different, the mean value fluctuates across the iterations. This implies that increasing the cut number indiscriminately is not an appropriate strategy to improve the optimization performance of the algorithm.

3.4. Sensitivity analysis of the perturbation size

Because the StoSAG algorithm calculates the approximate gradient based on randomly generated perturbations around the current search point, the effect of the perturbation size on the accuracy of the random approximate gradient is non-negligible. In this section, we analyze the effect of the perturbation size on the optimization process by different
Table 15
Perturbation step sensitivity analysis-partial test data (best, ave and time(s) at each Cx, for F1, F6, F28, F29, F41 and F43).
Table 16
Initial position sensitivity analysis-partial test data (best, ave and time(s) for each initial search location, for F5 and F15).
types of test functions. The partial test results are given in Table 15
below, and the complete test data is detailed in Table B7 in Appendix B.
In general, we believe that a smaller perturbation step corresponds to better gradient approximation accuracy. This is observed in the test results of some functions in Table 15, such as test functions F10, F13, etc. However, more results show that the smallest perturbation step does not necessarily correspond to the best optimization result. The effect of randomness is unavoidable. In the experiments, we conducted several tests on function F6. The results indicated that better results might be achieved with a larger Cx. Combined with the function image of F6, we can see that with a small Cx it is easy to fall into the local optimum points near the initial search location. A larger Cx makes it easier to escape from these local optimum points and find the global optimum.

From the above results, it could be stated that the smaller the Cx, the stronger the local search ability of the algorithm. Enhancing the local search performance of the algorithm by increasing the cut number requires more time, while the change of Cx has almost no particular effect on the running time.
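How Cx enters the algorithm can be sketched directly: the perturbed controls are drawn around the current search point with standard deviation Cx. Here the covariance matrix of Eq. (2) is simplified to an isotropic Cx²·I for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
u = np.zeros(2)  # current search point

# A small Cx probes only the immediate neighbourhood of u (accurate local
# gradient, but easily trapped near the start point); a large Cx probes
# farther afield and can jump past nearby local optima.
spreads = []
for cx in (0.01, 0.1, 1.0):
    du = cx * rng.standard_normal((1000, u.size))
    spreads.append(float(np.abs(du).max()))
    print(cx, spreads[-1])  # spread grows with Cx
```

This also makes the runtime observation above plausible: changing Cx only rescales the same random draws, so it costs essentially nothing, unlike increasing the cut number, which adds extra objective evaluations.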
initial position facilitates the algorithm to escape from the local optimum and enhances the global optimum-seeking ability. The mean value of random initial search positions increases compared to the mean value of fixed positions for some specific functions, which means that random initial positions also increase the chance of the algorithm falling into other local optima.

In summary, to find the optimal value of complex functions, we still recommend using random initial search positions to increase the probability of finding the global optimum. The running time of each test case slightly increases, mainly due to the process of generating the initial random positions. The increase in running time is acceptable in order to find the global optimum.

4. Reservoir production optimization example

Considering the difference between the actual reservoir production

where u is a Nu-dimensional column vector which contains all well control information; n denotes the nth time step of the reservoir simulation; Nt is the total number of time steps; the time at the end of the nth time step is denoted by t^n; Δt^n is the nth time step size; b is the annual discount rate; NP and NI denote the number of producers and injectors, respectively; r_o is the oil revenue, in $/STB; c_w and c_wi denote the disposal cost of produced water and the cost of water injection, respectively, in units of $/STB; q_{o,j} and q_{w,j}, respectively, denote the average oil production rate and the average water production rate at the jth producer for the nth time step, in units of STB/day; q_{wi,k} denotes the average water injection rate at the kth injector for the nth time step, in units of STB/day.

The objective is to obtain the maximum NPV shown in Eq. (4). The oil price is set to be $60/STB. The treatment cost of both injected and produced water is set to be $5/STB. The annual discount rate is set at
Fig. 6. Optimization process corresponding to different Ne.
Fig. 7. Optimization process corresponding to different step size.
Fig. 8. Optimization process corresponding to different cut number.
Fig. 9. Optimization process corresponding to different Cx.
0.1. The optimized parameters are the bottom hole pressures, with an upper bound of 4351.13 psi and a lower bound of 2175.57 psi for production wells, and an upper bound of 5801.51 psi and a lower bound of 4351.13 psi for injection wells. The control variables approach is used for sensitivity analysis, in which we set the parameters of the control group as follows: Ne = 20, cut number = 6, step size = 1, Cx = 0.01. The initialized BHPs for production and injection wells were 3263.35 psi and 5076.32 psi, respectively. The maximum number of iterations is 100. The optimization results are shown in Figs. 6–11.

The optimization curves for Ne are given in Fig. 6. The best optimization result is obtained when Ne = 10 under the same test conditions. The final optimization result is 262,214,500 $. This result is consistent with the results mentioned by Chen and Reynolds (2016); that is, a larger Ne does not ensure a better optimization result. Table 17 gives the running times. A large Ne needs more computation time.
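Eq. (4) itself is not reproduced above, but the nomenclature it defines corresponds to the standard discounted cash-flow form of the NPV. A sketch under that reading, with the economic parameters of this section ($60/STB oil revenue, $5/STB water handling, 10% annual discount rate):

```python
def npv(rates, dt, ro=60.0, cw=5.0, cwi=5.0, b=0.1):
    """Discounted cash flow over Nt time steps (standard reading of Eq. (4)).

    rates: list of (qo, qw, qwi) tuples: field-total average oil production,
           water production and water injection rates, in STB/day, per step.
    dt:    list of time step sizes, in days.
    """
    total, t = 0.0, 0.0
    for (qo, qw, qwi), dtn in zip(rates, dt):
        t += dtn                              # t^n: time at the end of step n
        cash = (ro * qo - cw * qw - cwi * qwi) * dtn
        total += cash / (1.0 + b) ** (t / 365.0)
    return total

# One year split into two half-year steps, constant hypothetical rates:
print(npv([(1000.0, 200.0, 1200.0), (1000.0, 200.0, 1200.0)],
          [182.5, 182.5]))
```

The rate values in the usage line are purely illustrative; in the paper each NPV evaluation comes from a reservoir simulation run under the current well controls.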
Fig. 7 shows the search curves for different step sizes. Different cut numbers were set for the different step sizes to ensure the same minimum search step size, so that the local search capability is not reduced. From Fig. 7, a large step size yields better optimization results at the beginning of the iteration. As the iteration progresses, the difference among the step sizes becomes small, and by the end of the iteration the optimization curves overlap. Thus the increase in step size does not significantly affect the final optimization result; however, a step size of 6 reaches the optimal value in the fewest iterations. The runtimes for the different step sizes are shown in Table 18; the runtime with a step size of 6 is much larger than that with a step size of 1. Alternatively, a variable-step optimization strategy can be used, i.e., a large step size in the early stage and a small step size in the later stage, to increase the optimization efficiency.

[Fig. 10. Optimization process corresponding to different initial search location.]

The optimization curves for different cut numbers are given in Fig. 8. A larger cut number obtains slightly better optimization results, but the improvement is small. Because the step size is kept constant in this test, increasing the cut number can only improve the local search capability; it does not improve the global search capability. Meanwhile, the runtimes of the groups with different cut number settings are given in Table 19, and the impact of the cut number on runtime is not obvious. Therefore, increasing the cut number alone, without changing other conditions, is a relatively suboptimal strategy for improving reservoir development optimization.
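Together, the step size and cut number behave like a backtracking scheme: a trial step is halved up to the cut number of times whenever the trial point does not improve the objective, and a decaying initial step gives the variable-step strategy. A minimal sketch of one such update for a maximization problem (illustrative, not the authors' exact implementation):

```python
# Illustrative backtracking update with a cut number: the step is halved at
# most `cut_number` times per iteration when the objective does not improve.
# Feeding in a decaying `step0` schedule yields a variable-step strategy:
# large steps early for global search, small steps late for refinement.
def line_search_step(J, x, g, step0, cut_number):
    j0 = J(x)
    step = step0
    for _ in range(cut_number + 1):
        x_new = x + step * g          # ascent direction for maximization
        if J(x_new) > j0:
            return x_new, True        # improved point accepted
        step *= 0.5                   # "cut" the step and retry
    return x, False                   # all cuts failed; keep current point
```

For example, maximizing J(x) = -x^2 from x = 1 with gradient g = -2 and step0 = 6: the trials with steps 6, 3, and 1.5 all overshoot and are cut, while the fourth trial with step 0.75 lands at x = -0.5 and is accepted. This also shows why a large step size needs a large cut number: without enough cuts, the minimum reachable step stays too coarse for local refinement.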
Table 20
The optimization time corresponding to different Cx.

Cx       0.01      0.1      0.5      1
Time (s) 11026.49  9621.92  9744.74  8979.88

Table 21
Optimization time corresponding to different initial positions.

Case     Time (s)

[Fig. 11. Optimization curves before and after parameter improvement.]

…final NPV is 2.62 × 10^8 $, while with the parameters from Chen and Reynolds (2016) the final NPV is 2.59 × 10^8 $. The running time of the case using the parameters from Chen and Reynolds (2016) is 4743.38 s, while that of the improved case is 4558.24 s. Therefore, both the optimization result and the optimization efficiency are improved.

Table 17
The optimization time corresponding to different Ne.

Ne       5        10       15       20
Time (s) 3019.52  5881.57  8100.14  9701.56

5. Conclusion
[Table B.1 (Ne sensitivity test data I), continued: best, ave, and time(s) columns for benchmark functions F3, F4, F5, F8, F9, F11, F12, F13, F16, F17, F18, F19, F20, F21, F23, F24, F27, F28, F30, F33, F35, F37, F39, F40, F41, F42, F45, F46.]
Table B.2
Ne sensitivity test data II

[Best, ave, and time(s) columns for benchmark functions F1, F2, F6, F7, F10, F14, F15, F22, F25, F26, F29, F31, F32, F34, F36, F38, F43, F44.]
Table B.3
Step size sensitivity test data I
Table B.4
Step size sensitivity test data II
Table B.5
Cut number sensitivity analysis test data I
Table B.6
Cut number sensitivity analysis test data II

[Best, ave, and time(s) columns for benchmark functions F1 and F5.]
Table B.7
Perturbation step sensitivity analysis test data

[Best, ave, and time(s) columns for benchmark functions F1, F6, F10, F13, F16, F17, F22, F25, F28, F29, F32, F34, F41, F43 at different Cx.]
Table B.8
Initial position sensitivity analysis test data
Credit roles

Jianchun Xu: Conceptualization, Methodology. Shuyang Liu: Writing - Review and Editing. Hangyu Li and Ling Fan: Supervision. Xiaopu Wang: Writing - Original Draft. Wenxin Zhou: Data Curation, Visualization, Investigation.
References

Al Dossary, M.A., Nasrabadi, H., 2016. Well placement optimization using imperialist competitive algorithm. J. Petrol. Sci. Eng. 147, 237–248.
Ali, D.H., Al-Jawad, M.S., Van Kirk, C.W., 2015. Distribution of new horizontal wells by the use of artificial neural network algorithm. In: SPE Middle East Oil & Gas Show and Conference. OnePetro.
Babadagli, T., 2007. Development of mature oil fields—a review. J. Petrol. Sci. Eng. 57 (3–4), 221–246.
Chen, B., Reynolds, A.C., 2016. Ensemble-based optimization of the water-alternating-gas-injection process. SPE J. 21 (03), 786–798.
Chen, B., Reynolds, A.C., 2018. CO2 water-alternating-gas injection for enhanced oil recovery: optimal well controls and half-cycle lengths. Comput. Chem. Eng. 113, 44–56.
Chen, B., Xu, J., 2019. Stochastic simplex approximate gradient for robust life-cycle production optimization: applied to Brugge field. J. Energy Resour. Technol. 141 (9).
Chen, B., Fonseca, R.M., Leeuwenburgh, O., et al., 2017a. Minimizing the risk in the robust life-cycle production optimization using stochastic simplex approximate gradient. J. Petrol. Sci. Eng. 153, 331–344.
Chen, H., Feng, Q., Zhang, X., et al., 2017b. Well placement optimization using an analytical formula-based objective function and cat swarm optimization algorithm. J. Petrol. Sci. Eng. 157, 1067–1083.
Chen, Y., Oliver, D.S., Zhang, D., 2009. Efficient ensemble-based closed-loop production optimization. SPE J. 14 (04), 634–645.
Forouzanfar, F., Poquioma, W.E., Reynolds, A.C., 2016. Simultaneous and sequential estimation of optimal placement and controls of wells with a covariance matrix adaptation algorithm. SPE J. 21 (02), 501–521.
Fonseca, R.M., Leeuwenburgh, O., Van den Hof, P.M.J., et al., 2015a. Improving the ensemble-optimization method through covariance-matrix adaptation. SPE J. 20 (01), 155–168.
Fonseca, R.R.M., Chen, B., Jansen, J.D., et al., 2017. A stochastic simplex approximate gradient (StoSAG) for optimization under uncertainty. Int. J. Numer. Methods Eng. 109 (13), 1756–1776.
Fonseca, R.M., Kahrobaei, S.S., Van Gastel, L.J.T., et al., 2015b. Quantification of the impact of ensemble size on the quality of an ensemble gradient using principles of hypothesis testing. In: SPE Reservoir Simulation Symposium. OnePetro.
Hosseini-Moghari, S.M., Morovati, R., Moghadas, M., et al., 2015. Optimum operation of reservoir using two evolutionary algorithms: imperialist competitive algorithm (ICA) and cuckoo optimization algorithm (COA). Water Resour. Manag. 29 (10), 3749–3769.
Hanea, R.G., Casanova, P., Wilschut, F.H., et al., 2017. Well trajectory optimization constrained to structural uncertainties. In: SPE Reservoir Simulation Conference. OnePetro.
Lerlertpakdee, P., Jafarpour, B., Gildin, E., 2014. Efficient production optimization with flow-network models. SPE J. 19 (06), 1083–1095.
Liu, X., Reynolds, A.C., 2016. A multiobjective steepest descent method with applications to optimal well control. Comput. Geosci. 20 (2), 355–374.
Liu, Z., Forouzanfar, F., 2018. Ensemble clustering for efficient robust optimization of naturally fractured reservoirs. Comput. Geosci. 22 (1), 283–296.
Liu, Z., Forouzanfar, F., Zhao, Y., 2018. Comparison of SQP and AL algorithms for deterministic constrained production optimization of hydrocarbon reservoirs. J. Petrol. Sci. Eng. 171, 542–557.
Liu, Z., Reynolds, A.C., 2020. A sequential-quadratic-programming-filter algorithm with a modified stochastic gradient for robust life-cycle optimization problems with nonlinear state constraints. SPE J. 25 (04), 1938–1963.
Liu, Z., Reynolds, A.C., 2021. Gradient-enhanced support vector regression for robust life-cycle production optimization with nonlinear-state constraints. SPE J. 26 (04), 1590–1613.
Lee, S., Stephen, K., 2019. Field application study on automatic history matching using particle swarm optimization. In: SPE Reservoir Characterisation and Simulation Conference and Exhibition. OnePetro.
Leeuwenburgh, O., Egberts, P.J., Abbink, O.A., 2010. Ensemble methods for reservoir life-cycle optimization and well placement. In: SPE/DGS Saudi Arabia Section Technical Symposium and Exhibition. OnePetro.
Lorentzen, R.J., Berg, A.M., Naevdal, G., Vefring, E.H., 2006. A new approach for dynamic optimization of waterflooding problems. In: Proceedings of the SPE Intelligent Energy Conference and Exhibition, Amsterdam, Netherlands, Apr. 11–13. SPE 99690.
Lu, R., Forouzanfar, F., Reynolds, A.C., 2017a. An efficient adaptive algorithm for robust control optimization using StoSAG. J. Petrol. Sci. Eng. 159, 314–330.
Lu, R., Forouzanfar, F., Reynolds, A.C., 2017b. Bi-objective optimization of well placement and controls using StoSAG. In: SPE Reservoir Simulation Conference. OnePetro.
Nasrabadi, H., Morales, A., Zhu, D., 2012. Well placement optimization: a survey with special focus on application for gas/gas-condensate reservoirs. J. Nat. Gas Sci. Eng. 5, 6–16.
Nwaozo, J.E., 2006. Dynamic Optimization of a Water Flood Reservoir. University of Oklahoma.
Tabatabaei Nejad, S.A., Aleagha, A.A.V., Salari, S., 2007. Estimating optimum well spacing in a Middle East onshore oil field using a genetic algorithm optimization approach. In: SPE Middle East Oil and Gas Show and Conference. OnePetro.
Tukur, A.D., Nzerem, P., Nsan, N., et al., 2019. Well placement optimization using simulated annealing and genetic algorithm. In: SPE Nigeria Annual International Conference and Exhibition. OnePetro.
Van Essen, G.M., Van den Hof, P.M.J., Jansen, J.D., 2011. Hierarchical long-term and short-term production optimization. SPE J. 16 (01), 191–199.
Wang, X., Haynes, R.D., Feng, Q., 2016. A multilevel coordinate search algorithm for well placement, control and joint optimization. Comput. Chem. Eng. 95, 75–96.
Yang, H., Kim, J., Choe, J., 2017. Field development optimization in mature oil reservoirs using a hybrid algorithm. J. Petrol. Sci. Eng. 156, 41–50.
Zhang, K., Li, G., Reynolds, A.C., et al., 2010. Optimal well placement using an adjoint gradient. J. Petrol. Sci. Eng. 73 (3–4), 220–226.