tNavigator Assisted History Matching User Guide
July 2019
19.2
Copyright Notice
Rock Flow Dynamics® (RFD), 2004–2019. All rights reserved. This document is the intellectual property of RFD. This document may not be copied, stored in an information retrieval system, distributed, translated, or retransmitted in any form or by any means, electronic or mechanical, in whole or in part, without the prior written consent of RFD.
Trade Mark
RFD, the RFD logotype, the tNavigator® product, and other words or symbols used to identify the products and services described herein are trademarks, trade names or service marks of RFD. Trademarks may not be imitated, used or copied, in whole or in part, without the prior written consent of RFD. Graphical design, icons and other elements of design may be trademarks and/or trade names of RFD and may not be used, copied or imitated, in whole or in part, without the prior written consent of RFD. Other company, product, and service names are the properties of their respective owners.
Security Notice
The software’s specifications suggested by RFD are recommendations and do not limit the
configurations that may be used to operate the software. It is recommended to operate the
software in a secure environment whether such software is operated on a single system or
across a network. The Licensee is responsible for configuring and maintaining networks and/or
system(s) in a secure manner. If you have any questions about security requirements for the
software, please contact your local RFD representative.
Disclaimer
The information contained in this document is subject to change without notice and should
not be construed as a commitment by RFD. RFD assumes no responsibility for any error that
may appear in this manual. Some states or jurisdictions do not allow disclaimer of expressed
or implied warranties in certain transactions; therefore, this statement may not apply to you.
Since the software described in this document is constantly being improved, some descriptions may be based on previous versions of the software.
Contents
1. Introduction
2. Defining Variables
2.1. Standard scenarios of variable definition in the GUI
2.1.1. Equilibrium
2.1.2. Relative Permeability (RP)
2.1.3. Multiply Permeability by Regions
2.1.4. Multiply Permeability by Layers
2.1.5. Adjust KV/KH
2.1.6. Multiply Transmissibility by Regions
2.1.7. Multiply Pore Volume by Regions
2.1.8. Modify Scale Arrays
2.1.9. Multiply Faults
2.1.10. Other
2.2. File structure of a history matching project
2.2.1. File structure of an experiment
2.2.2. Saving project modifications
2.2.3. Deleting experiment results
2.3. Defining Variables for models with Reservoir Coupling option
3. Experimental Design
3.1. Sensitivity Analysis
3.2. Custom
3.3. Grid search
3.4. Latin hypercube
3.5. Monte Carlo
3.6. Tornado
3.7. Plackett-Burman design
3.7.1. General Plackett-Burman
3.7.2. Include line with minimal values
3.7.3. Folded Plackett-Burman
3.8. Box-Behnken design
3.9. Implementation of Variable Filter
4. Objective Function
4.1. Specifying the Objective Function
4.2. History matching objective function
4.2.1. Objective function for different objects
4.2.2. Objective function formula
4.2.3. Automatic calculation of weights
4.2.4. Selecting historical points for history matching
4.2.5. Loading a pressure history into a base model
5. Optimization Algorithms
5.1. Creating New Experiment From Selected Variants
5.2. Termination criteria of algorithms
5.3. Multi-objective approach
5.3.1. Brief description of the approach
5.4. Response Surface (Proxy models)
5.5. Differential Evolution
5.5.1. Brief description of the algorithm
5.5.2. More about parameters
5.5.3. Algorithm versions
5.6. Multi-objective Differential Evolution algorithm
5.6.1. Multi-objective Differential Evolution algorithm implementation
5.6.2. MODE algorithm parameters
5.7. Simplex method
5.7.1. Definitions and brief algorithm description
5.7.2. Algorithm
5.7.3. Termination tests
5.8. Particle Swarm Optimization algorithm
5.8.1. Brief algorithm description
5.8.2. Particle Swarm Optimization algorithm in general
5.8.3. Velocity update formula
5.8.4. Influence of parameters on algorithm behavior
5.9. Multi-objective Particle Swarm Optimization algorithm
5.9.1. Multi-objective Particle Swarm Optimization algorithm implementation
5.9.2. MOPSO algorithm parameters
5.10. Ensemble approach
5.10.1. Brief algorithm description
5.10.2. Algorithm parameters
8. Workflows
8.1. Editing workflow
8.2. Creating variables
8.3. Running workflow
1. Introduction
tNavigator is a software package, offered as a single executable, which allows the user to build static and dynamic reservoir models, run dynamic simulations, perform extended uncertainty analysis, and build surface networks as parts of one integrated workflow. All parts of the workflow share a common proprietary internal data storage system, a super-scalable parallel numerical engine, data input/output mechanisms, and a graphical user interface. tNavigator supports the METRIC, LAB, and FIELD unit systems.
tNavigator is a multi-platform software application written in C++. It can be installed on 64-bit Linux and Windows and runs on systems with shared or distributed memory as a console or GUI (local or remote) application. tNavigator runs on workstations and clusters. A cloud-based solution with full GUI capabilities via remote desktop is also available.
tNavigator contains the following functional modules, licensed separately:
• Black Oil simulator;
• Compositional simulator;
• Thermal simulator.
This document describes the Assisted History Matching module, which is fully integrated with the simulation modules (Black Oil simulator, Compositional simulator, Thermal simulator). The module can be used for:
• Sensitivity test;
• Probabilistic forecast;
• Production optimization;
• Risks analysis;
• Research validation.
The tNavigator User Manual contains the description of the physical model, the mathematical model, and the keywords that can be used in a dynamic model.
2. Defining Variables
Before running any algorithm, variables must be defined. Different parameters can be used as variables for Assisted History Matching (AHM) and uncertainty analysis, for example:
• permeability;
• RP data;
• aquifer parameters;
• well data;
• fault transmissibility;
• wells' trajectories;
• wells' parameters.
The number of variables defined by the user in a project is not limited. However, increasing the number of variables makes the AHM, uncertainty analysis, and optimization problems harder.
Variables can be set in two ways:
• using the keyword DEFINES (see 12.1.25) (for models in tN, E1, E3, IM, ST, GE formats) or the keyword VDEF (see 12.1.26) (for models in MO format);
• via standard scenarios in the GUI (see section 2.1).
The set of variables available in the GUI is limited to the standard scenarios. Using the keyword DEFINES (see 12.1.25) it is possible to define any parameter as a variable.
2.1. Standard scenarios of variable definition in the GUI
The following standard scenarios are available:
• Equilibrium;
• Relative Permeability (RP);
• Multiply Permeability by Regions;
• Multiply Permeability by Layers;
• Adjust KV/KH;
• Multiply Transmissibility by Regions;
• Multiply Pore Volume by Regions;
• Modify Scale Arrays;
• Multiply Faults;
• Other.
2.1.1. Equilibrium
Variables of scenario
In this history matching scenario the depth of the water-oil contact (WOC) and the depth of the gas-oil contact (GOC) are used as variables.
Availability of scenario in GUI
This scenario is available in the History Matching Variables Manager if the keyword EQUIL (see 12.16.2) is defined in the model's data file. In figure 3, the WOC in the first equilibrium region is defined as a variable for history matching.
Example
DEFINES
'WOC_EQLREG_1' 1877 1876 1878 REAL/
/
...
EQUIL
-- depth pres depth-wo pres-wo depth-go pres-go rsvd rvvd accuracy
1816 180 @WOC_EQLREG_1@ 0 1816 0 0 /
/
In this example the WOC depth is defined as a variable. Its initial value is 1877, its minimum value is 1876, and its maximum value is 1878. Using the keyword EQUIL (see 12.16.2) the WOC depth is set equal to the value of the variable WOC_EQLREG_1 in all blocks of the model.
2.1.2. Relative Permeability (RP)
If RP are defined by tables (e.g., SWOF (see 12.6.1), SGOF (see 12.6.2)), it is necessary to convert the tables into the Corey (or LET) correlation before running this scenario. To convert RP tables into the Corey (or LET) correlation, go to the Documents menu and select Approximate RP and Convert to Corey (or LET) correlations in the pop-up menu.
It is possible to define variables for regions as follows (see figure 4):
• Set by reg. The variable's value is set in the selected regions or in all regions. The variable's value will be set as the target parameter's value;
• Mult by reg. The target parameter's value will be multiplied by the variable's value in a region (regions);
• Plus by reg. The variable's value will be added to the target parameter's value in a region (regions).
Example
DEFINES
'K_RORW_M_2_4' 1 0.5 2 REAL /
'N_W_P_2_4' 0 -0.1 0.1 REAL /
'S_WCR_S_2_4' 0.39 0.29 0.49 REAL /
/
...
COREYWO
-- SWL SWU SWCR SOWCR KROLW KRORW KRWR KRWU PCOW NOW NW NP SPC0
0.238 1 0.296 0.254 0.8 0.52 0.28 1 0 3.3 2.4 0 -1 /
0.238 1 @S_WCR_S_2_4@ 0.23 0.8 @0.11 * K_RORW_M_2_4@ 0.22 1 0 4 @2.4 + N_W_P_2_4@ 0 -1 /
0.238 1 0.34 0.265 0.8 0.435 0.398 1 0 3.5 2.4 0 -1 /
0.238 1 @S_WCR_S_2_4@ 0.27 0.8 @0.217 * K_RORW_M_2_4@ 0.302 1 0 3.3 @1.8 + N_W_P_2_4@ 0 -1 /
0.238 1 0.3 0.266 0.8 0.58 0.344 1 0 2.8 2 0 -1 /
/
In Example 1, RP end points are defined as variables for the 2nd and 4th saturation regions. These variables are then used as parameters in the keyword COREYWO (see 12.6.3). In particular:
• the value of the variable N_W_P_2_4, varying from -0.1 to 0.1, is added to the value of n_W (equal to 2.4 in the 2nd region and 1.8 in the 4th). The summation is denoted by the letter P in the variable's name;
• the value of SWCR is set to the value of S_WCR_S_2_4 (denoted by S in the variable's name), varying from 0.29 to 0.49.
By default the value of the variable K_RORW_M_2_4 equals 1 (multiplication, denoted by M), the value of N_W_P_2_4 equals 0, and the value of S_WCR_S_2_4 equals 0.39. If all regions are selected, the range of regions is written in the variable's name; e.g., K_RORW_M_1TO5 means that regions from 1 to 5 are selected.
Example
COREYWO
-- SWL SWU SWCR SOWCR KROLW KRORW KRWR KRWU PCOW NOW NW NP SPC0
0.24 1 0.29 0.25 0.8 0.5 @0.28+K_RWR_P_1TO5@ 1 0 3 2 0 -1 /
0.24 1 0.39 0.23 0.8 0.11 @0.22+K_RWR_P_1TO5@ 1 0 4 3 0 -1 /
0.24 1 0.34 0.27 0.8 0.43 @0.4+K_RWR_P_1TO5@ 1 0 3 2 0 -1 /
0.24 1 0.35 0.28 0.8 0.2 @0.3+K_RWR_P_1TO5@ 1 0 3.3 2 0 -1 /
0.24 1 0.31 0.26 0.8 0.58 @0.34+K_RWR_P_1TO5@ 1 0 3 2 0 -1 /
/
Notice that for multiplication and summation operations the program automatically controls the values of variables defined by the user and does not allow values that would lead to nonphysical results. If a variable's value is out of the correct range, its color changes from black to red.
In Example 2 the relative permeability of water k_rWR equals 0.22 in the 2nd region; k_rWR will equal 0 if the value of the variable K_RWR_P_1TO5 is -0.22.
2.1.3. Multiply Permeability by Regions
In this scenario a permeability multiplier is defined as a variable for each region: M_PERM_FIPNUM_1 etc. For all multipliers the initial value is 1, the minimum value is 0.1, and the maximum value is 10. The variables are of REAL type.
During an assisted history matching process (see Example 1) PERMX, PERMY and PERMZ are multiplied by the selected variables in different FIPNUM regions. PERMX, PERMY and PERMZ are defined in the grid.inc file. In the model's data file the REGIONS section, in which FIPNUM regions are defined, follows the GRID section. Therefore, the FIPNUM property is included as a user-defined property (i.e., as an array) named IWORKFIPNUM (see the keyword IWORK, 12.3.6). This FIPNUM array can then be used in the arithmetic.
In the EDIT section, using the keyword ARITHMETIC (see 12.3.2), the permeabilities PERMX (see 12.2.13), PERMY (see 12.2.13) and PERMZ (see 12.2.13) are multiplied by these multipliers in each FIPNUM region.
Via the GUI it is possible to choose which properties are multiplied (a sketch of the corresponding data-file fragment is given after this list):
• all 3 properties, PERMX (see 12.2.13), PERMY (see 12.2.13) and PERMZ (see 12.2.13), are multiplied by a multiplier;
• PERMX (see 12.2.13) and PERMY (see 12.2.13) are multiplied by a multiplier;
• only PERMZ (see 12.2.13) is multiplied by a multiplier.
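Example 1 referenced above is not reproduced in this excerpt. A minimal sketch of what such a fragment could look like, by analogy with the pore volume and transmissibility examples below (the variable names follow the M_PERM_FIPNUM_* convention described above; the ranges are illustrative):
DEFINES
'M_PERM_FIPNUM_1' 1 0.1 10 REAL /
'M_PERM_FIPNUM_2' 1 0.1 10 REAL /
/
...
ARITHMETIC
PERMX = IF (IWORKFIPNUM == 1, PERMX * @M_PERM_FIPNUM_1@, PERMX)
PERMY = IF (IWORKFIPNUM == 1, PERMY * @M_PERM_FIPNUM_1@, PERMY)
PERMZ = IF (IWORKFIPNUM == 1, PERMZ * @M_PERM_FIPNUM_1@, PERMZ)
PERMX = IF (IWORKFIPNUM == 2, PERMX * @M_PERM_FIPNUM_2@, PERMX)
PERMY = IF (IWORKFIPNUM == 2, PERMY * @M_PERM_FIPNUM_2@, PERMY)
PERMZ = IF (IWORKFIPNUM == 2, PERMZ * @M_PERM_FIPNUM_2@, PERMZ)
/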
2.1.4. Multiply Permeability by Layers
Figure 6. Defining multipliers of permeability as variables in the selected layers via GUI.
Permeability in the Z direction, PERMZ (see 12.2.13), can also be calculated from permeability in the X direction, PERMX (see 12.2.13), using the formula PERMZ = PERMX * @KV_KH@, in the scenario Adjust KV/KH (see 2.1.5).
Scenario’s file automatically saved in the USER folder
Having run a history matching project in the USER folder the file with the following
name <project_name>_hm_mult_by_layers.inc is automatically saved. In this file
the keyword DEFINES (see 12.1.25), names of variables, ranges of variables, types of variables
are written. Names of variables are used between symbols @ in the keyword ARITHMETIC
(see 12.3.2). During an assisted history matching process each variable is substituted by the
value from the variable’s range defined in the keyword DEFINES (see 12.1.25).
Example
DEFINES
'MULT_PERMXYZ_1_12' 1 0.1 15 REAL /
'MULT_PERMXYZ_13_25' 1 1 10 REAL /
'MULT_PERMXYZ_26_38' 1 1 10 REAL /
'MULT_PERMXYZ_39_51' 1 0.5 5 REAL /
'MULT_PERMXYZ_52_64' 1 0.1 10 REAL /
'MULT_PERMXYZ_65_76' 1 0.1 10 REAL/
'MULT_PERMXYZ_77_89' 1 0.2 3 REAL /
'MULT_PERMXYZ_90_102' 1 0.1 10 REAL/
'MULT_PERMXYZ_103_115' 1 0.1 10 REAL/
'MULT_PERMXYZ_116_128' 1 0.1 10 REAL/
/
...
In this example multipliers of permeability are defined for groups of layers in the Z direction: MULT_PERMXYZ_1_12 is set for the group of layers from 1 to 12, etc. All variables are of REAL type with initial value 1. The range is different for each variable.
In the EDIT section, using the keyword ARITHMETIC (see 12.3.2), the permeabilities PERMX (see 12.2.13), PERMY (see 12.2.13) and PERMZ (see 12.2.13) are multiplied by these multipliers for each group of layers (e.g., for the group of layers (,,1:12)).
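The ARITHMETIC block matching the MULT_PERMXYZ variables above is not reproduced in this excerpt; by analogy with the PERMZ-only example below, it could look like the following sketch:
ARITHMETIC
PERMX(,,1:12) = PERMX(,,1:12)*@MULT_PERMXYZ_1_12@
PERMY(,,1:12) = PERMY(,,1:12)*@MULT_PERMXYZ_1_12@
PERMZ(,,1:12) = PERMZ(,,1:12)*@MULT_PERMXYZ_1_12@
PERMX(,,13:25) = PERMX(,,13:25)*@MULT_PERMXYZ_13_25@
PERMY(,,13:25) = PERMY(,,13:25)*@MULT_PERMXYZ_13_25@
PERMZ(,,13:25) = PERMZ(,,13:25)*@MULT_PERMXYZ_13_25@
...
/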
Example
DEFINES
'MULT_PERMZ_1_12' 1 0.1 15 REAL /
'MULT_PERMZ_13_25' 1 1 10 REAL /
'MULT_PERMZ_26_38' 1 1 10 REAL /
'MULT_PERMZ_39_51' 1 0.5 5 REAL /
'MULT_PERMZ_52_64' 1 0.1 10 REAL /
'MULT_PERMZ_65_76' 1 0.1 10 REAL/
'MULT_PERMZ_77_89' 1 0.2 3 REAL /
'MULT_PERMZ_90_102' 1 0.1 10 REAL/
'MULT_PERMZ_103_115' 1 0.1 10 REAL/
'MULT_PERMZ_116_128' 1 0.1 10 REAL/
/
...
ARITHMETIC
PERMZ(,,1:12) = PERMZ(,,1:12)*@MULT_PERMZ_1_12@
PERMZ(,,13:25) = PERMZ(,,13:25)*@MULT_PERMZ_13_25@
PERMZ(,,26:38) = PERMZ(,,26:38)*@MULT_PERMZ_26_38@
PERMZ(,,39:51) = PERMZ(,,39:51)*@MULT_PERMZ_39_51@
PERMZ(,,52:64) = PERMZ(,,52:64)*@MULT_PERMZ_52_64@
PERMZ(,,65:76) = PERMZ(,,65:76)*@MULT_PERMZ_65_76@
PERMZ(,,77:89) = PERMZ(,,77:89)*@MULT_PERMZ_77_89@
PERMZ(,,90:102) = PERMZ(,,90:102)*@MULT_PERMZ_90_102@
PERMZ(,,103:115) = PERMZ(,,103:115)*@MULT_PERMZ_103_115@
PERMZ(,,116:128) = PERMZ(,,116:128)*@MULT_PERMZ_116_128@
/
2.1.5. Adjust KV/KH
Example
DEFINES
'KV_KH' 0.1 0.1 1 REAL /
/
...
ARITHMETIC
PERMY = PERMX
PERMZ = PERMX * @KV_KH@
/
In this example the variable KV_KH is defined with initial value 0.1, minimum value 0.1 and maximum value 1; its type is REAL. In the EDIT section, using the keyword ARITHMETIC (see 12.3.2), the permeability PERMZ (see 12.2.13) is computed as PERMZ = PERMX * @KV_KH@.
2.1.6. Multiply Transmissibility by Regions
Example
DEFINES
'M_TRANSMISSIBILITY_FIPNUM_1' 1 0.1 15 REAL /
'M_TRANSMISSIBILITY_FIPNUM_2' 1 1 10 REAL /
'M_TRANSMISSIBILITY_FIPNUM_3' 1 0.1 10 REAL /
'M_TRANSMISSIBILITY_FIPNUM_4' 1 0.1 10 REAL /
/
...
ARITHMETIC
MULTX = IF (IWORKFIPNUM == 1, MULTX * @M_TRANSMISSIBILITY_FIPNUM_1@, MULTX)
MULTXM = IF (IWORKFIPNUM == 1, MULTXM * @M_TRANSMISSIBILITY_FIPNUM_1@, MULTXM)
MULTY = IF (IWORKFIPNUM == 1, MULTY * @M_TRANSMISSIBILITY_FIPNUM_1@, MULTY)
MULTYM = IF (IWORKFIPNUM == 1, MULTYM * @M_TRANSMISSIBILITY_FIPNUM_1@, MULTYM)
MULTZ = IF (IWORKFIPNUM == 1, MULTZ * @M_TRANSMISSIBILITY_FIPNUM_1@, MULTZ)
MULTZM = IF (IWORKFIPNUM == 1, MULTZM * @M_TRANSMISSIBILITY_FIPNUM_1@, MULTZM)
MULTX = IF (IWORKFIPNUM == 2, MULTX * @M_TRANSMISSIBILITY_FIPNUM_2@, MULTX)
MULTXM = IF (IWORKFIPNUM == 2, MULTXM * @M_TRANSMISSIBILITY_FIPNUM_2@, MULTXM)
MULTY = IF (IWORKFIPNUM == 2, MULTY * @M_TRANSMISSIBILITY_FIPNUM_2@, MULTY)
MULTYM = IF (IWORKFIPNUM == 2, MULTYM * @M_TRANSMISSIBILITY_FIPNUM_2@, MULTYM)
MULTZ = IF (IWORKFIPNUM == 2, MULTZ * @M_TRANSMISSIBILITY_FIPNUM_2@, MULTZ)
MULTZM = IF (IWORKFIPNUM == 2, MULTZM * @M_TRANSMISSIBILITY_FIPNUM_2@, MULTZM)
...
/
In this example a transmissibility multiplier is defined as a variable for each FIPNUM region: M_TRANSMISSIBILITY_FIPNUM_1 etc. The FIPNUM property is included as a user-defined array named IWORKFIPNUM (see the keyword IWORK, 12.3.6), which can then be used in the arithmetic.
In the EDIT section, using the keyword ARITHMETIC (see 12.3.2), the transmissibility multipliers MULTX, MULTXM, MULTY, MULTYM, MULTZ and MULTZM are multiplied by the corresponding variable in each FIPNUM region.
2.1.7. Multiply Pore Volume by Regions
Figure 9. Defining multipliers of pore volume as variables in the selected regions via GUI.
Example
DEFINES
'M_PORV_FIPNUM_1' 1.000000 0.100000 10.000000 REAL /
'M_PORV_FIPNUM_2' 1.000000 0.100000 10.000000 REAL /
'M_PORV_FIPNUM_3' 1.000000 0.100000 10.000000 REAL /
/
...
ARITHMETIC
PORV = IF (IWORKFIPNUM == 1, PORV * @M_PORV_FIPNUM_1@, PORV)
PORV = IF (IWORKFIPNUM == 2, PORV * @M_PORV_FIPNUM_2@, PORV)
PORV = IF (IWORKFIPNUM == 3, PORV * @M_PORV_FIPNUM_3@, PORV)
/
In this example a pore volume multiplier is defined as a variable for each region: M_PORV_FIPNUM_1 etc. For all multipliers the initial value is 1, the minimum value is 0.1 and the maximum value is 10. The variables are of REAL type.
During an assisted history matching process (see Example 1) PORV is multiplied by the defined variables (M_PORV_FIPNUM_1 etc.) in different FIPNUM regions. The effective pore volume of blocks, PORV (see 12.2.27), is modified in the EDIT section. However, in the model's data file the EDIT section is followed by the REGIONS section, in which FIPNUM regions are defined. Therefore, the FIPNUM property is included as a user-defined property (i.e., as an array) named IWORKFIPNUM (see the keyword IWORK, 12.3.6). This FIPNUM array can then be used in the arithmetic.
In the EDIT section, using the keyword ARITHMETIC (see 12.3.2), the pore volume PORV (see 12.2.27) is multiplied by the corresponding multiplier in each FIPNUM region.
2.1.8. Modify Scale Arrays
It is possible to define variables for regions as follows:
• Set by reg. The variable's value is set in the selected regions or in all regions. The variable's value will be set as the target parameter's value;
• Mult by reg. The target parameter's value will be multiplied by the variable's value in a region (regions);
• Plus by reg. The variable's value will be added to the target parameter's value in a region (regions).
Example
IWORKSATNUM
14*0 4*2 14*0
6*2 10*0 8*2
10*0 9*2 9*0
...
/
DEFINES
'SATNUM_SWCR_S_1TO5' 0.296 0.196 0.396 REAL/
'SATNUM_SWU_P_2_4' 0 -0.1 0.1 REAL/
'SATNUM_KRW_M_1_3_5' 1 0.5 2 REAL/
/
...
ARITHMETIC
SWCR = IF(IWORKSATNUM == 1, @SATNUM_SWCR_S_1TO5@, SWCR)
SWCR = IF(IWORKSATNUM == 2, @SATNUM_SWCR_S_1TO5@, SWCR)
SWCR = IF(IWORKSATNUM == 3, @SATNUM_SWCR_S_1TO5@, SWCR)
SWCR = IF(IWORKSATNUM == 4, @SATNUM_SWCR_S_1TO5@, SWCR)
SWCR = IF(IWORKSATNUM == 5, @SATNUM_SWCR_S_1TO5@, SWCR)
SWU = IF(IWORKSATNUM == 2, SWU+@SATNUM_SWU_P_2_4@, SWU)
SWU = IF(IWORKSATNUM == 4, SWU+@SATNUM_SWU_P_2_4@, SWU)
KRW = IF(IWORKSATNUM == 1, KRW*@SATNUM_KRW_M_1_3_5@, KRW)
KRW = IF(IWORKSATNUM == 3, KRW*@SATNUM_KRW_M_1_3_5@, KRW)
KRW = IF(IWORKSATNUM == 5, KRW*@SATNUM_KRW_M_1_3_5@, KRW)
/
In the second example (see Example 2) the variable SATNUM_SWCR_S_1TO5, with initial value 0.296 and varying from 0.196 to 0.396, is defined for all saturation regions. For the 1st, 3rd and 5th regions the variable SATNUM_KRW_M_1_3_5, with initial value 1 and varying from 0.5 to 2, is defined. For the 2nd and 4th regions the variable SATNUM_SWU_P_2_4, with initial value 0 and varying from -0.1 to 0.1, is defined. All variables are of REAL type.
During an assisted history matching process (see Example 2) SWCR, SWU and KRW are modified according to the defined operation type in different SATNUM regions. SWCR, SWU and KRW are defined in the file props.inc. However, in the model's data file the PROPS section is followed by the REGIONS section, in which SATNUM regions are defined. Therefore, the SATNUM property is included as a user-defined property (i.e., as an array) named IWORKSATNUM (see the keyword IWORK, 12.3.6). This SATNUM array can then be used in the arithmetic.
In the EDIT section, using the keyword ARITHMETIC (see 12.3.2):
• for all regions from 1 to 5 the SWCR value is set to the value of the variable SATNUM_SWCR_S_1TO5;
• for the 1st, 3rd and 5th regions the KRW value is multiplied by the variable SATNUM_KRW_M_1_3_5;
• for the 2nd and 4th regions the variable SATNUM_SWU_P_2_4 is added to the SWU value.
2.1.9. Multiply Faults
Example
DEFINES
'M_FAULT22' 1 0 1 REAL /
'M_FAULT23' 1 0 1 REAL /
'M_FAULT24' 1 0 1 REAL /
'M_FAULT21' 1 0 1 REAL /
/
MULTFLT
'FAULT22' @M_FAULT22@ /
'FAULT23' @M_FAULT23@ /
'FAULT24' @M_FAULT24@ /
'FAULT21' @M_FAULT21@ /
/
In this example a fault transmissibility multiplier is defined as a variable for each fault; each multiplier has initial value 1 and varies from 0 to 1. The multipliers are applied using the keyword MULTFLT.
2.1.10. Other
Variables of scenario
Variables defined in the model’s data file using the keyword DEFINES (see 12.1.25) (see
Example 1) will be included in the tab Other of the History Matching Variables Manager
menu.
Example
DEFINES
'L1' 50 50 50 REAL /
'L2' 50 50 50 REAL /
'DAYS' 100 100 100 INTEGER /
'AZIMUTH' 60 60 60 REAL /
'PORO_FILENAME' 'PORO_1' 1* 1* STRING /
/
...
INCLUDE
@PORO_FILENAME + ".inc"@ /
Variables may be of the following types:
• REAL – real number;
• INTEGER – integer number (min and max must also be integer numbers);
• STRING – string.
In this example four variables L1, L2, DAYS and AZIMUTH are defined. Three of them are of REAL type, while DAYS is of INTEGER type. For all variables the initial value (first number) and the variation range (last two numbers) are defined.
The variable PORO_FILENAME is of STRING type and its initial value is set to PORO_1. Using the keyword INCLUDE (see 12.1.82), files with the extension .inc can be included in the model in place of a STRING variable. In this example the file named PORO_1.inc, containing the porosity property, is included into the project in place of the variable PORO_FILENAME.
Values of a STRING variable can be controlled by an algorithm or set by external grid search (see figure 13). When external grid search is used, a series of experiments is created: for each value of the STRING variable a separate experiment is created over the other model variables. External grid search can be used with all experiments (see section Experimental Design) and algorithms (see section Optimization Algorithms).
A variable of STRING type can be controlled by an algorithm for the Custom, Grid search, Latin hypercube and Monte Carlo experiments and for the Differential Evolution and Particle Swarm Optimization algorithms. In this case the algorithm finds an optimal combination of variables including the STRING variable.
In the window Create New Experiment (see figure 14), double-click the Values field of the variable PORO_FILENAME. In the pop-up dialog Configure Values for "PORO_FILENAME" specify the names of the loaded files without the .inc extension. As can be seen in figure 14, the files PORO_1.inc, PORO_2.inc, PORO_3.inc and PORO_4.inc will be loaded into the model successively. For each value of the STRING variable (PORO_1.inc, PORO_2.inc, etc.) a separate experiment is created over the other model variables L1, L2, DAYS.
2.2. File structure of a history matching project
In Example 1, in the data file of the experiment e1.data, the experiment's variables are written using the keyword PREDEFINES (see 12.1.27).
Example 2 shows a file of an experiment's variant. In this file the values of the permeability multipliers M_PERM_FIPNUM_1 etc. are written in the keyword PREDEFINES (see 12.1.27). These values are substituted for @variable_name@ in the base model. In addition, the minimum and maximum values and the type of each variable are defined. The substitution is carried out using the keyword OPEN_BASE_MODEL (see 12.1.28).
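Examples 1 and 2 themselves are not reproduced in this excerpt. A minimal sketch of what a variant file could roughly look like, assuming illustrative variable values (the exact layout of PREDEFINES and OPEN_BASE_MODEL may differ; see 12.1.27 and 12.1.28):
PREDEFINES
'M_PERM_FIPNUM_1' 1.35 0.1 10 REAL /
'M_PERM_FIPNUM_2' 0.87 0.1 10 REAL /
/
OPEN_BASE_MODEL
/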
The calculation results of the experiment's variants (with different values of variables) are saved in the RESULTS folder created inside the <experiment_name> folder.
Right-click a model variant in the list of variants and select Save as... to save the variant as a standard data file in which the corresponding values of variables are substituted.
• – delete experiment’s files while keeping the possibility to restore it. All experi-
ment’s files will be deleted. The experiment’s entry in the project will remain. Experi-
ment configurations and list of variants will be available.
2.3. Defining Variables for models with Reservoir Coupling option
It is possible to:
• open and run a MASTER model and the SLAVE model coupled with it simultaneously as independent models;
• open and run several variants of a MASTER model sharing SLAVE models.
Variables can be specified both in the MASTER model and in SLAVEs via the GUI, using the button History Matching Variables Manager, or using the keyword DEFINES (see 12.1.25).
!
Names of variables specified in MASTER and SLAVE models should be different. If variable names in the MASTER and SLAVE models coincide, the values of the variables in the SLAVE models will be set equal to the values specified in the MASTER model. If different SLAVE models contain variables with the same name, the MASTER model takes the variable value from the first SLAVE model it reads, and this value is used in all SLAVE models in each model variant. After reading information from the SLAVE models, values of variables are updated only in the MASTER model.
3. Experimental Design
Before launching any optimization algorithm it is necessary to carry out a sensitivity test of the history matching project to select variables. For this purpose it is recommended to run one or more of the following experiments:
• Sensitivity analysis;
• Custom;
• Grid search;
• Latin hypercube;
• Monte Carlo;
• Tornado;
• Plackett-Burman design;
• Box-Behnken design.
Figure 15 shows the dialog used to create an experiment. For convenience, the dialog provides the following buttons:
• Select variable by filter. Allows including in the experiment only the variables selected by the filter (see Implementation of Variable Filter);
• Hide unused variables. Unticked variables will be hidden in the Create New Experiment dialog.
Examples of how to work with experiments in the AHM module are described in the training tutorials.
If sensitivity analysis shows that the variables and/or their ranges are not satisfactory (see the example in figure 17), i.e. the calculated data are far from the historical data, then continuing the search with the selected variables and ranges may waste simulation time without finding a good history matching case. In this case, try other variables and/or change their ranges, and run an experiment again to evaluate the sensitivity of the variables before launching optimization algorithms.
3.2. Custom
The user can manually define values of variables for each of the experiment's variants in the GUI.
Figure 19 shows that for each of the experiment's variants (Variant #0, Variant #1, Variant #2, etc.) the values of variables are defined by the user.
• Triangular (see figure 22(d)). The triangle peak is located at the base value of the variable;
• Discrete (see figure 23). It is required to specify a Value of the variable and the Probability that the variable takes the specified value (button Add new value and probability). To normalize the probabilities press the button Normalize.
3.6. Tornado
The Tornado experiment is used to build a Tornado diagram. In this experiment each variable in turn is set to its min and max values while the other variables keep their default values. If M parameters are varied, then 2M + 1 variants are generated (including the base model); for example, for M = 3 variables, 2 · 3 + 1 = 7 variants are generated.
For a Tornado experiment a Tornado diagram can be calculated and viewed on the tab Results. Analysis.
3.7. Plackett-Burman design
The following variants of the design are available:
• General Plackett-Burman;
• Folded Plackett-Burman.
An example is shown in figure 27. The maximal value of a variable is denoted by "+", the minimal one by "-". Columns correspond to variables (there are 3), rows to model variants (there are 4). Each combination of levels for any pair of variables appears exactly once.
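Figure 27 itself is not reproduced in this excerpt. A design with the properties just described (3 variables, 4 variants, every pair of levels appearing exactly once for any pair of variables) looks, for example, as follows:
Variant   x1   x2   x3
1         +    +    -
2         +    -    +
3         -    +    +
4         -    -    -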
3.8. Box-Behnken design
The standard factorial method can be considered as a simple grid search (see 3.3): for k variables, each taking p values, p^k experiments are required. The Box-Behnken design significantly decreases the required number of experiments.
In the Box-Behnken method each variable is placed at one of three values: minimal, maximal or base. Blocks of variables are created using the Box-Behnken approach described in detail in [11]. Variables placed in a block are put through all combinations of their maximal and minimal values, while variables outside the block are kept at their base values.
!
In the original paper [11] the approach to constructing blocks of variables is described only for fewer than 16 variables. Therefore, for cases not covered in [11], all possible combinations of blocks of two variables are considered.
The Box-Behnken design scheme for three variables is shown in figure 28. In this case variable values are placed at the midpoints of the cube's edges and at its center (-1 corresponds to the minimal value, 0 to the base value, 1 to the maximal value). Three blocks of variables are created. In the 1st block the variable x2 is kept at its base value while all possible combinations of the max and min values of the variables x1 and x3 are run. In the 2nd block x3 is kept at its base value while all possible combinations of the max and min values of x1 and x2 are run. In the 3rd block x1 is kept at its base value while all possible combinations of the max and min values of x2 and x3 are run. Thus the total number of variants equals 2^2 × 3 + 1 = 13. By contrast, a three-level full factorial design would require 3^3 = 27 variants.
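Written out explicitly, the 13 variants of the scheme just described are, in coordinates (x1, x2, x3):
Block 1 (x2 at base):  (-1, 0, -1), (-1, 0, +1), (+1, 0, -1), (+1, 0, +1)
Block 2 (x3 at base):  (-1, -1, 0), (-1, +1, 0), (+1, -1, 0), (+1, +1, 0)
Block 3 (x1 at base):  (0, -1, -1), (0, -1, +1), (0, +1, -1), (0, +1, +1)
Center point:          (0, 0, 0)
Total: 4 + 4 + 4 + 1 = 13 variants.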
If only one or two variables are specified in a model, one block of one variable or two blocks of one variable will be created, and the Box-Behnken design becomes similar to the Tornado experiment (see 3.6).
4. Objective Function
To run any optimization algorithm, an objective function (the criterion for evaluating model quality) should be configured. The main task of the objective function (hereinafter OF) is to help choose the best model variant for the given parameters. In tNavigator two objective functions are available:
• Custom OF;
• OF of experiment.
• Differential Evolution;
• Simplex Method;
• Forecast optimization.
Formulas for the calculation of these objective functions are presented in section 6.4.1. These objective functions are not editable.
To create a customizable objective function, select Custom Objective Function from the drop-down menu or press the corresponding button, and specify a name for the objective function. The OF can be deleted, renamed, duplicated, and loaded from another project using the corresponding buttons.
When loading an objective function from another project, tick the required functions in the list. Loaded objective functions will appear in the list of objective functions in the tab Objective Functions (see figure 30). If the objective function was created in another project for another model, the transfer results in the loss of some settings due to inconsistency of objects and time steps between the models. For example, settings for historical points at nonexistent time steps will be skipped.
Next, the type of the objective function and its terms are specified. To add (delete) a term of the objective function, press the button Add term / Delete term. Several terms can be added simultaneously using the button Add several terms. For each term of the objective function, Objects and the corresponding Parameters should be selected. The available objects are:
• Wells;
• Groups;
• Field;
• RFT/PLT;
To select only injectors press the button Check all injectors; to select only producers, Check all producers. To select only particular wells, load the corresponding well filter using the button Apply filter.
Oil total, water total, liquid total, etc. can be selected as parameters. For each parameter a Deviation (acceptable mismatch) and a Deviation Type (relative or absolute) are specified (see section 4.2.2).
For each object it is possible to specify or calculate its weight in an objective function. To specify a Weight, double-click the weight value corresponding to an object. Weights of objects can be calculated based on the historical data of the selected parameter (see section 4.2.3): select a Weight Parameter from the drop-down menu and press the button Calculate.
Historical values and the absolute and relative deviations of the obtained results from the historical ones are shown in the table on the right after clicking the corresponding button on the right panel. To visualize the difference between historical and calculated results, press the corresponding button.
If the objective function is properly specified, "Ok" is displayed at the bottom of the dialog. Values of the created objective function for different model variants can be seen in the tab Results Table.
The aim of optimization algorithms is to find a minimum of the experiment's objective function. After running an optimization algorithm, the configuration of the experiment's objective function cannot be changed.
A custom objective function is generally used for the analysis of results. During the analysis you can change the parameters of a custom objective function and compare it with the experiment's OF. Moreover, a custom objective function can be used to exclude historical points from consideration.
Notice that it is possible to use a configured objective function as the objective function of an experiment for optimization algorithms. To do this, press the button Select objective function as shown in figure 32.
Thus, the objective function is a single function combining different terms at the same time: rates, pressure, and watercut for wells; parameters for groups and the field; RFT pressure, etc.
• w_obj is the weight of the object (well, group, etc.). It can be calculated automatically based on historical data of the selected parameter (see section 4.2.3);
• l_n is the length of time step n (from the selected step k to the last one, N).
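The manual's full OF formula is not reproduced in this excerpt; from the definitions above and the normalization described below, it has the structure of a weighted, normalized sum over objects and time steps (a sketch, not the manual's exact expression):
$$ OF = \frac{\sum_{obj} w_{obj} \sum_{n=k}^{N} l_n \, S_n(obj)}{\sum_{obj} w_{obj} \sum_{n=k}^{N} l_n}, $$
where S_n(obj) is the deviation term defined next.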
If the Function Type is History Matching, Quadratic, then the deviation S is calculated as:
$$ S = \left( \frac{value(H) - value(C)}{g} \right)^2, \text{ if Deviation Type is absolute;} $$
$$ S = \left( \frac{value(H) - value(C)}{g \cdot value(H)} \right)^2, \text{ if Deviation Type is relative,} $$
where
• g is the deviation value specified by the user. For example, if Deviation is set equal to 0.05 and Deviation Type is relative, then S lower than 1 means that the deviation between historical and calculated values does not exceed 5%.
If the Function Type is History Matching, Linear, then the deviation S is calculated as:
$$ S = \frac{|value(H) - value(C)|}{g}, \text{ if Deviation Type is absolute;} $$
$$ S = \frac{|value(H) - value(C)|}{g \cdot value(H)}, \text{ if Deviation Type is relative.} $$
!
If at some time step a historical rate (of oil, water or gas) is zero, then this step is not taken into account when calculating the objective function.
An objective function (OF) is automatically normalized by terms, objects and time steps, which makes the OF value independent of their number in the model.
Due to this normalization, an OF value equal to one means that for all OF terms the mismatch between historical and calculated values is of the same order as the measurement error.
• Press the button Show control points. The difference between historical and calculated values, in absolute or relative terms, is shown as "segments" on the historical graph. The size of a "segment" can be defined by the user when configuring the objective function. For example, in figure 34 the Deviation Type is relative and the Deviation equals 0.05; the historical data error then varies from -0.05 to 0.05.
Press the corresponding button to see the historical values and the absolute/relative deviations in the table on the right. By dragging the beginning/end of a "segment" you can modify the absolute and relative deviations from the historical values in the table. The values of absolute or relative deviations can also be edited in the table; the lengths of the corresponding "segments" in the graph will then change.
• Right-click the selected point to exclude/include points one by one. Press and hold Shift to select points inside a rectangle;
• Run a new experiment (e.g., using the button ) and select as an objective function of
experiment the created OF using the button Select objective function (see figure 32).
Model variants can be filtered by relative or absolute deviation values using the button Hide unmatch models (see figure 35). If a model variant is outside the interval given by the deviation value for the selected parameter, the variant is hidden and excluded from further analysis. As can be seen in figure 35, only variants whose total water deviation does not exceed 10% are shown; the other model variants are hidden.
Figure 35. Hiding variants having a total water deviation higher than 10%.
Well data are loaded using the format "Well" "Date" "Pressure", where the pressure can be:
• BHP. This parameter is compared with the calculated BHP when the objective function is calculated;
• THP. This parameter is compared with the calculated THP when the objective function is calculated;
• WBP. This parameter is compared with the calculated WBP, WBP4, WBP5, or WBP9 when the objective function is calculated.
Field data are loaded using the format "FIELD" "Date" "Pressure". The loaded average pressure is compared with the calculated average pressure when the objective function is calculated.
where:
• the summation parameter p runs over the set of all selected parameters (water, oil, watercut, and so on);
• w_p is the parameter weight;
– Constant rate duration. X is the number of days during which the specified well (or group) rate stays constant (i.e., does not deviate from the target rate value). X is the difference between two time moments: X = t_2 − t_1, t_2 > t_1. The well (or group) control rate (target value) is specified automatically and is equal to the rate at the zero time step of the forecast model. Calculated and target rates are compared with an accuracy (Rate Accuracy, by default 1%) specified in the Objective Function Configuration dialog. If the calculated rate deviates from the target rate by less than 1%, the well (or group) rate is considered constant.
Settings of Economic calculations can be accessed via the Create NPV Script button.
When the settings are defined, the NPV value can be set as an Objective Function for an Optimization Experiment (figure 43).
Additionally, a Python script for the NPV calculation will be generated with the defined values as its arguments. This script will be available in the Graph calculator and can be edited manually in order to implement user-defined settings of the economic calculation algorithm.
NPV formula:
$$ NPV = IC + \sum_{t=1}^{N} \frac{CF_t}{(1+i)^t} $$
where:
• CF – cash flow; CF_t is the cash flow at time step t (t = 1, ..., N);
• IC – initial capital. This parameter usually refers to the initial investments (at time step 0) and hence, in the general case, should be defined as a negative value;
• i – discount rate. It is used to discount future cash flows to a single present value;
• Discount starting step – the time step from which the discount begins to be applied.
CF_t = FI − CAPEX, where:
• FI – finance income (income from sales). Income includes sales on both domestic and foreign markets (tab Oil and Gas prices, figure 39). FI is calculated as the difference between profit before tax and profit tax.
Oil and gas prices can be increased by a given percentage each time step automatically: specify the percentage and press Apply (figure 39). To decrease a price by a given percentage, specify a negative percentage value.
• CAPEX – capital expenditures. Includes the cost of drilling new wells and sidetracks. Specify the cost of a new well and the cost per meter of the vertical, horizontal, and deviated parts of the wellbore (tab Wells, figure 40).
PBT – profit before tax: PBT = GP − TAX − OPEX, where:
• GP – gross profit (sales profit);
• TAX – VAT (value-added tax), export duty, and transport costs for export (tab Taxes, figure 41);
• OPEX – operating expenses: the cost of oil and gas production and water injection (tab Prod. expenses, figure 42), and salary and insurance.
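As a minimal illustration of the NPV formula above (this is not the script generated by tNavigator; the function and its arguments are hypothetical):

# Sketch of the NPV formula: NPV = IC + sum_t CF_t / (1 + i)^t.
def npv(ic, cash_flows, i, discount_start=1):
    """ic: initial capital (usually negative); cash_flows[t-1] is CF_t;
    i: discount rate; discount_start: step from which discounting applies."""
    total = ic
    for t, cf in enumerate(cash_flows, start=1):
        # Discounting is applied from the discount starting step onwards.
        factor = (1 + i) ** t if t >= discount_start else 1.0
        total += cf / factor
    return total

# Example: IC = -100, three cash flows of 50, 10% discount rate.
print(npv(-100.0, [50.0, 50.0, 50.0], 0.10))  # ~24.34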
This script will be available in the Graph calculator and can be edited manually in order to implement user-defined settings of the economic calculation algorithm.
The User Graph exported by the script can be used as an Objective Function for an Optimization Experiment (figure 45).
Figure 45. Assigning the NPV value calculated via a Python script as an Objective Function.
Example 1. Object – Field; parameters – Oil Total (FOPT), Water Total (FWPT); deviation (g) – 0.05; deviation type – relative. Objective function:
$$ S = \sum_{n=0}^{N} l_n \left[ \left( \frac{FOPT - FOPTH}{0.05 \cdot FOPTH} \right)^2 + \left( \frac{FWPT - FWPTH}{0.05 \cdot FWPTH} \right)^2 \right], $$
where FOPT and FWPT are the calculated field oil and water totals, FOPTH and FWPTH are the historical ones, l_n is the length of time step n, and the sum over p = oil, water has been written out explicitly as the two terms.
This OF can be used at the beginning of the model history matching process, when the goal is to match total parameter values for the field. Afterwards, another OF providing tighter history matching criteria can be used.
Another example, with tighter history matching criteria for rates, is given below in Example 2.
Example 2 (figure 47). Object – Wells; 7 producers in the model; parameters – Oil Rate, Water Rate; Weight (W) – 1; deviation (g) – 1; deviation type – relative; K = 0 – the sum is taken over time steps starting from the zero time step.
Objective function:
$$ S = \sum_{j=1}^{7} 1 \cdot \sum_{n=0}^{N} l_n \left[ \left( \frac{WOPR - WOPRH}{1 \cdot WOPRH} \right)^2 + \left( \frac{WWPR - WWPRH}{1 \cdot WWPRH} \right)^2 \right], $$
where WOPR and WWPR are the calculated oil and water rates of well j, WOPRH and WWPRH are the historical ones, l_n is the length of time step n, and the sum over p = oil, water has been written out explicitly as the two terms.
5. Optimization Algorithms
This section describes the algorithms for Assisted History Matching and Uncertainty Analysis. The following optimization algorithms are available:
• Differential Evolution;
• Multi-objective Differential Evolution;
• Simplex method;
• Particle Swarm Optimization;
• Multi-objective Particle Swarm Optimization;
• Ensemble approach.
To run any algorithm it is required to define variables and an objective function in advance. Moreover, before running an optimization algorithm it is recommended to carry out a sensitivity test of the variables using the experiments described above.
Termination criteria of the algorithms are described in section 5.2.
Figure 15 shows the dialog used to create an experiment. For convenience, the dialog provides the following buttons:
• Select variable by filter. Allows including in the experiment only the variables selected by the filter (see Implementation of Variable Filter);
• Hide unused variables. Unticked variables will be hidden in the Create New Experiment dialog.
5.1. Creating New Experiment From Selected Variants
When configuring a new experiment it is possible to:
• include/exclude variables;
If the values of a variable differ for at least two selected variants, the variable is considered "important" and is marked orange (see figure 48). "Important" variables are included in the experiment by default (i.e., they are ticked). Variables having the same value for all selected variants are considered "unimportant" and are not ticked by default. When configuring a new experiment you can include or exclude variables; however, even if unticked, initially "important" variables stay marked orange until the end of the configuration process.
The variable value taken from the first of the initially selected variants is used as the variable's base value (Base). If required, the base value can be changed.
The maximum and minimum of a variable used in the initial experiment are set as Min. and Max. by default. If the selected variants are taken from different experiments, then the minimal Min. and the maximal Max. over all experiments are set as Min. and Max. for the variable. The Min. and Max. values can be changed; however, the new values must be specified such that all variable values from the selected variants are included in the new range. To see a variable's variation range, hover over the line containing the variable (see figure 48): a tip shows the variable name and its value (if the value is the same for all variants) or its variation range (from minimum to maximum). Notice that none of the modifications mentioned above affect the variables or calculations of the initially selected variants; they are only used to create variants of the new experiment.
For example, the variation range of the variable M_PERM_FIPNUM_4 shown in figure 48 is from 0.25215 to 2.17595 over all selected variants. By default, Min. and Max. of the variable are 0.1 and 10, respectively, taken from the initial experiment. If it is required to narrow the variable range, the tightest possible Min. and Max. values are 0.25 and 2.176 (see figure 49). Notice that Min. cannot be higher than 0.25215 (e.g., 0.26), since the variable value 0.25215 would then fall outside the new range; likewise, Max. cannot be lower than 2.17595 (e.g., 2.17), since the variable value 2.17595 would fall outside the new range.
Figure 48. Setting variables when creating a new experiment from selected variants.
Figure 49. Variations of variable Min. and Max. values when creating a new experiment from
selected variants.
5.2. Termination criteria of algorithms
An optimization algorithm is stopped if at least one of the four conditions described below is satisfied.
Objective Value to Reach
Define the target value of the OF. The algorithm terminates if there is a model variant for which the OF value is less than the target value. The default value is zero.
Objective Function Variation
Define the value of the variation of the objective function (in percent). The algorithm terminates if the deviation of the OF from the average value becomes less than the defined value.
Variables Variation
Define the variables' variation (in percent). The algorithm terminates if the deviation of each variable from its average characteristic becomes smaller than the defined value; while the range remains sufficiently wide, the algorithm continues its work.
Stop on slow improvement
Define the number of iterations (Iteration Count) and the value of the OF improvement (Improvement Value) in percent. The algorithm terminates if the objective function value has not improved by the specified percentage after the selected number of iterations.
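For example (illustrative values): with Iteration Count = 10 and Improvement Value = 1%, the algorithm stops if after 10 iterations the best OF value has improved by less than 1%.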
Clarification.
The range of OF values in a population is the difference between the maximal and minimal values. The average characteristic is the mean of the maximal and minimal values in the population. The notion of "population" (the set of model variants) used for the stopping criterion check differs depending on the type of optimization algorithm.
• Particle Swarm Optimization (classic) – the size of the population is predefined by the algorithm; it can be changed manually using Advanced settings (parameter swarm size, default 14).
• Particle Swarm Optimization (flexi) – the size of the population is equal to the product of the swarm size and one minus the proportion of explorers; it can be changed manually using Advanced settings (default: population size is 10 · (1 − 0.5) = 5).
• Response Surface – the two last calculated variants are taken to check the algorithm stopping criterion.
The detailed theoretical description of Proxy models construction is available in the section
Proxy model.
5.5. Differential Evolution
The mutant vector is composed as the sum of the base vector and a few differences of random vectors from the population, multiplied by the parameter F.
The selection of the base vector from the population and the number of differences can also be customized.
If the number of differences is 2, the formula for calculating the mutant vector is:
$$ V_{mutant} = V_{base} + F \cdot (V_{random1} - V_{random2}) + F \cdot (V_{random3} - V_{random4}) $$
Once the objective function of the sample vector is calculated, it is compared with the objective function of the target vector. If the sample vector provides a better objective function value, it replaces the target vector in the population. DE then proceeds to the next iteration, until the number of iterations exceeds the maximum defined by the user.
Notation.
Niter – maximal number of simulator launches (number of iterations)
Vsample – sample vector (calculated at an iteration)
Vtarget – target vector (used in the creation of the sample vector; replaced by it if the sample vector is better)
Vbase – base vector (used in the creation of the mutant vector)
Vmutant – mutant vector (used in the creation of the sample vector)
Vrandom1, Vrandom2, ... – random vectors (different from the target and base vectors; used in the creation of the mutant vector)
Np – population size (number of vectors from which the target, base and random vectors are selected)
F – weight of differences (used in the creation of the mutant vector)
Cr – crossover (component replacement probability, used in the creation of the sample vector)
N_diff – number of differences (number of random vector pairs used in the creation of the mutant vector)
Random_seed – random number (determines the initial state of the pseudorandom number generator)
N_sim_calc – number of simultaneously calculated models (for a parallel run)
Parameters connection.
The algorithm parameters are connected by two main formulae used in the creation of the sample vector at each iteration. They were mentioned in section 5.5.1; rewritten using the notation above:
$$ V_{mutant} = V_{base} + F \cdot \sum_{i=1}^{N\_diff} \left( V_{random,2i-1} - V_{random,2i} \right) $$
Each component of Vsample is taken from Vmutant with probability Cr (one randomly chosen component is always taken from Vmutant); otherwise it is taken from Vtarget.
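As a compact illustration of the scheme just described, below is a minimal sequential Python sketch of one DE iteration (tNavigator's actual implementation is asynchronous and parallel; the names de_step and f_obj are illustrative):

import random

def de_step(population, f_obj, F=0.8, Cr=0.9, n_diff=1):
    """One DE iteration; population must hold at least 2 + 2*n_diff vectors."""
    dim = len(population[0])
    new_pop = []
    for t, target in enumerate(population):
        # Pick the base vector and 2*n_diff random vectors, all distinct from the target.
        others = [v for i, v in enumerate(population) if i != t]
        base, *rnd = random.sample(others, 1 + 2 * n_diff)
        # Mutation: base vector plus F-weighted differences of random pairs.
        mutant = [base[k] + F * sum(rnd[2*i][k] - rnd[2*i+1][k]
                                    for i in range(n_diff))
                  for k in range(dim)]
        # Crossover: each component comes from the mutant with probability Cr;
        # one randomly chosen component is always taken from the mutant.
        j_rand = random.randrange(dim)
        sample = [mutant[k] if (random.random() < Cr or k == j_rand) else target[k]
                  for k in range(dim)]
        # Selection: the sample replaces the target if it is not worse.
        new_pop.append(sample if f_obj(sample) <= f_obj(target) else target)
    return new_pop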
Parameters domain.
Niter – not less than Np + 1
Np – not less than 2 + 2 · N_diff
• Np
The number of points used in the creation of a new point at each iteration. At the beginning of the algorithm's work the initial population is scattered through the search space.
Increasing Np leads to a better algorithm "sense" but also increases the inertness of the population. Thus the probability of finding the global minimum increases, but the rate of convergence to a local minimum slows down. The connection between the rate of convergence and the population size is ambiguous.
• F
The weight of the difference vectors. This parameter determines the deviation of the mutant vector from the base vector. With a small value of F, premature convergence may occur. Small values of F localize the search near the current population points, which suits the local minimization problem. Large values of F make it possible to examine the search space far beyond the current population bounds but decrease the rate of convergence; such values correspond to the global minimization problem. However, neither too small nor too large values of F provide good results. Note also that the F parameter changes while the algorithm is working.
• Cr
Crossover. This parameter determines the probability used in the creation of the sample vector, when components of the target vector are either replaced by components of the mutant vector or stay the same. For each sample vector one randomly chosen component is always taken from the mutant vector; the other components are taken from the mutant vector with probability Cr. Thus, the larger the value of Cr, the more components of the target vector are replaced. Small values of Cr suit separable problems; large values suit nonseparable problems. History matching problems are generally nonseparable. Note, however, that overly large values of Cr do not provide good results.
• N_sim_calc
Parameter for a parallel run. It determines the number of variants that will be calculated simultaneously.
The parallel version of DE is asynchronous. A parallel run enables the algorithm to examine the properties of the current population more deeply, and the scatter band of generated sample points becomes wider. With an increasing number of iterations, this turns the local search version, to an extent, towards a global search.
It may be good to choose random selection of the target and base vectors when using the advanced version with a parallel run.
It is recommended to set N_sim_calc = Np if the corresponding computational power is available. A speedup and/or an improvement in the quality of convergence may be obtained by using N_sim_calc ≤ 2 · Np.
There are two ways of handling Niter when using a parallel run. The first is to increase the number of iterations (essential if the initial Niter is small) to obtain a quality improvement with the same calculation time as in the sequential version (increasing Niter within the bound of Niter · N_sim_calc). The second is to leave the number of iterations unaltered (e.g., if the initial Niter is large enough) to decrease the running time purely through parallelism.
On the whole, when using a parallel run an increase in the number of iterations is desirable (possibly by less than a factor of N_sim_calc).
This version is aimed at fast convergence to a local minimum. It does not provide the possibility to analyse the sensitivity of the objective function in the search space or to search for the global minimum.
Recommended values of Niter (for a sequential run): 30-60.
Recommended values of N_sim_calc: 6 (acceptable values ≤ 12). A corresponding increase of Niter is desirable.
This version is intended for the global minimum search. It requires a large number of iterations but makes it possible to find the highest-quality points in the search space.
Recommended values of Niter: more than 200.
Recommended values of N_sim_calc: 6 or 12 (acceptable values ≤ 24). A corresponding increase of Niter is desirable.
• Advanced version.
5.6. Multi-objective Differential Evolution algorithm
1. Initialization of the population: generation of a set of vectors from the search space. One vector corresponds to the base model. The number of generated vectors is specified by the Population size (by default 12); it can be modified using the Advanced Parameters option.
2. Construction of the Pareto front. Initialization of the best global solution in the Pareto front (external archive). The Pareto front can be visualized in the crossplot (see section 6.7.1).
6. Update of the Pareto front. Solutions are replaced arbitrarily when the archive is full.
If any of these conditions is fulfilled, the algorithm is terminated. Otherwise the process continues from step 3.
Hence it belongs to the general class of direct search methods. Objective function can be
set in combination with this algorithm.
The Nelder-Mead method is simplex-based. A simplex S ⊂ Rn is the convex hull of n + 1
vertices x0 , x1 , ..., xn ∈ Rn . For example, a simplex in R2 is a triangle, and in R3 it is a
tetrahedron.
A simplex-based direct search method begins with a set of points x0 , ..., xn ∈ Rn that are
considered as the vertices of a working simplex S , and the corresponding set of function
values at the vertices fi = f (xi ), i = 0, ..., n. The initial working simplex S has to be
nondegenerate, i.e., the simplex points must not lie in the same hyperplane.
The method then performs a sequence of transformations of the working simplex S , aimed
at decreasing the function values at its vertices. At each step, the transformation is determined
by computing one or more ”test” points, together with their function values, and by comparison
of these function values with those at the vertices.
This process is terminated when the working simplex S becomes sufficiently small in
some sense, or when the function values fi are close enough in some sense (provided f is
continuous).
The Nelder-Mead algorithm typically requires only one or two function evaluations at each
step, while many other direct search methods use n or even more function evaluations.
5.7.2. Algorithm
Initial simplex.
The initial simplex S is usually constructed by generating n + 1 vertices around a given
input point xin ∈ Rn . In practice, the most frequent choice is x0 = xin , to allow proper
restarts of the algorithm. The remaining n vertices are then generated to obtain one of two
standard shapes of S :
• S is right-angled at x0 , based on coordinate axes: xi = x0 + hi ei , i = 1, ..., n, where
hi is a step size in the direction of the unit vector ei ;
• S is a regular simplex, where all edges have the same specified length.
1. Ordering. Determine the indices h, s, l of the worst, second worst and best vertices
xh , xs , xl , respectively, in the current working simplex S :
fh = max_i fi ,   fs = max_{i≠h} fi ,   fl = min_i fi .
In some implementations, the vertices of S are ordered with respect to the function
values, to satisfy f0 ≤ ... ≤ fn . Then l = 0, s = n − 1, h = n.
2. Centroid. Calculate the centroid c of the best side – this is the one opposite the worst
vertex:
c = (1/n) ∑_{i≠h} xi .
3. Transformation. Compute the new working simplex from the current one. First, try to
replace only the worst vertex xh with a better point by using reflection, expansion or
contraction with respect to the best side. All test points lie on the line defined by xh and
c, and at most two of them are computed in one iteration. If this succeeds, the accepted
point becomes the new vertex of the working simplex. If this fails, shrink the simplex
towards the best vertex xl . In this case n new vertices are computed.
Simplex transformations in the Nelder-Mead method are controlled by four parameters:
α for reflection, β for contraction, γ for expansion and δ for shrinkage. They should
satisfy the following constraints:
α > 0,
0 < β < 1,
γ > 1, γ > α,
0 < δ < 1.
The effects of various transformations are shown in the corresponding figures. The new
working simplex is shown in green.
• Reflect. Compute the reflection point xr = c + α(c − xh ) and fr = f (xr ). If
fl ≤ fr < fs , accept xr and terminate the iteration.
• Expand. If fr < fl , compute the expansion point xe = c + γ(xr − c) and fe = f (xe ).
If fe < fr , accept xe ; otherwise accept xr . In both cases terminate the iteration.
• Contract. If fr ≥ fs , compute the contraction point xc by using the better of the
two points xh and xr :
– Outside. If fs ≤ fr < fh , then xc = c + β (xr − c); fc is the f value at point xc .
If fc ≤ fr , accept xc and terminate the iteration. Otherwise, perform a shrink
transformation.
– Inside. If fr ≥ fh , then xc = c + β (xh − c); fc is the f value at point xc . If
fc < fh , accept xc and terminate the iteration. Otherwise, perform a shrink
transformation.
• Shrink. Compute n new vertices xi = xl + δ (xi − xl ), i ≠ l, together with their
function values. The shrink transformation was introduced to prevent the algorithm
from failing. A failed contraction can occur when a valley is curved and one point
of the simplex is much farther from the valley bottom than the others; contraction
may then cause the reflected point to move away from the valley bottom instead of
towards it. Further contractions are then useless. The proposed action contracts the
simplex towards the lowest point and will eventually bring all points into the valley.
• term_x is the domain convergence or termination test. It becomes true when the working
simplex S is sufficiently small in some sense (some or all vertices xi are close enough).
• term_f is the function-value convergence test. It becomes true when (some or all)
function values fi are close enough in some sense.
• fail is the no-convergence test. It becomes true if the number of iterations or function
evaluations exceeds some prescribed maximum allowed value.
The algorithm terminates as soon as at least one of these tests becomes true.
If the algorithm is expected to work for discontinuous functions f , then it must have
some form of a term_x test. This test is also useful for continuous functions when a
reasonably accurate minimizing point is required in addition to the minimal function value.
In such cases, a term_f test is only a safeguard for "flat" functions.
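As an illustration, the same reflection/expansion/contraction/shrink scheme with these termination tests is available in SciPy; a minimal sketch on a made-up objective (this is not a tNavigator interface):
import numpy as np
from scipy.optimize import minimize

def f(x):
    # made-up smooth objective with a single minimum at (1, -0.5)
    return (x[0] - 1.0)**2 + 10.0*(x[1] + 0.5)**2

x_in = np.array([0.0, 0.0])  # the initial simplex is built around x_in
res = minimize(f, x_in, method='Nelder-Mead',
               options={'xatol': 1e-6,    # term_x: domain convergence test
                        'fatol': 1e-6,    # term_f: function-value test
                        'maxiter': 200})  # fail: iteration limit
print(res.x, res.fun)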
3. Refreshing of the global and local best positions and other parameters of the algorithm.
If any of these conditions is true, the algorithm is terminated. Otherwise the process
continues from step 2.
The new velocity vector V̂ of a particle with position X and velocity V is calculated as:
V̂ = w ·V + r1 · nostalgia · (PBest − X) + r2 · sociality · (GBest − X) + r3 · neighborliness · (LBest − X), in case of standard behavior;
V̂ = w ·V + r1 · nostalgia · (PBest − X), in case of egoistic behavior;
V̂ = w ·V + r2 · sociality · (GBest − X), in case of highly social behavior,
where
PBest – vector of the search space describing the particle's best local position;
GBest – vector of the search space describing the best global position of the swarm;
LBest – vector of the search space describing the best position among the Nneighbor
neighbor particles;
w – inertia coefficient;
r1 , r2 , r3 – random coefficients.
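A minimal sketch of the standard-behavior update for one particle in Python (NumPy-based; the names are assumptions for illustration, not tNavigator code):
import numpy as np

def update_velocity(V, X, PBest, GBest, LBest, w,
                    nostalgia, sociality, neighborliness, rng):
    r1, r2, r3 = rng.random(3)  # random coefficients drawn anew at every update
    return (w*V + r1*nostalgia*(PBest - X)
            + r2*sociality*(GBest - X)
            + r3*neighborliness*(LBest - X))

rng = np.random.default_rng(0)
V_new = update_velocity(np.zeros(2), np.array([0.5, 0.5]), np.array([0.4, 0.6]),
                        np.array([0.2, 0.9]), np.array([0.3, 0.7]),
                        0.9, 1.5, 1.5, 0.5, rng)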
Moreover, a special set of swarm particles, called "explorers", is set apart. Their velocity
vectors are updated when they approach the best global position (see the rel_crit_dist
parameter below).
• Ns
The number of particles in the swarm. At the start of the algorithm, swarm particles
are randomly scattered through the search space.
Increasing Ns increases the probability of finding the global minimum, but the rate of
convergence to a local minimum slows down. The connection between the rate of
convergence and the population size is ambiguous.
• N_sim_calc
Parameter for parallel run. It determines the number of variants that will be calculated
simultaneously.
This parameter doesn't affect the algorithm's "sensitivity", but allows getting results
faster and calculating more variants in the same time.
• wstart , wfinish
Initial and final values of the inertia coefficient. It is recommended to keep
0 ≤ wfinish ≤ wstart ≤ 1.
This setting lets particles explore the search space carefully at initial iterations but
converge faster at final ones.
• nostalgia
Nostalgia of swarm particles.
This parameter controls a particle's attraction to its best local position.
Increasing it leads to a more careful exploration of the search space, but the rate of
convergence to a local minimum slows down.
• sociality
Sociality of swarm particles.
This parameter controls a particle's attraction to the best global position of the swarm.
Increasing it increases the convergence rate, but makes the exploration of the search
space less careful and may cause the algorithm to stop at a local minimum.
• damping_factor
Elasticity factor of collisions with boundaries. This parameter characterizes the particles'
behavior near the search space boundary.
The algorithm implements a method of particle reflection from the boundary of the
search space. It is applied when a particle tries to leave the search space: the particle's
elastic bump against the boundary and its reflection are emulated. After the bump the
particle's velocity decreases; the elasticity factor characterizes this decrease.
In other words, if the elasticity factor is 1, the bump is perfectly elastic and the
particles bounce off the wall with the same velocity as before. If the elasticity factor is
0, the particles change their velocities to 0 and stick to the boundary.
This parameter is important for exploring the boundary region. It is not recommended to
set it to the minimal or maximal value (0 or 1), because then particles will either stick
to the boundary or be unable to settle around it.
This parameter should be from the interval [0, 1].
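A one-dimensional sketch of this reflection rule (an illustrative simplification, not the tNavigator implementation):
def reflect(x, v, lo, hi, damping_factor):
    # mirror the particle back inside [lo, hi] and damp its velocity
    if x < lo:
        x, v = lo + (lo - x), -v*damping_factor
    elif x > hi:
        x, v = hi - (x - hi), -v*damping_factor
    return x, v

print(reflect(1.2, 0.5, 0.0, 1.0, 0.7))  # -> (0.8, -0.35)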
• Nneighbor
The number of a particle's neighbors. This parameter is used only in the FlexiPSO version.
It allows using not only the best local and global positions, but also the best neighbor
positions, which makes the exploration of the search space more careful. The recommended
value is 25% of the swarm size. In any case, the condition Nneighbor ≤ Ns must hold.
• neighborliness
Swarm particles' neighborliness factor. This parameter is used only in the FlexiPSO version.
It characterizes the attraction of particles to the best position of their neighbors; in
effect it is a kind of average between nostalgia and sociality.
• explorer_rate
Fraction of particles with the special behavior type "explorer". This parameter is used
only in the FlexiPSO version.
It sets the fraction of particles that take wider steps in the search space. Increasing
this parameter makes the exploration broader, but less detailed.
The parameter should belong to the interval [0, 1]. The recommended value is 0.5.
• egoism_rate
Fraction of egoism in particles' behavior. This parameter is used only in the FlexiPSO
version.
It sets the frequency of cases in which the special behavior type "egoism" is turned on.
The parameter should belong to the interval [0, 1]. The recommended value is 0.1.
• comm_rate
Fraction of collectivity in particles' behavior. This parameter is used only in the FlexiPSO
version.
It sets the frequency of cases in which the special behavior type "collectivity" is turned on.
The parameter should belong to the interval [0, 1]. The recommended value is 0.6.
Moreover, there is a connection between the parameters egoism_rate and comm_rate:
egoistic behavior and highly social behavior exclude each other, i.e. egoism_rate +
comm_rate ≤ 1.
• rel_crit_dist
Relative critical distance to which particles-"explorers" can approach the best global
swarm position. This parameter is used only in the FlexiPSO version. It varies in the
interval [0, 1].
Particles-"explorers" search for new best global swarm positions, but they should not
approach too close to the current best global position GBest . For each particle-"explorer"
position X and each variable i, the condition |X(i) − GBest(i)| > (i_max − i_min) ×
rel_crit_dist should hold, where i_max is the maximum of variable i and i_min is its
minimum. If a particle comes too close to the best global position, it bounces off. The
parameter rel_crit_dist formalizes the concept of "too close": a particle-"explorer" must
stay at least the distance (i_max − i_min) × rel_crit_dist from the current best global
position GBest in each coordinate.
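A sketch of this distance check for a single explorer particle (names are illustrative assumptions, not tNavigator code):
def too_close(X, GBest, v_min, v_max, rel_crit_dist):
    # True if the minimal-distance condition is violated in any coordinate
    return any(abs(x - g) <= (hi - lo)*rel_crit_dist
               for x, g, lo, hi in zip(X, GBest, v_min, v_max))

print(too_close([0.50, 0.30], [0.52, 0.90], [0.0, 0.0], [1.0, 1.0], 0.05))  # True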
2. Construction of the Pareto front. Initialization of the best global swarm position in the
Pareto front (external archive). The Pareto front can be visualized in the crossplot (see
section 6.7.1).
3. Update of the velocity vectors of particles. Formulas for the calculation of new velocities
for different PSO algorithm versions are presented in section 5.8.
5. Mutation. This operation is performed if the FlexiPSO algorithm is used (see section
5.8.3).
8. Update of the Pareto front. Particles are replaced arbitrarily when the archive is full.
10. Control of stopping criteria. The algorithm stopping criteria are the following:
If any of these conditions is fulfilled, the algorithm is terminated. Otherwise the process
continues from step 3.
• Ensemble size — the initial ensemble size has to be at least the number of variables + 1.
6. Analysis of results
This section describes the features of the assisted history matching (AHM) module available
via the graphical user interface (GUI). The window of an AHM project is shown in figure 56.
The window contains a horizontal panel: File, Queue, Results, Settings. The menus Queue
and Results are activated if you switch to the tabs Calculations and Results located below,
respectively.
The top panel buttons are located below.
There are three main tabs allowing you to switch between Project Info, Calculations and
Results.
A horizontal panel consists of:
1. menu File;
2. menu Queue;
3. menu Results:
• Check/uncheck all;
• Check new models. If this option is selected, then during calculation the results of
each newly calculated variant are automatically added to the Results tab (Graphs,
Results Table etc.);
• Hide error models;
• Keep Sorted. If this option is selected, then when a new calculated variant is added,
all variants are automatically resorted in accordance with the earlier defined sorting
(e.g., in descending order of oil rate mismatch);
• Group Checked. If this option is selected, checked models are grouped at the top
of the list of model variants;
• Export. Export table data to a file;
• Load pressure. Open the Load history into base model dialog;
• Settings.
4. menu Settings:
• General;
• New Job Options;
• New Job Options (Advanced);
• Common Email Settings. Allows configuring Email settings to get notifications
about the status of calculations;
• Current Window Email Settings. Allows configuring Email settings to get notifications
about the completeness and/or status of calculations;
• Power Savings;
• Appearance;
• Info Columns. Check the columns to be shown on the Calculations tab.
• Delete experiment's files, keeping the possibility to restore the experiment. All
experiment files will be deleted, but the experiment's entry in the project will remain;
the experiment configuration and the list of variants will still be available.
The window containing the list of the experiment's variants is located on the left. It shows
the variant's id, its status (calculated, calculating or queued), etc. By right-clicking on a
variant you can get information about the variant, create a new experiment starting from
this variant, or create a user variant from it (see figure 56).
6.2. Calculations
The Calculations tab contains full information about the status of calculations (see figure 57).
It shows the full Path to the experiment's variant, the Model Type, the Cores Count used
for calculations, and the calculation Status. If a calculation is done, the runtime is shown.
If a calculation is still running, the time remaining until its completion is shown. Commands
to work with calculations are located on the right:
• Kill Jobs.
• View Graphs.
• View Log.
• (Un)Select All.
• Show Finished.
• Log.
6.3. Results
The Results tab allows visualizing the obtained results and analyzing them in order to
evaluate the quality of history matching. The main tabs of the Results tab are:
• Results Table;
• Mismatch calculation;
• Graphs;
• Crossplot;
• Histogram;
• Stacked Plot;
• Analysis;
• MDS (Multi-dimensional scaling).
The common interface elements of the Results tab are described below.
• Open base model. Opens the base model of the history matching project for modification.
• Load history for BHP, THP or WBP from a file into the base model;
• Graph Calculator. The graph calculator allows performing various operations on graphs
etc. by means of the Python programming language.
• Create NPV Script. Opens the settings window for NPV calculation.
• Check/uncheck all;
• Check new models. If this option is selected, then during calculation the results of each
newly calculated variant are automatically added to the Results tab (Graphs, Results Table
etc.);
• Group Checked. If this option is selected, checked models are grouped at the top
of the list of model variants;
• Keep Sorted. If this option is selected, then when a new calculated variant is added,
all variants are automatically resorted in accordance with the earlier defined sorting
(e.g., in descending order of oil rate mismatch);
• Show/hide log.
• Settings.
where n is the time step index (running from the initial time step to the last step N ), ln
is the length of time step n, and Rate(H) and Rate(C) are the historical (H) and calculated
(C) values of rates, respectively, at the nth time step. The difference between historical and
calculated totals (oil, water, liquid etc.) is calculated as:
Diff = ∑_{n=0}^{N} ln · |Total(H) − Total(C)|
where Total(H) and Total(C) are the historical (H) and calculated (C) total values at the
nth time step.
!
To calculate mismatches or the differences between total values for a specific time
period (starting from an intermediate time step k) you can use an objective function.
!
For each model variant, the variables will be written in the corresponding variant file
in the keyword PREDEFINES (see 12.1.27).
6.5. Graphs
The Graphs tab visualizes the obtained results.
An example of the Graphs tab is shown in figure 60. In the top window on the right, an
object (wells, fields etc.) is selected, for which graphs will be created. In the bottom window,
a parameter (oil rate, water rate etc.) to be visualized is selected. The Add new graph
button below the list of parameters allows the creation of custom graphs (see 6.6. Graph
calculator).
The following buttons are located on the right panel.
For usage examples see the training tutorial COMMON1.4. How to use
Graph Calculator Python.
The graph calculator is also available in the simulator and Model Designer GUI, where it
allows working with individual connections and other objects.
The text editor of the graph calculator window allows entering arbitrary Python code. The
code is executed upon pressing Calculate. Importing standard libraries using import <name>
is possible (see also Importing libraries). Python console output is directed to the window
below and can be used for debugging purposes.
An arbitrary number of user scripts can be created and managed using the Add / Delete
buttons. They are saved as separate *.py files at hmf/GraphCalculator/ when the project
is saved.
For the resulting graphs to appear in the user interface, they have to be passed through
the export() function (see below). A script may contain arbitrarily many export statements.
Once a script with proper export statements has been executed, the resulting graph appears in
the list of available graphs (see figure 63) and can be selected for display individually or in
combination with other graphs. Its name and dimension are specified in the export statement.
Whether it will appear for Field, Group, Well, FIP, or Connection object is determined by its
type, which in turn is determined by its declaration (see below graph function under Global
functions) or by the type of the graph(s) it was derived from. Inconsistency in these types may
lead to an error in the script.
If a script does not export any graphs, its execution triggers a warning that suggests using
the Auto Export Graph button. Upon pressing this button, an export statement is
automatically added after the last line of code; the variable used in the last assignment is
passed to export() as an argument. Sometimes the calculator may be used just for cursory
calculations, displaying the result via the console output window without exporting any
graphs. In this case the warning may be ignored.
!
Note that user graphs from other scripts, including those defined in the same
template, are not accessible from the code by their names. They have to be
imported via get_global_graph() (see below). Also, you may produce multiple
user graphs from a single script.
A custom graph may be used in the objective function (see Objective function based on user
graphs for a field).
Graphs may also be combined with scalar values or with graphs of lower dimension. Besides,
there are special functions for numerical differentiation, integration, averaging over sets of
objects, etc.
The lower right section contains the list of mnemonics (same as in the keyword SUMMARY,
see 12.18.1). Their meaning is explained in the pop-up messages. Mnemonics are grouped by
type (field, group, well, etc.); types are selected in the lower left field. Mnemonics can be
used directly in the code and are interpreted as graph objects containing values for all time
steps and for all objects of the corresponding type (wells, groups, etc.).
!
Note that the mnemonics only work on the time steps for which the graphs have been
recorded. Graphs which were not recorded on a particular step are interpolated with the
last available value. Result recording options are described in section 9.1 of the
tNavigator User Manual.
If the model contains any variables created by the keyword UDQ (see 12.19.166), those
can be used by putting their names in the code. They are also interpreted as graph objects.
For the purpose of retrieving the subsets or individual values of data, a graph object works
as a multidimensional array indexed by the objects of the following types (depending on its
own type):
For example, wopr[m1,w1,t1] returns a single value of oil rate for the well w1 in the
model m1 at timestep t1. The indexing elements may be entered in arbitrary order (so that
wopr[t1,w1,m1] is equivalent to the example above). An expression where only a part of the
indexes is specified returns the corresponding subset of the graph. For example, wopr[m1,
w1] returns a graph containing oil rates for the well w1 in the model m1 at all timesteps.
The code may include predefined objects (field, wells, groups, time steps, in simulator GUI
version also connections and FIP regions). For treating these objects, the following properties
and functions are defined and accessible on the right panel:
!
Code fragments presented here and below are merely illustrations
of syntax. They are not self-sufficient and not intended to work if
copied-and-pasted to the calculator "as is". For the ready-to-use
examples see Usage examples.
◦ .is_producer() (no arguments) returns a time-dependent graph that casts to boolean
True when the well is a producer, and to False otherwise.
Usage example: if w1.is_producer(): <do something>
◦ .is_opened() (no arguments) returns a time-dependent graph that casts to boolean
True when the well is open, and to False otherwise.
Usage example: if w1.is_opened(): <do something>
◦ .is_stopped() (no arguments) returns a time-dependent graph that casts to boolean
True when the well is stopped, and to False otherwise.
Usage example: if w1.is_stopped(): <do something>
◦ .is_shut() (no arguments) returns a time-dependent graph that casts to boolean True
when the well is shut, and to False otherwise.
Usage example: if w1.is_shut(): <do something>
◦ .name is a property containing the model name (relevant when the results of multiple
model calculations are loaded).
Usage example: s1 = m1.name
◦ .name is a property containing the calendar representation of this time step object
according to the template (selected from the dropdown list in the Date format
field below).
Usage example: s1 = t1.name
◦ .to_datetime() (no arguments) returns the Python datetime object corresponding to
this time step. The object has standard Python properties and methods. Usage
example:
dt1 = t1.to_datetime()
if dt1.year > 2014: <do something>
• Add graph function
A Graph object represents a graph which may be either one of the standard graphs or
derived via calculations. The ultimate result of script execution is also an object of this
type. A graph has the following accessible functions:
◦ .fix(model=<model>,object=<object>,date=<timestep>) returns the value of the
specified graph for the given model, object, and timestep, which all must be spec-
ified as Python objects of the corresponding type, and not by name. Type of the
object (well, group, in simulator GUI version also connection or a FIP region)
must correspond to the type of the graph. All arguments are optional. If some of
them are missing, the function returns a data structure containing the values of the
graph for all possible values of the missing argument(s).
Usage example:
graph2 = graph1.fix(object=get_well_by_name('PROD1'))
takes a graph for all wells and returns a graph object for only one well, namely
PROD1.
◦ max, min, avg, sum(models=<models>,objects=<objects>,dates=<timesteps>)
retrieve a subset of values for the given models, objects, and timesteps (all arguments
may include either arrays or single values), and then return the maximum, minimum,
average, or sum of the resulting array. Arguments must be specified as Python objects
of the corresponding type, and not by name. The type of the objects must correspond
to the type of the graph. All arguments are optional. If some of them are missing, the
functions return an object containing the values of the maximum, minimum, average,
or sum over all specified argument(s) for all possible values of the missing argument(s).
Usage examples:
graph2 = graph1.max(objects=get_wells_by_mask('WELL3*'))
returns a graph object containing the maximum among the values of the original
graph for the wells with names WELL3*, i.e. WELL31, WELL32, WELL33, etc.;
graph2 = graph1.avg(dates=get_all_timesteps()[15:25])
returns a graph object containing the average of the values of the original graph from
the 15th to the 24th time step.
◦ .aggregate_by_time_interval(interval='<interval>',type='<type>') takes the array
of values of the original graph over the specified interval (possible values: month,
year) and derives a new graph where all steps within the interval have the same value
calculated according to the specified type.
◦ diff(<series>) performs numeric differentiation of the time series, that is, returns the
series of differences between consecutive values.
Usage example: graph2 = diff(graph1)
In this example we calculate oil totals per time step from cumulative oil totals:
465, 1165, 2188, 3418, 4968 . . . → 465, 700, 1023, 1230, 1550 . . .
◦ diff_t(<series>) is the same as diff, only the results are divided by the time step
length in days. Usage example: graph2 = diff_t(graph1)
In this example we calculate oil rates from oil totals. Let the time steps represent
months with durations of 31, 28, 31, 30, 31... days. Then:
465, 700, 1023, 1230, 1550 . . . → 15, 25, 33, 41, 50 . . .
◦ cum_sum(<series>) performs numeric integration of the time series, that is, returns
the series of cumulative sums.
Usage example: graph3 = cum_sum(graph1)
In this example we calculate cumulative oil totals from oil totals per time step:
465, 700, 1023, 1230, 1550 . . . → 465, 1165, 2188, 3418, 4968 . . .
◦ get_timestep_from_datetime(<datetime>, mode='<mode>') returns the time step
object corresponding to the given date. The mode defines how the step is searched:
– exact_match: the time step must match the specified date exactly;
– nearest_before: searches for the nearest time step before the specified date;
– nearest_after: searches for the nearest time step after the specified date.
Default: exact_match.
If the step cannot be found within the limitations of the mode, or if the specified
date falls outside the time range of the model, an error is returned.
Usage example:
t1 = get_timestep_from_datetime(date(2012,7,1), mode='nearest_after')
!
Most manipulations with the Python datetime object require loading the corresponding
external library beforehand (see Importing libraries). This is done as follows:
from datetime import datetime
◦ create_table_vs_time(<array>) returns a graph containing a piecewise linear
approximation of the given time series. The series must be represented by an array
of two-element tuples (date, value). Here the date must be a Python object of the
type date or datetime.
Usage example:
oil_price_list = []
oil_price_list.append((date(2011,1,1),107.5))
oil_price_list.append((date(2012,1,1),109.5))
oil_price_list.append((date(2013,1,1),105.9))
oil_price_list.append((date(2014,1,1), 96.3))
oil_price_list.append((date(2015,1,1), 49.5))
oil_price_list.append((date(2016,1,1), 40.7))
oil_price = create_table_vs_time(oil_price_list)
Here we build a graph of oil prices. For maximum clarity, the array is prepared by
adding elements one by one.
◦ get_wells_by_mask(<mask>) returns an array containing the wells that match the
given name mask. The mask may contain wildcards: ? means any character, *
means any number of characters (including zero).
Usage example: for w in get_wells_by_mask('prod1*'): <do something>
◦ get_wells_from_filter(<filter name>) returns an array containing the wells that are
included in the given well filter. The filter must be created beforehand using
Well Filter (see the tNavigator User Guide).
Usage example: for w in get_wells_from_filter('first'): <do something>
◦ shift_t(<original series>,<shift>,<default value>) returns the original graph
shifted by the specified number of time steps. The empty positions are padded
with the specified default value.
Usage example: graph2 = shift_t(graph1,3,10)
In this example we shift the historic records of oil rate which were mistakenly
assigned to the wrong time. The series is shifted 3 steps to the right, and the starting
positions are filled with the first known value of oil rate (10):
graph1: 10, 12, 19, 24, 30, 33, 31, 27, 25 . . . → shift_t(graph1,3,10): 10, 10, 10, 10, 12, 19, 24, 30, 33 . . .
◦ get_project_folder() (no arguments) returns the full path to the folder containing
the current model, which you might need in order to write something to a file.
Usage example: path = get_project_folder()
◦ get_project_name() (no arguments) returns the file name of the current model with-
out an extension.
Usage example: fn = get_project_name()
◦ export(<expression>,name='<name>',units='<units>') exports the given expression
to a user graph, specifying its name and (optionally) units of measurement.
The expression should evaluate to a graph object, otherwise an error will occur.
Units should be specified by the mnemonic name, which can be selected from a
dropdown list on the right.
Usage example: export(w1, name='graph1')
◦ get_global_graph(name='<name>') imports and returns the user graph with given
name, which could have been created in another script or otherwise.
Usage example: gr1 = get_global_graph(name='graph1')
◦ graph(type='<type>',default_value=<value>) initializes a graph of the given type
(field, well, group, in simulator GUI version also conn for connections, or fip for
FIP regions) and fills it with the given default values.
Usage example: tmp = graph(type='field', default_value=1)
To obtain the path to the modules used by an already installed Python instance, open
the interactive Python interpreter and run the following commands:
import sys
print(';'.join(sys.path))
If the external Python installation is removed, tNavigator automatically falls back to using
its internal Python.
Example
x = wopr * (time >= 215) * (time <= 550)
w1 = cum_sum_t(x)
export(w1, name='PeriodProd', units='liquid_surface_volume')
Example 2
Suppose we want to see what portion of the well's oil rate comes from the layers with
70 ≤ k < 100.
!
This is possible in the simulator or Model Designer GUI, where the graph
calculator has access to the data on individual connections, but not in the
AHM GUI.
The script proceeds as follows:
1. Initialize a temporary data structure (tmp) of the appropriate type (graph in the Well
context) and fill it with 0;
2. Loop over all connections and, for each connection lying in the layers 70 ≤ k < 100,
add its oil rate to the temporary array for the corresponding well;
3. Export the temporary array divided by the array of total oil rate values for the wells (the
division of graphs is applied elementwise, that is, a sum over connections of any well
is divided by the rate of the same well).
Example
tmp = graph(type='well', default_value=0)
for c in get_all_connections():
    if c.k in range(70,100):
        tmp[c.well] += copr[c]
export(tmp/wopr, name='wopr_layer2')
! Pay attention to the spaces at the beginning of the lines. They are essential
to Python syntax, and are easily lost during copying-and-pasting.
Example 3
Suppose we want to calculate the average oil rate over a certain subset of wells (those
with names starting with 'WELL3') and compare it with the historic data, which are stored in
a file elsewhere. The deviation will then be used as an objective function for matching. The
script proceeds as follows:
1. Import the standard datetime library, which allows handling dates with more agility.
2. Call the avg function and feed it the array with the required subset of wells, so as to
obtain the desired average (obs).
3. Locate the file input.txt in the model folder and open it for reading.
4. Transform the array of file lines into an array of tuples (string, value).
5. Convert the date strings into datetime objects, obtaining an array of tuples
(datetime, value).
6. Build the interpolation graph (hist) from the obtained array.
7. Export the squared deviation as the user graph fuobj.
Example
from datetime import datetime
obs = wopr.avg (objects = get_wells_by_mask ('WELL3*'))
inpf = open(get_project_folder()+'/input.txt', 'r')
raw = [(line.split()[0],float(line.split()[1])) for line in inpf]
arr = [(datetime.strptime(x[0], '%d.%m.%Y'),x[1]) for x in raw]
hist = create_table_vs_time(arr)
export((obs - hist)**2, name='fuobj')
Example 4
Suppose we have graphs of historic bottom hole pressure measured only at some points,
with the rest filled with 0. We want to interpolate those over the entire time range. The
script proceeds as follows:
1. Initialize a temporary data structure (tmp) of the appropriate type (graph in the Well
context) and fill it with 0;
2. For each model and each well, collect the (datetime, value) pairs for the time steps
where the recorded pressure is positive;
3. If at least two points were collected, build the interpolation graph and store it in the
temporary structure; finally, export the result.
Example
tmp = graph(type='well', default_value=0)
for m in get_all_models():
    for w in get_all_wells():
        current = wbhph[m,w]
        observed = []
        for t in get_all_timesteps():
            if current[t] > 0:
                observed.append((t.to_datetime(), current[t]))
        if len(observed) >= 2:
            tmp[m,w] = create_table_vs_time(observed)
export(tmp, name='interpolated_wbhph')
6.7. Crossplot
A crossplot visualizes the dependence between two selected parameters (see figure 65). In
the top window, an object (e.g., Group, Well, Mismatches etc.) can be selected for the Y
axis; in the bottom window, a parameter corresponding to the selected object can be defined.
A similar menu is available for selecting the parameter along the X axis.
Figure 65 shows a crossplot between a custom objective function and the variant number
of the optimization algorithm (here, the Differential Evolution algorithm). Each variant of the
optimization algorithm corresponds to its value of the objective function. It can be seen that
an increase in the number of variants leads to a decrease in the value of the objective
function (i.e., the objective function tends to its minimum). Bringing the cursor to a crossplot
point, the following information appears in the status bar (at the bottom of the window): the
experiment's number, the experiment's variant, and the value of the objective function.
• Create a crossplot for the selected objective functions. In the example shown in
figure 67, the crossplot is constructed for the earlier configured objective functions
oil_rate_of and water_rate_of (see section 4.1). The objective function oil_rate_of is
selected along the Y axis, water_rate_of along X. Here oil_rate_of is an objective
function of history matching quadratic type based on the oil rate (parameter) and the
group "FIELD" (object), and water_rate_of is an objective function of history matching
quadratic type based on the water rate (parameter) and the group "FIELD" (object);
• Right-click on the selected model variants and choose from the drop-down menu Create
Pareto Front From Selected Variants;
• In the Create Pareto front dialog it is required to select objective functions (at least
2). The list of available objective functions is shown on the left. To add a new function
press the button Add entry (see figure 66);
Generally speaking, the Pareto front is a group of model variants, therefore all features
available for groups can be applied to Pareto fronts (see 6.14). It is allowed to create
several Pareto fronts. To switch between them, press the button Groupsets manager and
select the required front.
6.8. Histogram
A histogram allows evaluating how many experiment variants have the selected parameter's
value in a specific range. The parameter along the X axis is selected using the menu in the
bottom part of the tab that defines the histogram settings. The interval between the maximum
value Xmax and the minimum value Xmin of the parameter is subdivided into a defined
number of sub-intervals. The number of sub-intervals can be adjusted in the field Bins. For
each sub-interval [Xi , Xi+1 ], the number of variants having the value of the parameter X in
this sub-interval is shown. Move the time slider to see the histogram at the required time
moment. The histogram is shown for the time period marked by a red line on the time line.
You can change the histogram's orientation from horizontal to vertical or vice versa. The
parameter's value can be visualized as a percentage. Bring the cursor to a histogram bin to
see the corresponding range of the parameter and the number of variants in the status bar.
For example, in figure 68, 5 variants have total oil in the range [328675, 331700] sm3.
Variants corresponding to this range are highlighted in blue in the list of variants located on
the left.
• Components;
• Terms.
∑_obj ∑_p g · (value(H) − value(C))²
• Absolute mode.
This mode allows identifying high rate wells in order to choose correct weights for the
objective function. The following formula is used:
∑_obj ∑_p g · (value(C))²
Figure 69 shows the stacked plot resolved into objects. A custom objective function is
defined as the Objective Function; it is based on wells (objects) and oil and water rates
(parameters). The objective function is calculated for the time period marked by the red
color. By right-clicking on a histogram bar, the value of the objective function and the
variant's number are shown in the status bar.
Using a stacked plot you can, for example, detect wells with history matching problems
and concentrate on them further. In figure 69 it can be seen that the wells "PRO-20" and
"PRO-4" make the largest contribution to the objective function, i.e. both wells have history
matching problems. Probably the selected variables or varying ranges are not suitable for
history matching; in this case, you can try to use other variables and/or varying ranges.
The stacked plot with Absolute mode shown in figure 70 allows identifying high rate wells:
"PRO-1", "PRO-4", "PRO-5" and "PRO-11". For the calculation of the objective function
these wells should have larger weights than low rate wells.
Figure 71 shows an example of a stacked plot resolved into components – Oil and Water
rates. The plot shows the contribution of water and oil rate mismatches to the objective
function. Right-click on a bar to see the value of the selected component and the variant's
number in the status bar.
6.10. Analysis
To analyze the obtained results, the following tools can be used:
• Pareto chart;
• Tornado Diagram;
• Quantiles;
• Creating a Filter for Variables;
• Pearson correlation;
• Spearman correlation.
Pearson correlation
The Pearson correlation creates associations between the model's variables and the model's
parameters (oil rate, water rate, gas rate, mismatches etc.) and is computed using the
following formula:
r_XY = ∑(X − X̄)(Y − Ȳ) / √( ∑(X − X̄)² · ∑(Y − Ȳ)² )
The correlation allows evaluating which variables affect the model's parameters and the
objective function more strongly. Set the time slider at the required time step to see the
correlation at this time step. Generally speaking, the longer a bar is, the closer the correlation
between the parameters is to 1 (in absolute terms) and the closer the relation between these
parameters is to a linear dependence. A bar can be:
Detected effective variables can be used further in other experiments; noneffective variables
can be excluded from consideration.
Figure 72 shows the correlation between the model's variables M_PERM_FIPNUM_1 etc.
and the model's parameters. To sort a column, press the parameter's name at the top of the
column. Variables strongly affecting the objective function will be located at the top of the
column. It can be seen that variations of the variables M_PERM_FIPNUM_2 and
M_PERM_FIPNUM_3 result in a significant change of oil total, while a variation of the
variable M_PERM_FIPNUM_1 weakly affects the parameter Oil Total.
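For illustration, the same coefficient can be computed with NumPy (the data below are made up):
import numpy as np
x = np.array([0.8, 1.0, 1.2, 1.5])          # variable values across variants
y = np.array([310.0, 325.0, 341.0, 356.0])  # e.g. Oil Total per variant
r = np.corrcoef(x, y)[0, 1]                 # Pearson correlation coefficient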
Figure 72. History Matching. Pareto chart based on the Pearson correlation
Spearman correlation
The Spearman correlation specifies the degree of dependency of two arbitrary variables X
and Y based on the analysis of the data (X1 ,Y1 ), . . . , (Xn ,Yn ). A rank is assigned to each
value of X and Y . The ranks of X are sequentially arranged: i = 1, 2, . . . , n. The rank of
Y , Yi , is the rank of the pair (X,Y ) for which the rank of X is i. Then the Spearman
correlation coefficient is calculated as:
ρ = 1 − 6 ∑ di² / (n(n² − 1))
where di is the difference between the ranks of Xi and Yi , i = 1, 2, . . . , n. The correlation
coefficient varies from -1 (corresponding to a decreasing monotonic trend between X and Y )
to +1 (corresponding to an increasing monotonic trend between X and Y ). A coefficient
equal to zero means that the variables X and Y are independent.
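For illustration, the rank-based coefficient can be computed with SciPy (made-up data):
from scipy.stats import spearmanr
rho, _ = spearmanr([0.8, 1.0, 1.2, 1.5], [310.0, 325.0, 356.0, 341.0])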
The Pearson correlation shows the degree of linearity of the dependency between variables:
if the correlation coefficient is equal to 1 (in absolute value), then one variable linearly
depends on the other. The Spearman correlation, on the other hand, shows the degree of
monotonicity of the dependency: if the correlation coefficient is 1 (in absolute value), then
the dependence is monotonic, but not necessarily linear.
Set the time slider at the required time step to see the correlation at this time step. The
longer a bar is, the closer the correlation coefficient between the variables is to 1 (in absolute
value) and the closer the dependence between the variables is to a monotonic one.
A bar can have one of the following colors:
• Green – positive values of the correlation coefficient. The dependence between the
variable and the model parameter is monotonically increasing (the coefficient tends to +1);
• Blue – negative values of the correlation coefficient. The dependence between the
variable and the model parameter is monotonically decreasing (the coefficient tends to -1).
Figure 73. History Matching. Pareto chart based on the Spearman correlation
To sort a column, press the parameter's name at the top of the column. It can be seen that
Avg. Pressure does not depend on the variable M_PERM_FIPNUM_4 (the correlation
coefficient is around 0), while Watercut depends monotonically on M_PERM_FIPNUM_4
(the correlation coefficient is around 1).
The longer the bar, the stronger the correlation between variations of the variables and the
variation of the objective function.
The color of the bar can be:
A Tornado Diagram calculated for Oil Total Difference is created below as an example,
where:
2. If you need to analyze data for the whole simulation period, the Tornado Diagram should
be visualized at the zero time step (move the time slider to the leftmost position).
3. Then two experiments are taken, for which the variable value is maximum and minimum.
4. For these experiments we calculate whether the Oil Total Difference increases or
decreases. The percentage is calculated relative to the experiment 0000.
5. Variable decreases – blue color, variable increases – green color. Total Difference
decreases – left direction, Total Difference increases – right direction.
6. If the histogram bar has the same direction for both increasing and decreasing the
variable, we have the same tendency: for example, both increasing and decreasing the
variable move us away from the historical data. In this case we may need to change the
variable ranges or choose a new variable for the AHM process.
6.10.3. Quantiles
Quantiles can be calculated for model variants generated using the Latin Hypercube algorithm
and for forecast models. They are available in the tab Results. Analysis.
The range of uncertainty of the obtained parameters (e.g., oil total, water total etc.) can be
represented by a probability distribution. In this case, low, best and high estimates shall be
provided such that:
• P90 is the low estimate, i.e. the values of the selected parameter will exceed the low
estimate with 90% probability;
• P50 is the best estimate, i.e. the values of the selected parameter will exceed the best
estimate with 50% probability;
• P10 is the high estimate, i.e. the values of the selected parameter will exceed the high
estimate with 10% probability.
Quantiles are calculated for each parameter. For the set of parameter values calculated from
the variants of the experiment, the parameter values corresponding to the low P90, best P50
and high P10 estimates are specified. Quantile values are calculated at each time step: in
order to see them, move the time slider to another time step.
The same quantiles calculated for different parameters may correspond to different model
variants. For example, the quantile P10 for Oil Total may correspond to the third variant of
the model, while the quantile P10 for Water Total corresponds to the first variant. In order
to go to the model variant corresponding to the selected quantile, right-click on the quantile
value and select Go to the model of this quantile. The corresponding model will be
highlighted in the variants tree.
Quantiles P90, P50 and P10 corresponding to model variants are visualized on the tab Cdf
as solid diamonds.
Quantile calculation
Quantiles are calculated over the successfully calculated model variants of one experiment.
It is supposed that these variants are equally probable. Quantiles are calculated as follows:
In a set of N parameter values Vi (i = 1, ..., N ) sorted in ascending order, the number i of
the value giving the quantile α (i.e., Pα = Vi ) equals ⌊(1 − α)(N + 1)⌋ (the brackets ⌊·⌋
denote rounding down to an integer) for α ∈ (0, N/(N+1)]. For α = 0 the number i = N ;
for α > N/(N+1) the number i = 1.
As an example, suppose we have N model variants obtained via Latin Hypercube and we
want to calculate the α-quantile of oil rate, where α varies from 0 to 1. First, the variants
are sorted by the rate value, then the number of the specific variant is calculated using the
formula ⌊(1 − α)(N + 1)⌋. Thus we obtain the number of the variant i (from 1 to N ) whose
value corresponds to the α-quantile for this parameter (i.e., Pα = Vi ) for this set of variants.
In the GUI, α-quantiles are shown in percent, i.e. α × 100%.
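A minimal sketch of this indexing rule in Python (alpha given as a fraction; illustrative, not tNavigator code):
import math

def quantile_index(alpha, N):
    # number i of the value V_i (ascending order) giving P_alpha = V_i
    if alpha == 0:
        return N
    if alpha > N/(N + 1):
        return 1
    return math.floor((1 - alpha)*(N + 1))

print(quantile_index(0.9, 10))  # P90 -> i = floor(0.1 * 11) = 1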
Add user Quantile.
There are three default quantiles: P10, P50 and P90. Press Add Quantile and enter the
value to calculate any user quantile. Enter the value in percent (from 0 to 100). For example,
to calculate the quantile P75, enter the α value 75.
6.11. 2D
To visualize the data structure on the plane in the tab 2D the following methods of data
analysis are available:
Thus, the primary aim of the Mds method is to find the set of coordinates of the projections
P for which the function F is minimized:
F = ( ∑_{i<j} (d_ij − d̂_ij)² / ∑_{i<j} d_ij² )^{1/2}
where d_ij are the distances between pairs of points in the N-dimensional space and d̂_ij
are the distances between their projections on the plane.
In other words, the Mds method projects each object of the N-dimensional space onto a
plane in such a way that the distances between pairs of points in the N-dimensional space
are as close as possible to the distances between their projections on the plane.
An example of the projection of a point set (a set of vectors of variables) onto a plane using
the Mds method is shown in figure 77. The captions of the X and Y axes are the basis
vectors that the 2D space is based on. The vectors are presented as linear combinations of
variables; the coefficients are normalized. Each coefficient shows the importance of the
corresponding variable in the vector. Not the full linear combinations are shown, but only
the 3 variables with the largest (in absolute value) coefficients. The coordinates of each
model variant are shown in the table on the right.
The transformation is selected such that the criterion reflecting the amount of information
preserved by this transformation is maximized on the data set M.
The criterion G evaluating the amount of preserved information can be written as:
G = (D_m̃1 + ... + D_m̃N′) / (D_m1 + ... + D_mN)
where D = (1/N) ∑_{i=1}^{N} (mi − X̄)² is the dispersion and X̄ is the average over the
data set, calculated as:
X̄ = (1/N) ∑_{i=1}^{N} mi
According to this criterion, the amount of preserved information is equal to the part of the
dispersion of the initial values m1 , ..., mN "explained" by the new values m̃1 , ..., m̃N′ .
The data set is projected onto the space of lower dimension R^N′ specified by orthogonal
principal components. The first principal component is a normalized linear-centered
combination of the initial data values which has the largest dispersion over the data set.
The j-th principal component ( j = 2, ..., N′ ) is a normalized linear-centered combination
of the initial data values which does not correlate with the j − 1 previous principal
components and has the largest dispersion over the data set.
When searching for the principal components, the space R^N′ closest to the initial data set
is selected, which provides the best preservation of information during the projection.
In other words, the PCA procedure projects each object of the N-dimensional space onto
the lower dimensional space in such a way that the dispersion is preserved.
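For illustration, such a projection onto 2 principal components can be obtained with scikit-learn (the data matrix below is made up; rows are variants, columns are variables):
import numpy as np
from sklearn.decomposition import PCA

M = np.random.default_rng(0).random((20, 5))  # variants x variables
P = PCA(n_components=2).fit_transform(M)      # 2D coordinates for plotting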
An example of the projection of a point set (a set of vectors of variables) onto a plane
built on 2 principal components is shown in figure 79. The first principal component is
shown on the X axis, the second on the Y axis. The vectors are presented as linear
combinations of variables; the coefficients are normalized. Each coefficient shows the
importance of the corresponding variable in the vector. Not the full linear combinations are
shown, but only the 3 variables with the largest (in absolute value) coefficients. The
coordinates of each model variant are shown in the table on the right.
6.12. Cdf
The cumulative distribution function (cdf) for the selected parameter (total oil, total water
etc.) and model variants is visualized on the Cdf tab. The horizontal axis of the graph holds
the values of the selected parameter, and the value of the cdf varies from 0 to 1 along the
vertical axis. Each model variant corresponds to a point (X,Y ) of the cdf graph, indicating
that the probability that the selected parameter value is greater than or equal to X is Y .
Figure 80 shows that the probability that the oil total is greater than X = 294.86 th.sm3 is
equal to Y = 0.906.
The cdf is calculated under the assumption that the (X,Y ) points (model variants) are
equally probable, i.e. that the points of the cdf graph are located uniformly along the
vertical axis.
In order to visualize quantiles P10, P50 and P90 in the cdf graph tick the option Show
Quantiles.
!
Quantiles P10, P50 and P90, shown in the graph of cdf as empty diamonds (see figure 80),
may not coincide with points of the cdf graph corresponding to model variants. In such a
case the model variant corresponding to the quantile is located to the left (solid diamond),
while the value of the parameter for the quantile matches the value calculated in the
quantiles table on the Analysis tab.
It can be seen in figure 80 that the quantiles P10 and P90 do not coincide with model
variants. For P90 the oil total is 295.59 th.sm3, but for the model variant (to the left)
corresponding to P90, the oil total is 294.86 th.sm3 (this value matches the one calculated
in the quantiles table on the Analysis tab).
z1 = x1 , z2 = x2 , . . . , zn = xn ,
zn+1 = x1 · x1 , zn+2 = x1 · x2 , . . . , zm = xn · xn , (6.1)
m = (n² + n)/2
The polynomial coefficients pi can be calculated using the least squares method.
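As an illustration, a minimal sketch of such a fit with NumPy least squares (the data are made up; this is not the tNavigator implementation):
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((10, 2))                      # variants x variables (n = 2)
y = 3*X[:, 0] - 2*X[:, 1] + X[:, 0]*X[:, 1]  # made-up response
# design matrix of terms z: [1, x1, x2, x1*x1, x1*x2, x2*x2]
Z = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 0]*X[:, 1], X[:, 1]**2])
p, *_ = np.linalg.lstsq(Z, y, rcond=None)    # least-squares coefficients p_i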
• Initial Variants Group Set Name. Specify the name of the group that includes all
selected model variants;
• Create entry. To create an entry, select the following specifications in the dialog on the
left:
In order to add the created entry to the table on the right, press the button Add entry. To
delete an entry from the table, select the entry and press the button Delete entry;
• Variable name. Tick/untick a variable that will/will not be used to create a Proxy model;
• Proxy Model Yields Only Positive Values. If, using the obtained Proxy model, the
selected parameter value becomes negative, its negative value will be replaced by zero;
• Neural Network Proxy. If this option is ticked, then the neural network will be used to
create an approximation of the selected parameter based on the selected model variants
(see section Implementation of artificial neural network). The following parameters of
the Neural Network are available:
• Quadratic Proxy. If this option is ticked, then a quadratic approximation of the selected
parameter based on the selected model variants is created. The obtained polynomial
formula will be shown in the dialog (see figure 82). The following parameters are
available:
– Use only linear terms in Proxy model. When constructing the polynomial, the
quadratic and cross terms are skipped;
– Significance Threshold. All coefficients of the Pearson correlation lower than the
threshold value will be ignored.
• Right-click on the selected variants and select from the menu that appears Create Proxy
Model From Selected Variants;
• In the dialog Create Proxy model, select an object (at the top left of the window):
Groups, Wells, Field, FIP etc. and the corresponding parameter at the bottom left of
the window;
• Specify the Significance Threshold. All Pearson correlation coefficients below this
value will be ignored when constructing the polynomial;
The Proxy model created based on all variants is shown in figure 82. At the top left of the
window, the name of the Proxy model is shown, together with the object, parameter, and
time step for which this model will be created. The formula of the quadratic polynomial
approximating the given function (in this example, oil rate) is shown.
The real (calculated) results of oil rate are plotted along the abscissa and the approximated
values of oil rate along the ordinate, i.e. the oil rate values calculated using the Proxy model
formula with the variables corresponding to the variants of the model. The gray line shows
the graph y = x. It is assumed that variants grouped along the gray line (e.g., variant 5) are
well approximated by the created polynomial. Variants located far from this line (e.g.,
variants 1 and 8) are poorly approximated by the polynomial.
The quality of matching between the values provided by the Proxy model and the calculated
results is evaluated by the R2 coefficient (see section Table of coefficients R2). The closer
the R2 coefficient is to 1, the better the Proxy model approximates the calculated results.
In this example, the obtained R2 is equal to 0.968.
A successfully trained network can generalize and aggregate data: it can provide correct
results based on data that are absent from the training set, incomplete, or partially degraded.
To create a Proxy model using the neural network, tick Neural Network Proxy. The
scheme of the artificial neural network is shown in figure 83. The input data for the neural
network are the model variables x1 , x2 , ..., xn (n is the number of variables) for each
model variant. Variables can be selected in the dialog Create proxy model (see figure 81).
The number of neurons in the hidden layers can be specified by the option Number of
Neurons in Hidden Layer. For each model variant, the neural network provides the
parameter value at the specified time step.
The neural network training is performed by correcting the weights of the connections
between neurons: w(1)_11 , ..., w(1)_nn , ..., w(m)_11 , ..., w(m)_nn (m is the number of
hidden layers of neurons). The network weights vary until the required deviation of the
output parameter value (y in figure 83) from its calculated value is obtained or the specified
Number of Training Epochs is reached.
!
The number of neurons in the hidden layer and the number of training epochs are selected
empirically. The number of neurons should not be too large, otherwise the network works
well only for the training set, nor too small, otherwise the network cannot be properly
trained.
Figure 84 shows the Proxy model obtained using the artificial neural network. The number
of neurons in the hidden layers is set equal to 20 and the number of training epochs is 100.
It can be seen that the quality of the Proxy model is high: R2 equals 0.999.
Figure 84. Proxy model obtained using the artificial neural network.
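For illustration, a comparable small network proxy with the same settings (20 hidden neurons, 100 epochs) can be sketched with scikit-learn (made-up data; not the tNavigator implementation):
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((30, 3))                         # variants x variables
y = X @ np.array([2.0, -1.0, 0.5])              # made-up target parameter
proxy = MLPRegressor(hidden_layer_sizes=(20,),  # 20 neurons in the hidden layer
                     max_iter=100).fit(X, y)    # training epoch limit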
Variables can be continuous and discrete (see section 3.5). The maximum number of
variants, the distribution type, etc. can be specified by the user (see figure 85) when starting
the Monte Carlo experiment.
To run a calculation, press the button Start Monte Carlo and specify the settings of the
Monte Carlo experiment. The tab Monte Carlo Results will be created automatically. On
this tab, the following instruments are available for further analysis of the obtained model
variants: Results Table, Crossplot, Histogram, Analysis and Cdf.
Figure 86. Crossplot: Oil rate (for different model variants) along Y axis and the variable
M_PERM_FIPNUM_2 along axis X.
The groupset Experiments contains variants grouped in accordance with the carried out
experiments (see section Experimental Design). The groupset Variants includes all model
variants.
To create a user defined group of variants, select the required variants of the model in the
tree of variants and right-click on them. In the pop-up menu select Add Variants to Group.
Create new Groupset. Specify a groupset name in the pop-up window. To edit a group,
press the button Groupsets Manager, or in the menu Add Variants to Group select Call
Groupsets Manager.
Variants from one group have the same color. Press the button Colorize to colorize all
variants of the model according to the available groups in the tree of variants and in the
tabs Mds, Graphs and Crossplot. Variants not included in any group are colorized with
grey. For the groupset Variants there is a possibility to colorize variants according to the
gradient of a selected parameter (e.g., oil rate, water rate etc.), as shown in figure 87. In the
dialog Groupsets Manager, select the groupset Variants, press the button Add gradient
and select the parameter used to create the gradient (see figure 88).
• Select a color for the variants included in the group by pressing the color rectangle corresponding to the group. To restore the default colors press the button Reset colors to defaults;
• Add variants to the selected group. Select variants in the tree of variants. Open the dialog Groupset Manager, tick a group to add the selected variants to, and then press the button Add variants to this group. Alternatively, variants selected in the tree can be added to a group as follows: right-click on the variants and in the pop-up menu select Add variants to group and the group to add the variants to.
Moreover, model variants can be moved between groups belonging to one groupset. Select variants in a group (or in Other) and right-click on them. In the pop-up menu select a group to move these variants to. To delete the selected variants from a group, select Remove variants from group in the pop-up menu (see figure 89).
Model variants included in a group are visualized with a color (specified in the dialog Groupset Manager) in the tabs Mds, Graphs or Crossplot. To set a group's color or edit a group in these tabs press the button Settings (see figure 90). In the pop-up dialog press the button Select coloring mode and then, in the pop-up dialog Groupset Manager, select a group for setting a color or editing. Two groups of variants (red dots – GroupSet[1], green dots – GroupSet[2]) are shown in figure 90. Other variants have a gray color.
When switching to the Graphs tab you can tune the color of graphs using steps similar to those described above. Press the button Settings and select a color for the created group. As can be seen in figure 91, graphs corresponding to the group GroupSet[1] are colored red, graphs from GroupSet[2] are colored green, while other ones are gray.
$$R^2 = 1 - \frac{\sum_k \left(P_k(C) - T_k\right)^2}{\sum_k \left(P_k(C) - \overline{P(C)}\right)^2}$$

where $T_k$ are the historical values, $P_k(C)$ are the calculated values and $\overline{P(C)}$ is the mean of the calculated values.
Each coefficient shows the closeness of the calculated data $P_k(C)$ to the trend. If the coefficient R2 equals 1, the calculated data coincide with the historical data. If the coefficient R2 is close to zero, the historical and calculated data differ significantly.
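As an illustration, the coefficient can be computed directly from two series following the formula above (calc and hist are placeholder arrays):

```python
import numpy as np

def r2_coefficient(calc, hist):
    """R2 of calculated data against historical data, per the formula above."""
    calc, hist = np.asarray(calc), np.asarray(hist)
    return 1.0 - np.sum((calc - hist) ** 2) / np.sum((calc - calc.mean()) ** 2)

# Placeholder rates for one well
calc = [101.0, 98.5, 97.2, 95.0]   # calculated oil rate
hist = [100.0, 99.0, 97.0, 95.5]   # historical oil rate
print(r2_coefficient(calc, hist))
```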
The following objects can be set in a table:
• Objects;
• Fields;
• Groups.
Objects and parameters shown in the R2 table can be added to or removed from the table using the buttons Objects and Parameters, respectively (see figure 92). You can set bound values (Bad match value and Good match value) corresponding to bad and good quality of history matching. If the coefficient R2 is higher than Good match value, the history matching quality is good and the table's cell is highlighted in green. If the coefficient R2 is lower than Bad match value, the history matching quality is bad and the table's cell is highlighted in red. Intermediate values of the coefficient R2 are highlighted in yellow. Moreover, this table can be calculated for a selected model variant or group of variants. To do this, select a variant or a group of variants in the list of variants and press the button R2 table.
For the convenience of users, templates can be added (deleted) using the buttons Add New Template (Delete Template). Different objects and parameters can be set for each template. Figure 92 shows an example of the R2 table calculated for the 8th model variant. It can be seen that for the well 'PRO-15' the calculated oil rate coincides with the historical rate (the coefficient R2 equals 0.999); however, the water rates are quite different (the coefficient R2 equals 0.072).
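The cell highlighting logic described above can be expressed compactly (a sketch; the bound values are user-specified, and the defaults below are assumptions):

```python
def match_quality(r2, bad=0.3, good=0.9):
    """Classify an R2 value the way the table highlights cells."""
    if r2 >= good:
        return "green"   # good history matching quality
    if r2 <= bad:
        return "red"     # bad history matching quality
    return "yellow"      # intermediate quality

print(match_quality(0.999), match_quality(0.072))  # green red
```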
6.16. Clusterization
Clusterization is the grouping of items in such a way that items in one group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). Then one representative can be taken from each cluster and used for forecast calculations.
Suppose that as a result of history matching we have a set of vectors of variables M = {m1, ..., mK} for K variants of the model, i.e. the ith model variant has a vector of variables mi. The vector's length equals the number of variables N defined in the history matching project. Consider the N-dimensional space $R^N$, in which each object is a model variant and the object's coordinates are the vector of variables of this variant.
To cluster the set of vectors M = {m1, ..., mK} in the space $R^N$ into L clusters, the K-means algorithm [5, 6] is used. The algorithm tends to minimize the total square deviation of cluster points from cluster centers:

$$V = \sum_{i=1}^{L} \sum_{m_j \in K_i} (m_j - \mu_i)^2$$

where $\mu_i = \{\mu_i^1, \dots, \mu_i^N\}$ is the centroid of the cluster $K_i$:

$$\mu_i = \frac{1}{|K_i|} \sum_{m_j \in K_i} m_j$$
where $|K_i|$ is the number of points included in the cluster $K_i$. Having calculated the centroids, the available set of items M is subdivided into clusters again in such a way that the distance between an item and the new cluster center is minimal. This means that for each vector $m_j$ from the set M the distances $d_i$ between the point $m_j$ and all cluster centroids are calculated:
$$d_i = d(\mu_i, m_j) = \sqrt{\sum_{s=1}^{N} \left(\mu_i^s - m_j^s\right)^2}, \qquad i = 1, \dots, L$$
An item (vector) is included in the cluster $K_p$ if the distance between it and the cluster's centroid is minimal. Thus, when the algorithm finds the minimum distance $d_p$ for a vector, i.e. $d_{min} = d_p$, this vector is included in the cluster $K_p$.
The algorithm terminates when the cluster centroids no longer change. This happens in a finite number of iterations, since the number of possible subdivisions of a finite set of items is limited and at each iteration the total quadratic deviation V does not increase; therefore, the algorithm cannot loop. The initial cluster centroids are selected in such a way that the distances between centroids are maximal.
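A minimal sketch of this clustering step with scikit-learn (an illustration under simplifying assumptions, not tNavigator's implementation; the variant vectors M are synthetic, and the variant nearest to each centroid stands in for a cluster representative):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
M = rng.uniform(0.0, 1.0, size=(40, 5))  # K=40 variants, N=5 variables

L = 3  # number of clusters
km = KMeans(n_clusters=L, n_init=10, random_state=1).fit(M)

# One representative per cluster: the variant nearest to the centroid.
for i, mu in enumerate(km.cluster_centers_):
    members = np.where(km.labels_ == i)[0]
    d = np.linalg.norm(M[members] - mu, axis=1)
    print(f"cluster {i}: representative variant {members[np.argmin(d)]}")
```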
If clusters appear as shown in figure 93, this means that the values of the variables of the variants included in a cluster are close to each other. Thus, in order to create a model forecast it is enough to take only one variant (representative) from each cluster. In contrast to history matching, in this case the objective function is maximized.
The advantage of this approach is, first, that it decreases the number of forecasts from the number of all variants to the number of clusters. Second, among similar values of the objective function genuinely different sets of parameters are selected.
Further, it is required to recalculate the selected variants with different writing settings (by default only well data are recorded on disk, but to run a forecast calculation grid properties (pressure, saturations etc.) are needed). On the tab Calculations the forecast variants will appear in a queue, and the calculations of these variants will start as soon as the corresponding base cases (selected variants) are recalculated and all the required information is recorded.
Figure 97 shows the future production range provided by different HM variants for the well "PRO-N". Different HM variants provide different estimates of future production for the well "PRO-N".
Figure 97. Production range provided by different HM variants for the well "PRO-N".
In the example of a schedule file (see figure 98) a new well "PRO-N" is specified and all wells are on BHP control (keyword WCONPROD, see 12.19.42). In the keyword DEFINES (see 12.1.25) two variables, BHP1 and BHPN, are defined, which are the bottom hole pressures for the well "PRO-1" and the new well "PRO-N".
If Create Forecast Experiment Group From Selected Variants is chosen, then for each selected HM variant a group of forecasts will be calculated. The number of forecasts is defined by the selected experiment.
For the example of forecast creation shown in figure 99, the Tornado experiment is selected. As can be seen in figure 100, for each history matching variant five forecast variants have been calculated.
8. Workflows
All of the Designer modules in tNavigator support Python based workflows. This feature
enables users to record and replay sequences of functional steps for: input data interpretation,
building static models, dynamic simulations, postprocessing of results, uncertainty analysis or
history matching. Workflows can also be used for connecting various modules of tNavigator,
calling external user scripts and third-party software like Excel™.
For example, one could set up an arbitrary user-defined workflow, which would include step-by-step building of a structural model in Geology Designer followed by snapping seismic surfaces to match markers, grid generation, upscaling, SGS property interpolation and dynamic model initialization with static and dynamic uncertainty variables. This static-to-simulation workflow can be run from the Assisted History Matching module and provide comprehensive sensitivity analysis of simulation results with respect to variations of static and dynamic parameters.
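Purely as an illustration of the idea, such a chained workflow could be sketched as a Python function; every name below (project, open_model, run_calculation) is a hypothetical placeholder, not tNavigator's actual workflow API:

```python
# Hypothetical sketch of a recorded workflow; real API names differ.
def static_to_simulation_workflow(project):
    """Chain static-model steps into a dynamic-model run (names are placeholders)."""
    model = project.open_model("structural_model")        # Geology Designer model
    model.run_calculation("Snap Surfaces to Markers")     # tie surfaces to well markers
    model.run_calculation("Generate Grid")                # build the 3D grid
    model.run_calculation("Upscale Properties")           # coarsen static properties
    model.run_calculation("SGS Interpolation")            # stochastic property fill
    project.run_calculation("Initialize Dynamic Model")   # hand over to simulation
```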
• Dropdown list lets you select a workflow among those present in the project.
The middle column of the window contains the calculations already added to the current
workflow. They can be executed all together or in a selective manner, see Running workflow.
Besides those, it contains the list of model variables, see Creating variables.
Between these columns is the interface for handling individual calculations, including the
following elements:
• Up, Down move the selected calculation up and down the sequence within the
current workflow.
• Show code displays a read-only Python source code of the selected calculation.
The right column of the window contains the parameters of the currently selected calcula-
tion.
To use a variable in a calculation, type its name in place of a parameter value.
• Calculation. Type the variable name in place of a parameter value. For example, in the calculation Adjust Variogram the variogram's Azimuth and Ranges are replaced by variables (see figure 104). The initial variable value is shown to the right. To check the workflow correctness use the button Test. If all parameters are set correctly, the calculation is marked green; otherwise it is marked red.
In the calculation Adjust Analytical RP Table all parameters specified via variables are shown in green.
From Geology and Model Designer the AHM module can be launched in two ways:
• Using a table.
• Request Configuration of Calculation Options Before First Run. If the option is ticked, the dialog Select calculation options will appear:
Figure 105. The launch of AHM project using a current hydrodynamic model.
• Launch Sensitivity Analysis. See section Sensitivity Analysis for details. The following experimental designs are available:
• Launch History Matching Optimization. If the option is ticked, the matching optimization will be carried out using the specified objective function and the selected optimization algorithm (see below).
◦ Objective function. See section Objective Function for details. The following objective function types are available:
• Use Oil Rate Mismatch;
• Use Gas Rate Mismatch;
• Use Water Rate Mismatch;
• Clusterize Best Models. If the option is ticked, model variants will be grouped into clusters (see section Clusterization for details). The following options should be specified:
• Number of Best Models. Specify the number of model variants that will be grouped into the specified Number of Clusters;
• Number of Clusters.
!
If after pressing OK in the dialog Select Options for Creating AHM Project the program cannot create a hydrodynamic model, then a matching project will not be created.
• there is no universal way to subdivide the model grid into regions, therefore it is not clear how to choose these regions;
• the geological structure of the model can be violated, since the property values in different regions are multiplied by different independent multipliers.
tNavigator implements another approach based on the discrete cosine transform (DCT) algorithm. This approach overcomes the above-mentioned drawbacks while using a smaller number of variables.
where

$$l_{comp} = \sum_{j=N'+1}^{N} l_{i_j}$$

and the relative information weight of a single decomposition term $l_i$ is

$$E(l_i) = \frac{|c_i|^2}{\sum_{j=1}^{N} |c_j|^2} \times 100\%.$$
Then the relative information weight for the set of vectors $\Omega = \{l_{i_1}, l_{i_2}, \dots, l_{i_{N'}}\}$ is calculated as:

$$E(N') = \sum_{j=1}^{N'} E(l_{i_j})$$
and shows the portion of data (in comparison with all property data) contained in the set Ω. The relative information weight is a function of the number N'. A steep increase of the E(N') function with increasing N' means that the property data are contained in a small number of coefficients (the data are well correlated) and fewer variables need to be defined. A flat E(N') function indicates that many coefficients are required to reproduce the main features of the property, i.e. many variables should be defined in the model.
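For illustration, the ranking of decomposition terms by relative information weight can be sketched for a 2D property (a simplified synthetic example; tNavigator's internal DCT implementation is not exposed):

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
# Synthetic smooth 2D "property" (e.g. a permeability layer) plus noise
x, y = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
prop = np.exp(-3 * ((x - 0.4) ** 2 + (y - 0.6) ** 2)) \
       + 0.05 * rng.standard_normal(x.shape)

c = dctn(prop, norm="ortho")             # DCT coefficients c_i
w = (c ** 2).ravel() / np.sum(c ** 2)    # per-term information weight
w_sorted = np.sort(w)[::-1]              # rank terms by weight

# E(N'): cumulative relative information weight of the N' largest terms
E = 100.0 * np.cumsum(w_sorted)
for n in (1, 5, 20, 100):
    print(f"E({n}) = {E[n - 1]:.1f}%")   # steep growth => well-correlated data
```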
The new model variables are the multipliers W1, W2, .... The decomposition terms with the largest relative information weights, which are multiplied by W1, W2, ..., are denoted $\{\tilde{l}_1, \tilde{l}_2, \dots, \tilde{l}_k\}$. The portion of relative information weight (i.e. the portion of data) contained in this set of vectors equals Evariation (specified by the option Variation).
The set of vectors $\{\tilde{l}_1, \tilde{l}_2, \dots, \tilde{l}_k\}$ consists of the first k vectors of the set $\Omega = \{l_{i_1}, l_{i_2}, \dots, l_{i_{N'}}\}$ such that the sum of the relative weights of these k vectors is higher than Evariation. If the value of Evariation (specified by the option Variation) equals 100%, then all vectors from the set Ω will be selected as the k-vector set.
The required number of model variables is denoted Nvar (and specified by the option Number of output variables). If the number k of vectors with large relative weights is larger than the number of variables Nvar, then the set of vectors $\{\tilde{l}_1, \tilde{l}_2, \dots, \tilde{l}_k\}$ is subdivided into Nvar groups Gi of consecutive vectors, in such a way that the relative information weight of each group is of the same order. The multiplier Wi is assigned to each group, and all vectors of the group are multiplied by this multiplier.
Thus the decomposition (9.1) can be rewritten as:

$$m = \tilde{l}_1 + \sum_{i=1}^{N_{var}} W_i \sum_{\tilde{l}_j \in G_i} \tilde{l}_j + \sum_{j=k+1}^{N'} l_{i_j} + l_{comp} = m_{Mean} + \sum_{i=1}^{N_{var}} W_i m_i + m_{noise} + l_{comp}$$

where $m_{Mean}$ is the mean value, $m_i$ are the grouped terms of the decomposition and $m_{noise}$ is the rest of the data. To apply the transform in Model Designer:
1. Import the model to Model Designer: go to the top menu Document and select Import
Data from Existing Model;
2. Initialize a dynamic model by pressing on the top panel button Open or Reload
Dynamic Model;
4. Press the button and select Expand Grid Property in Cosines (see figure 107).
The following parameters should be specified:
5. After completion of the discrete cosine transform, several properties containing the mean value (mMean), the terms of the decomposition (mi) and the rest of the data (mrest) will be generated on the tab Geometry Objects. Property (see figure 108). The created variables and the arithmetic expression used for history matching are shown on the tabs Input variables and Calculator, respectively;
6. Select Input variables (see figure 109). The created decomposition variables will be shown to the right. Their base, Min. and Max values can be changed by double-clicking on the selected value;
7. Select Calculator (see figure 110); the decomposition formula is shown to the right;
Figure 108. Obtained DCT properties: permX_Mean contains the mean value, permX_1, permX_2 and permX_3 are terms of the decomposition, and permX_rest is the rest of the data.
8. To run history matching from the Model Designer window press the button .
!
The number of output variables can be less than Nvar. This may happen when the relative weights of the vectors $\{\tilde{l}_1, \tilde{l}_2, \dots, \tilde{l}_k\}$ are very high and the number of vectors k is less than Nvar.
This feature is also accessible as a procedure in workflow, see 8.1.
10. References
[1] J.A. Nelder and R. Mead, A simplex method for function minimization, Comput. J., 7, pp. 308–313, 1965.
[2] M. Kathrada, Uncertainty evaluation of reservoir simulation models using particle swarms and hierarchical clustering, Doctoral dissertation, Heriot-Watt University, 2009.
[3] J.B. Kruskal, Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis, Psychometrika, pp. 1–27, 1964.
[4] Richard A. Johnson and Dean W. Wichern, Applied Multivariate Statistical Analysis, 6th ed., Pearson, 2007.
[5] H. Steinhaus, Sur la division des corps matériels en parties, Bulletin Polish Acad. Sci. Math., 1956.
[6] S.P. Lloyd, Least squares quantization in PCM, IEEE Transactions on Information Theory, 1982.
[7] L. Mohamed, M. Christie, V. Demyanov, History Matching and Uncertainty Quantification: Multiobjective Particle Swarm Optimisation Approach, SPE 143067, Vienna, Austria, 23–26 May 2011.
[8] J. Hertz, R.G. Palmer, A.S. Krogh, Introduction to the Theory of Neural Computation, Addison-Wesley, 1991.
[9] C.C. Aggarwal, Neural Networks and Deep Learning, Springer, 2018.
[10] I.T. Jolliffe, Principal Component Analysis, Springer Series in Statistics, Springer, NY, 2002.
[11] G.E.P. Box, D.W. Behnken, Some New Three Level Designs for the Study of Quantitative Variables, Technometrics, Vol. 2, No. 4, 1960.
[12] N.S. Bahvalov, N.P. Zhidkov, G.M. Kobelkov, Numerical Methods, M.: «Nauka», 1987 [in Russian].
[13] Clayton V. Deutsch, Geostatistical Reservoir Modeling, Oxford University Press, 2002.
[15] S.D. Conte, Carl de Boor, Elementary Numerical Analysis, McGraw-Hill Book Company, 1980.
[16] J.-P. Chilès, P. Delfiner, Geostatistics: Modeling Spatial Uncertainty, Wiley & Sons, Canada, 1999.
[17] V.V. Demianov, E.A. Savelieva, Geostatistics: Theory and Practice, M.: «Nauka», 2010 [in Russian].
Phone: +1 713-337-4450
Fax: +1 713-337-4454
Address: 2200 Post Oak Boulevard, STE 1260, Houston, TX 77056
E-mail: [email protected]
Web: http://rfdyn.com