SPE 154400 Coupled Static/Dynamic Modeling For Improved Uncertainty Handling
This paper was prepared for presentation at the EAGE Annual Conference & Exhibition incorporating SPE Europec held in Copenhagen, Denmark, 4–7 June 2012.
This paper was selected for presentation by an SPE program committee following review of information contained in an abstract submitted by the author(s). Contents of the paper have not been
reviewed by the Society of Petroleum Engineers and are subject to correction by the author(s). The material does not necessarily reflect any position of the Society of Petroleum Engineers, its
officers, or members. Electronic reproduction, distribution, or storage of any part of this paper without the written consent of the Society of Petroleum Engineers is prohibited. Permission to
reproduce in print is restricted to an abstract of not more than 300 words; illustrations may not be copied. The abstract must contain conspicuous acknowledgment of SPE copyright.
Abstract
In the petroleum industry, history-matched reservoir models are used to aid the field development decision-making process.
Traditionally, models have been history-matched by reservoir engineers in the dynamic domain only. Ideally, if any changes
are required to static parameters as a result of history matching the dynamic model, then these should be reflected directly in
the static reservoir model, ensuring consistency between the static and dynamic domain. In addition, static model
uncertainties are often not evaluated in the dynamic domain, which can result in the detailed modeling of geological features
that have little impact on the dynamic behavior of the reservoir or the resulting development decision.
This paper demonstrates a workflow where the reservoir simulator and static modeling package are closely linked to promote
a more integrated approach to reservoir model construction, facilitating the interaction between subsurface disciplines. Using
either the reservoir simulator or the static modeling package as the platform, the output of the workflow is a sensitivity
analysis of the uncertainties related to structure, rock properties, fluids and rock-fluid interactions. Computer-assisted history
matching methods (i.e. adjoint-based and Design of Experiments) are used to find the parameter values that result in a history
match model. The workflow is described for both a synthetic model and also a reservoir model from a real field case.
This methodology results in improved history-matched models and a better understanding of the static and dynamic
subsurface uncertainties and their importance, leading to more informed decision-making. Furthermore, it is anticipated to result in faster completion of history matching studies.
The method presented here can significantly enhance the understanding of the impact of both static and dynamic subsurface
uncertainties on development decisions. In addition, it offers a platform where all subsurface professionals involved in
reservoir model construction and simulation can more optimally focus their efforts on improving the integrated understanding
of their reservoirs.
Introduction
A number of issues have been identified within the ‘standard’ reservoir modeling processes used industry wide for support of
field development and reservoir management decision making. The first shortcoming of the typical workflow is the linear
character of the modeling process. Typically geological insights and seismic data are first interpreted and the results of this
interpretation are combined with petrophysical interpretation and used to construct a static model. This is then exported into
the dynamic domain, where the dynamic model is built and economics are evaluated. The data interpretation involved in this
process requires multiple assumptions to be made; however, the feedback loops used to verify them are often very limited or
non-existent. Another shortcoming of the typical workflow is that there is little or no focus, at each step of the process, on delivering uncertainty estimates related to dynamic outcomes; rather, each discipline tries to pass the ‘answer’ to the next one. Consequently
uncertainty is ‘re-invented’ at each step in order to explore volumetric ranges and obtain history matches, generally without
any integrated QC of the adjusted ranges. This sequential modeling often makes it difficult to investigate the impact of static
model uncertainty in the dynamic realm, because the static parameter variability and its impact has effectively been pre-
determined. Hence interdependencies of static parameters cannot be properly identified and history matches often lead to
manipulation of dynamic parameters or arbitrary modification of permeabilities, without consistent changes to the pre-cursor
properties (such as porosity and facies) from which they were derived.
Key to addressing these limitations is the need to develop an efficient and iterative modeling loop in which the impact of all
modeling parameters and their uncertainties on a particular decision outcome can be represented, i.e. the so-called ‘Big
Loop’. Big Loop refers to integrated reservoir modelling where model components are simultaneously tested against an
outcome, thereby quantifying the potential impact of uncertainty on development decisions. This is in contrast, for example,
with modelling practice that is based on combinations of multiple best guess parameters and static model realizations, which
are often labeled ‘low’, ‘mid’ and ‘high’ (usually based on STOIIP and therefore not accounting for reservoir connectivity)
before the dynamic outcomes have been quantified and carried forward for the remainder of the modeling exercise. In the Big
Loop workflow ‘low’, ‘mid’ and ‘high’ cases are expected to be outcomes rather than inputs. For example, representations of
Net-to-Gross should be quantified against decisions in order to identify ‘low’, ‘mid’ and ‘high’ cases relative to an outcome
metric, or in order to verify whether those uncertainties are relevant for given decisions.
Integrating different modeling components introduces another challenge, namely the integration of different software
packages to enable the exploration of static and dynamic uncertainties during sensitivity studies and history matching. There
is an industry-wide recognition that sub-surface modelling has to move towards the more integrated Big Loop. While the Big Loop concept has existed for several years and various publications addressing the full workflow or its individual parts can be found [Caers (2003), Seiler et al. (2009a), Suzuki et al. (2006), Hamman et al. (2003), Elrafie et al. (2009)], little has been published on its application to real field studies [Hoffman et al. (2005), Gross et al. (2004)]. Schlumberger Information
Solutions and Roxar in particular have already marketed software solutions that approximate the Big Loop.
Schlumberger have coupled their dynamic simulators (ECLIPSE, Frontsim and INTERSECT) to their static modeling package (Petrel), which enables them to set up workflows where static and dynamic modeling can be better integrated; in fact
INTERSECT does not even have a standalone capability. Roxar have created RMS Uncertainty Management and
ENABLE™, which are uncertainty management and assisted history matching tools that can be coupled to (static and)
dynamic modeling tools and which enable the quantification of subsurface uncertainties across the complete modeling
workflow. SPT Group offers MEPO workflows that extend back to the geo-model thanks to the MEPO Link plug-in for Petrel,
also enabling the inclusion of static model parameters during dynamic evaluations. Shell have coupled their dynamic
reservoir simulator to Petrel. Saudi Aramco (Powers) and Chevron (CHEARS) have followed the same path. Now, the
integration is being progressed to allow greater bandwidth of data between the static model builder and dynamic simulation
tool and to make the Big Loop more automated.
Gupta et al. (2008) used them in history matching studies. There are also many case studies on the application of history matching and prediction to field examples, such as Kabir and Young (2001) on the Meren Field, Alessio et al. (2005) on a Luconia carbonate field, Peake et al. (2005) on the Minagish Oolite reservoir and King et al. (2005) on the Nemba field.
Once fields have started producing, the next step in the Big Loop is to improve the accuracy of the existing models, both static and dynamic, using actual field performance data, ranging from production to 4D seismic data. The objective of history-
matching is to reduce uncertainty in the parameters, such that a simulation using their value (or a sample of their distribution)
produces synthetic data that matches the historical data, within certain bounds. This in turn improves forecasts of future field
performance and assists better planning of infill wells and Well and Reservoir Management in general, thus maximizing
hydrocarbon recovery from the reservoirs.
Assisted history matching (AHM) can also be used to flag problematic areas in the static or dynamic models, where certain features, key to the understanding of the reservoir development, have not been captured. In this paper we use the term
’undermodelling’ to describe such missing features. The undermodelling can be identified by history matching the model,
during which gridblock permeability (or other gridblock property) is updated to match the historical production data using
e.g. an adjoint gradient-based assisted history matching method (for a background on adjoint-based methods, see Chen et al.
(1974)). The resulting permeability field is used as a proxy to identify areas of the model where there is undermodelling i.e. a
non-physical permeability change is needed because other parameters are not taken into account correctly. In the first
instance, those parts of the model that require the most extreme updates are investigated further. To find the reasons for such
non-physical behavior usually requires iterations between the static and dynamic models. Some indications of
undermodelling are related to geology (such as subseismic faults, incorrect depositional system analogues, even channel
directions), whilst some require dynamic or well model updates. There is no unique way in which gaps in physical
understanding and the associated undermodelling indicators can show up. Understanding what has happened requires
discussions among all disciplines involved in constructing the models. This process is termed model maturation: allowing
parameters to become non-physical in order to obtain a match and trying to understand why assisted history matching results
in that solution. Joosten et al. (2011) provide a more detailed explanation of the model maturation process.
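As a minimal sketch of how such a proxy flag could be computed (the threshold and function names are illustrative, not part of any tool described here), cells whose permeability update exceeds some factor in log space can be flagged for multi-disciplinary review:

```python
import numpy as np

def undermodelling_indicator(k_prior, k_matched, threshold=1.0):
    """Flag gridblocks where history matching pushed permeability far from the prior.

    k_prior, k_matched: arrays of gridblock permeability before and after an
    adjoint-based update. Cells whose update exceeds `threshold` in log10 space
    (a factor of 10 by default) are candidates for undermodelling, i.e. places
    where a non-physical permeability change compensates for a missing feature.
    """
    log_change = np.abs(np.log10(k_matched) - np.log10(k_prior))
    return log_change > threshold  # boolean map to review with all disciplines
```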
Model maturation, as well as other parts of integrated modeling, will require multi-disciplinary discussions and therefore
should only be used as a semi-automated part of the Big Loop workflow, i.e. results of automated history matching tools or
uncertainty studies need to be interpreted by domain experts (i.e. they Assist, not Automate the History Match).
Fig. 1. The Big Loop workflow (schematic): 3D seismic, log and core data feed structure, geology, fluid and saturation-function variations into the static model; the dynamic model combines these with production data and 4D seismic interpretation to support forecasting and business decisions (subject to cost, surface and fiscal variations), while parameter screening, re-parameterization and model maturation/history matching/uncertainty management algorithms close the loop.
Parameterization
Due to limited information regarding the subsurface, many modeling components are uncertain. The uncertainty in those
components can be expressed in terms of uncertain model parameters. Parameters can represent a number of different
uncertainties, such as structural variation, seismic interpretation, reservoir architecture, facies modeling, petrophysical
property modeling, saturation functions, fluid properties or aquifer data. However, they can all be classified into three different types:
1. Continuous parameters, that is, parameters for which, within the specified ranges, any value is possible.
2. Discrete parameters, that is, parameters that take values from a finite or countable set, of which the order is
significant.
3. Categorical parameters, that is, parameters that have two or more categories, but there is no intrinsic ordering to
the categories.
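To make the distinction concrete, the sketch below shows one way such a parameter taxonomy could be represented in code (the class and parameter names are illustrative; the FAULTTHROW levels match the synthetic example later in this paper, while the OWC bounds are made up for the example):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Sequence

class ParamKind(Enum):
    CONTINUOUS = "continuous"    # any value within the specified range
    DISCRETE = "discrete"        # countable set, order is significant
    CATEGORICAL = "categorical"  # finite set, no intrinsic ordering

@dataclass
class UncertainParameter:
    name: str
    kind: ParamKind
    low: Optional[float] = None        # bounds for continuous parameters
    high: Optional[float] = None
    levels: Optional[Sequence] = None  # values for discrete/categorical parameters

# Illustrative instances
owc = UncertainParameter("OWC", ParamKind.CONTINUOUS, low=-2100.0, high=-2050.0)
fault_throw = UncertainParameter("FAULTTHROW", ParamKind.DISCRETE, levels=[5, 15, 30])
facies_seed = UncertainParameter("FACIES_SEED", ParamKind.CATEGORICAL, levels=[101, 202, 303])
```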
This section identifies a few obstacles that make the history matching in the Big Loop workflow difficult. It is not intended as
a complete or rigorous description of how to handle the introduction of ‘problematic’ parameters when moving from
‘traditional’ to Big Loop history matching, but instead will identify a few routes to overcome the problems.
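For concreteness, the mismatch function referred to below can be written in the weighted squared-difference form used throughout this paper (a standard formulation; taking the weights as inverse error variances is a common, but not the only, choice):

```latex
J(\theta) = \sum_{i=1}^{N} w_i \left( d_i^{\mathrm{obs}} - d_i^{\mathrm{sim}}(\theta) \right)^2 ,
\qquad w_i = 1/\sigma_i^2
```

Gradient-based AHM updates the continuous parameters θ in the direction of −∂J/∂θ (computed, for example, with the adjoint method); the difficulties discussed in this section arise precisely where this gradient does not exist.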
When a brown field situation is considered, the production history of the field and its reservoirs is matched. Current practice is that only parameters in the dynamic simulation model are taken into account during history matching. These are in most cases continuous parameters, with respect to which the gradients of a quantitative mismatch function exist. This has a big advantage: gradient information can be used to determine parameter updates, either directly or through a proxy model, that decrease the value of the mismatch function and improve the history match. The disadvantage of the traditional approach is that the updated static parameters (usually permeability) are no longer consistent with the properties in the static model that drove their distribution.

In Big Loop history-matching, a large number of parameters from the static reservoir model are included in the list of uncertain parameters, to ensure that the static and dynamic models are kept consistent. Most of the static parameters are continuous as well; that holds e.g. for the structural ones (surfaces, layer thicknesses, fault dimensions) as well as for the ones describing the petrophysical properties (porosity, Net-to-Gross, permeability, contact levels). Although the petrophysical properties themselves are continuous, and the same assisted history matching techniques can in principle be applied to them, from a geological perspective it makes more sense to estimate their spatial distribution over the field, given the historical data. Many of the methods available to model that distribution are geostatistical. These methods involve sampling from distributions and make use of categorical, stochastic parameters (random seeds). The use of random seeds significantly complicates the Big Loop history-matching process, since gradients with respect to these parameters, required for the application of AHM methods, do not exist (by definition). This also holds for the continuous parameters of the different geostatistical methods (e.g. variogram range, variogram angle, nugget), even if the random seeds have been fixed. This leaves only a forward method for finding the petrophysical property distribution that fits the historical data: trial-and-error. A forward method generally needs many more simulation runs than the (inverse) AHM methods. Hence, considering the usually long simulation times, it is a very expensive way of doing history-matching. Note that if stochastic, categorical and (dynamic) continuous parameters are addressed simultaneously, the ‘trial’ step involves not only a single forward run, but a full history match of the continuous parameters as well.

We have identified four different approaches to keep the introduction of stochastic, categorical parameters in Big Loop history-matching manageable. The following workflow is recommended:
1. Be as deterministic as possible
Although a large number of uncertainties increases the odds of capturing the ‘true’ reality, too large a number of them may prove difficult to analyze. Introducing more determinism in the models requires a stronger ‘stand’ by the geologist on what geology is expected, but it can help to build simpler models. Having a proper conceptual geological model in place before actually building the static model is critical in helping to do so. Early screening of the coarse scale model can also assist in understanding the key ‘static’ features that may have an impact on the dynamic simulation. This is absolutely critical, as making a stand too early on what is driving reservoir performance and connectivity can be the cause of significant re-work later in the modeling process.
2. Screen categorical parameters before sampling
Not all stochastic categorical parameters will have a big impact on the reservoir production. Therefore, it makes sense to first evaluate whether a certain categorical parameter influences the history-match quality before drawing many samples from it. Although gradients do not exist for these types of parameters, the sensitivity of the mismatch objective to these parameters can be determined with only a relatively small number of simulations (see the sketch after this list). If the sensitivity is small, changing the parameter does not affect the history-match quality much and a deterministic value can be chosen. The only thing that can be concluded from large sensitivities is that these categorical parameters do need to be taken into account in the history-match procedure.
3. Reparameterize
Petrophysical properties are typically modeled using geostatistical methods, but they can also be modeled approximately using certain reparameterization methods (see the reparameterization discussion in Oliver et al. (2008)). It is difficult to realistically approximate geology with these methods while still honoring the geostatistics, but they can simplify the history-matching problem considerably. A balance needs to be found between ‘geological realism’ and ‘ease of history-matching’.
4. Screen realizations
A large number of stochastic categorical parameters can lead to an explosion of possible realizations. However, many of these realizations may not be necessary to take forward in the decision-making process. Screening methods are aimed at evaluating the need to take the appropriate realizations forward, without performing lengthy simulations to make that assessment (see the sketch below).
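The sketch below illustrates the screening idea behind approaches 2 and 4 (run_simulation is a placeholder for a full forward run with all other parameters at base values; it is not an actual tool interface): evaluate the mismatch once per category and only carry the parameter into history matching if the spread across categories is large.

```python
def objective(simulated, observed, weights):
    """Weighted squared mismatch between simulated and observed data."""
    return sum(w * (o - s) ** 2 for s, o, w in zip(simulated, observed, weights))

def categorical_sensitivity(run_simulation, categories, observed, weights):
    """One-at-a-time screening of a categorical parameter.

    Runs one simulation per category (static rebuild plus dynamic run) with
    all other parameters fixed, and returns the per-category mismatches and
    their spread. A small spread suggests the parameter can be fixed at a
    deterministic value; a large spread means it must stay in the HM loop.
    """
    mismatches = {c: objective(run_simulation(category=c), observed, weights)
                  for c in categories}
    spread = max(mismatches.values()) - min(mismatches.values())
    return mismatches, spread
```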
In a green field scenario, the stochastic and categorical parameters are also problematic. Typically, uncertainty studies are performed for green fields, during which parameters are screened and only those with the largest impact on the field development decision are carried forward in the modeling process, where they are used for forecasting and uncertainty quantification. The parameters are screened by comparing their absolute impact on the observed outputs, and by selecting only the relevant ones. The screening can be done even for stochastic parameters; however, since a change in a stochastic parameter results in a non-continuous response, the proxies used in Design of Experiments methods are of poor quality, and therefore they cannot be used for the prediction uncertainty studies. Using the simulation model directly instead can be an expensive way of doing forecast uncertainty studies.
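The proxy problem can be illustrated with a minimal response-surface sketch (a second-order polynomial fitted by least squares, one common Design of Experiments choice; names are illustrative): the fit quality, e.g. R², collapses when a stochastic parameter makes the response discontinuous.

```python
import numpy as np

def fit_quadratic_proxy(X, y):
    """Fit y ~ b0 + sum(bi*xi) + sum(bij*xi*xj) by least squares.

    X: (n_runs, n_params) design matrix from a DoE (e.g. a Latin hypercube);
    y: simulated response for each run (e.g. cumulative oil or the mismatch).
    Returns the coefficients and R^2; a low R^2 flags a poor proxy, which is
    what happens when, say, a random seed changes between runs.
    """
    n, p = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(p)]                                 # linear
    cols += [X[:, i] * X[:, j] for i in range(p) for j in range(i, p)]  # quadratic
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return coef, r2
```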
Petrel driving Dynamo
This workflow is designed for geologists whose focus is on the construction of static models. The platform which executes the
workflow is Petrel and all static modeling components are designed and built in Petrel using the standard Petrel Workflow
Editor. Alternatively the Uncertainty and Optimization Workflow in Petrel can be used, that in addition to the Workflow
Editor enables the definition of parameters as uncertain variables and their ranges and distributions. To test the impact of the
underlying uncertainties and their interdependencies on the development decision a Petrel Ocean plug-in has been developed
in Shell, which couples Petrel with Dynamo. Both static and dynamic parameters are specified in the Petrel workflow. The
static properties are exported to the simulation model by the plug-in, while the dynamic parameters are exported as text files
and included during the dynamic simulation. The outputs of the simulation, such as production data, pressure and saturation
data, are imported automatically to Petrel, where they can be visualized, analyzed and compared to historical data. To
compare the model outcomes with actual performance (observed production data), the objective function that measures their weighted squared difference is calculated during the dynamic simulation and is also exported. The current limitation of this
approach is that it is not possible to perform dynamic simulations concurrently, as using the workflow manager in Petrel to
submit simulation cases is a sequential process.
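Schematically, one pass of this Petrel-driven loop could look as follows (the three callables stand in for plug-in operations; this is a sketch, not the actual Ocean or Dynamo interface):

```python
from typing import Any, Callable, Mapping, Tuple

def big_loop_iteration(
    static_params: Mapping[str, float],
    dynamic_params: Mapping[str, float],
    rebuild_static: Callable[[Mapping[str, float]], None],
    run_simulation: Callable[[Mapping[str, float]], Any],
    read_objective: Callable[[], float],
) -> Tuple[Any, float]:
    """One schematic pass: rebuild the static model with the sampled static
    parameters, run the dynamic simulation with the exported properties and
    the dynamic-parameter include files, then read back the results and the
    weighted-squared-difference objective computed during the simulation."""
    rebuild_static(static_params)              # static model regenerated in Petrel
    results = run_simulation(dynamic_params)   # properties exported, simulator run
    return results, read_objective()           # mismatch against historical data
```

In the setup described above the cases have to be submitted one after another, so this loop runs strictly sequentially.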
Dynamo driving Petrel
This workflow is designed for reservoir engineers who will mature the model based on information contained in production
or 4D seismic data by identifying the undermodelling areas or by constraining uncertain parameter ranges. The in-house
dynamic simulation toolbox (Dynamo) consists of a number of history matching and sensitivity analysis tools. Therefore it
was chosen as the platform which executes the workflow when model maturation and history matching are the key
components. In this approach the parameters that need to be varied during the static model generation are defined as
uncertain in the dynamic model and exported to Petrel. Petrel is remotely opened and the static modelling workflow is
executed. The communication with Petrel is fully automatic and no interaction is required. When realizations are created and
exported, the dynamic simulation toolbox reads them and continues the screening or history matching process. This approach
has the capability to run simulations in concurrent mode.
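The reverse coupling differs mainly in that, once Petrel has produced and exported the realizations, the dynamic runs can be dispatched concurrently, for example as follows (simulate is again a placeholder for a full dynamic run returning a mismatch value):

```python
from concurrent.futures import ProcessPoolExecutor
from typing import Any, Callable, List, Sequence

def evaluate_realizations(
    realizations: Sequence[Any],
    simulate: Callable[[Any], float],
    max_workers: int = 4,
) -> List[float]:
    """Run the dynamic simulation for each exported static realization in
    parallel and collect the mismatch values for screening or history matching."""
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(simulate, realizations))
```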
Examples
In this paper the Big Loop workflow is demonstrated on both a synthetic model and a real field. The synthetic model, called
MOAM, has been created to explore a broad range of parameters that can be changed in Petrel. In subsequent work we expect
to expand use of this model to work with other data types such as seismic data. The real field model is chosen to demonstrate
that the Big Loop workflows can also be applied to large field cases.
Fig. 2. Reservoir structure with the locations of one producer (blue well) and one injector (red well).
b. Model parameterization
The MOAM model contains continuous and discrete parameters. The parameters are listed in Table 1 and are explained in
more detail below.
• Structural variations
Because of low quality seismic data and hence an unreliable velocity model, the top horizon is an uncertain model
component, which is modeled using a Gaussian distribution with a 10 m standard deviation. The deformation in the horizon is
impacted by a seed value, which is fixed. The anisotropy and orientation of this deformation are modeled by major and minor
horizontal ranges and azimuth of a variogram, and are considered to be uncertain parameters (HORUNCMAJRANGE,
HORUNCMINRANGE and HORUNCAZI, respectively). The thicknesses of the top and bottom reservoirs (RESA, RESB),
as well as the thickness of shale layer (MIDSHALE), are uncertain too. The number of layers in the upper reservoir, shale
layer and lower reservoir were varied initially, but they were fixed in this study. Fault throw (FAULTTHROW) can take
three different values: 5, 15 and 30 meters, which results in ‘low’, ‘mid’ and ‘high’ cases.
• Geology variations
Facies distribution is modeled using Truncated Gaussian Simulation with three facies: shale, lower shoreface and upper
shoreface. The percentage of each facies is known for the upper reservoir but is uncertain for the lower one
(USFPERCENT, LSFPERCENT, SHALEPERCENT). The variogram horizontal ranges (major and minor), vertical range
The following table provides an overview of all the uncertain parameters, their types, minimum, maximum and base values,
short description and dependency with other parameters if modeled.
Apart from the fault throw, all the remaining parameters specified by the production geologists and reservoir engineers are
continuous parameters. The percentage of facies, co-kriging parameters, OWC and relative permeability parameters have a
continuous effect on the modeled property and therefore they can be history matched using gradient-based methods.
Variogram parameters, however, are inputs to the geostatistical algorithms involving random number sampling. Those
parameters, despite the fact that they are continuous, have a discontinuous effect on the modeled property and consequently
on the model dynamic response. In Fig. 3 we present different facies distributions generated using Truncated Gaussian
Simulation with different major (TGFACMAJOR) and minor (TGFACMINOR) ranges. The minor range is correlated with
the major range (as specified in Table 1). The values of those parameters are changed from the maximum major range value
to the minimum one. The facies distributions are modeled using three different facies: upper shoreface (yellow), lower
shoreface (brown) and shale (blue). We see that the generated realizations can change significantly, which will result in completely different flow behavior. History matching on those parameters is therefore meaningless. As discussed in the parameterization section, such parameters require the generation of an ensemble of representative models and the selection of ‘low’, ‘mid’ and ‘high’ cases.
Fig. 3a. Facies distribution with major range equal to 5000 m.
Fig. 3b. Facies distribution with major range equal to 4680 m.
Fig. 3c. Facies distribution with major range equal to 3720 m.
Fig. 3d. Facies distribution with major range equal to 3080 m.
Fig. 3e. Facies distribution with major range equal to 2440 m.
Fig. 3f. Facies distribution with major range equal to 1800 m.
Some of the parameters, even if they are part of a stochastic algorithm, have a continuous effect on the generated model
realizations (see Fig. 4). An example is the percentage of upper shoreface facies (USFPERCENT) used during Truncated
Gaussian Simulation. In Fig. 4 facies distributions generated with a different percentage of upper shoreface facies are
presented. The percentage varies between 35% and 45%. In this case we observe only minor changes in the facies
distributions.
Fig. 4a. Facies distribution with upper shoreface percentage 45%. Fig. 4b. Facies distribution with upper shoreface percentage 35%.
Fig. 4. Realizations of the facies distribution generated with different percentages of upper shoreface facies.
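The mechanics behind Figs. 3 and 4 can be sketched with a simplified stand-in for Truncated Gaussian Simulation (Gaussian smoothing of white noise here replaces Petrel's variogram-based simulation, so this sketch reproduces the continuous effect of the facies proportions but not the path-dependent re-draw caused by changing the range):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import norm

def truncated_gaussian_facies(shape, range_cells, proportions, seed):
    """Simplified Truncated Gaussian Simulation.

    Smooth white noise to impose spatial correlation, re-standardize, then
    truncate at Gaussian thresholds chosen so that the three facies honor
    the target proportions (shale, lower shoreface, upper shoreface).
    """
    rng = np.random.default_rng(seed)
    z = gaussian_filter(rng.standard_normal(shape), sigma=range_cells)
    z = (z - z.mean()) / z.std()
    t1, t2 = norm.ppf(np.cumsum(proportions)[:2])  # thresholds move smoothly
    return np.digitize(z, [t1, t2])                # 0=shale, 1=LSF, 2=USF

# Shifting `proportions` moves t1, t2 continuously (minor changes, cf. Fig. 4);
# in a sequential simulation algorithm, changing the variogram range instead
# re-draws the whole pattern (completely different realizations, cf. Fig. 3).
facies = truncated_gaussian_facies((100, 100), range_cells=15,
                                   proportions=[0.2, 0.4, 0.4], seed=42)
```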
The assumed measurement error of the observed data is about 5% of the corresponding values. The calculated objective function measures the weighted squared difference
between observed and simulated oil rates. As there is a large number of uncertain parameters, a screening methodology is applied. Fig. 5 presents the results of the sensitivity analysis, where the sensitivity of the objective function to all uncertain parameters is quantified and summarized in a Pareto chart. Since a few of the Petrel parameters have a discontinuous effect on the modeled properties, the quantitative interpretation of the Pareto chart is not fully correct. We can, however, conclude that changes in parameters such as OWC, FAULTTHROW, TGVARFACAZI, KROW, TGFACMAJRANGE, HORUNCAZI, HORUNCMAJRANGE, SORW, RESMAJRANGE, RESAZI and TGFACVERTRANGE significantly influence the objective function, and therefore their further update can reduce the mismatch, while the remaining parameters can be omitted from the history matching studies.
Fig. 5. Pareto chart of the sensitivity [%] of the objective function to the uncertain parameters; the least influential parameters are frozen.
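The bookkeeping behind such a Pareto-based cut can be sketched as follows (the per-parameter sensitivities come from the experiments described above; keep=11 mirrors the choice made in this example):

```python
def pareto_screening(sensitivities, keep=11):
    """Normalize absolute sensitivities to percentages, rank them, and split
    the parameters into those carried into history matching and those frozen
    at their base values.

    sensitivities: dict mapping parameter name -> absolute effect on the
    objective function (e.g. from one-at-a-time or DoE experiments).
    """
    total = sum(sensitivities.values())
    ranked = sorted(sensitivities.items(), key=lambda kv: kv[1], reverse=True)
    pct = [(name, 100.0 * s / total) for name, s in ranked]
    active = [name for name, _ in pct[:keep]]   # calibrated in the HM studies
    frozen = [name for name, _ in pct[keep:]]   # fixed at base values
    return pct, active, frozen
```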
As indicated in Fig. 5, a few static parameters have a large impact on the match to the past production data. In the current workflow those parameters are typically fixed at base values. In the Big Loop workflow the first 11 parameters from the Pareto chart are used and calibrated in the history matching studies. The results of the history matching are presented in Fig. 6.
Fig. 6. Simulated and observed oil rates for the best history matched models.
Fig. 6 depicts the simulated oil rates, generated with the history matched models (9 models: C1_I3-C9_I3), and the historical production data, indicated by the red crosses. In this example the calibrated models fit the historical data reasonably well; however, it should be noted that these results are questionable. To update the model we used a Design of Experiments methodology that builds proxies. These proxies are supposed to model the response of the model outputs to parameter changes and are used to find the best history-matched models. The use of variogram parameters made the Big Loop history-matching process complicated, because the proxies are a poor approximation of the parameter-output relationships. As discussed in the parameterization section, these types of parameters either need to be replaced by proper continuous parameters, or a trial-and-error approach has to be used.
where θ1 and θ2 are parameters that take five different value combinations. Porosity and porosity-permeability correlations are assumed to be known.
• Structure variations
Five fault seal realizations are created to capture the uncertainty about fault throw, fault zone thickness and fault
permeability. Additionally, fault transmissibility multipliers per segment (SEG1_FAULT, SEG2_FAULT, SEG3_FAULT,
SEG4_FAULT) are defined to reflect the areal variations. The CSP parameter that modifies seal factors between two main
reservoirs is modeled as categorical.
• Fluid variation
Residual gas saturation (RESIDUAL_GAS) varies between the minimum and maximum values seen in the core
measurements, while the water end-point relative permeability (WATER_ENDPOINT) is uncertain with a relatively wide range of possible values. The parameter HDA_CUTOFF models the level above which the cells in the model are gas bearing and
below which they are water bearing. For a low GRV-realization with top structure deeper, the saddle is choked, such that the
gas is able to flow towards the top of the structure in the saddle.
The described model uncertainties are a combination of static and dynamic parameters. Their types, minimum, maximum and
base values, short description and dependency with other parameters are provided in Table 2.
Name            Type         Min value   Max value   Base value   Description                                 Correlation
SEG1_MULT       continuous   0           1           1            Structural uncertainty                      -
SEG2_MULT       continuous   0           1           1            Structural uncertainty                      -
SEG3_MULT       continuous   0           1           1            Structural uncertainty                      -
SEG4_MULT       continuous   0           1           1            Structural uncertainty                      -
DTA             discrete     1           3           2            Reservoir architecture uncertainty (NTG)    -
DR              discrete     1           3           2            Reservoir architecture uncertainty (NTG)    DTA
FAULT SCENARIO  categorical  1           5           3            Fault modeling                              -
Pareto chart of the sensitivity [%] of the objective function to the uncertain parameters of the field example; the least influential parameters are frozen.
Distance-to-axis is a Petrel parameter, and it shows a significant effect on the production data. Fixing it can influence the quality of the history matching and consequently the quality of the forecast. Therefore history matching studies should be carried out with dynamic and static parameters jointly. Once we change the parameterization (such that fewer categorical parameters are used and parameters with a discontinuous effect on the generated models are replaced), the history matching studies can be performed more efficiently (see the discussion in the parameterization section). The parameterization issue, however, requires more extensive study, in which screening methods and new ways of parameterization will be explored.
Discussion
The presented Big Loop workflow offers a platform where all subsurface professionals involved in a reservoir modeling
exercise can more optimally combine their efforts to improve the integrated understanding of reservoirs. The improved
integration is expected mainly between production geologists (PG’s) and reservoir engineers (RE’s). The workflow
necessitates the PG and RE working closely together and promotes the provision of coarse scale models by the geologists
early in the project. This results in more effective use of resources as well as faster project delivery. For example, the geologist will build models at a scale that gives reliable predictions and will avoid overly complex modeling, while reservoir engineers will
update those models consistently with static and dynamic data. This should result in better reservoir performance prediction,
so it is likely that we will see more models that continue to predict for years, giving business benefits from increased
reliability, and reduced time being spent on model updates that are required when the old model does not provide robust
predictions.
Conclusions
The Big Loop has already been a concept for several years, and several definitions can be found in the industry. Big Loop as
proposed in this paper refers to integrated reservoir modelling where static and dynamic model components are
simultaneously improved to generate better history matched models consistent with all available information (data) and to
quantify the potential impact of uncertainties on development decisions. This paper focuses specifically on the history
matching process and aims to enhance the awareness of the impact of both static and dynamic subsurface uncertainties on
field production.
Most existing reservoir modelling workflows do not look at the uncertain static and dynamic parameters simultaneously.
Instead, sensitivity studies for static parameters are done during volumetric calculations, after which the parameters are fixed in the dynamic model or alternatively represented by ‘low’, ‘mid’ and ‘high’ static realizations. The impact of dynamic parameters is typically
evaluated during the forecasting uncertainty studies. As a consequence, the static parameters that might be relevant in forecasting are not considered, often resulting in the underestimation of uncertainty ranges, while during the history matching process this can make it difficult to find an appropriate solution. By coupling the static and dynamic models we can analyse those parameters simultaneously. This helps to identify the relevant uncertainties faster. In the examples described in this
paper we have seen that static parameters can have a significant impact on the predicted production, and therefore they
should be explicitly modeled as uncertain parameters.
In the current modelling workflow it is not specified how the dynamic feedback should be included into a geological model,
i.e. how the changes made to the geological properties captured in the dynamic models should be included into a static
model. Therefore it is very difficult to keep those models consistent. A solution is to update the model in its static domain directly. This ensures consistency between the static and dynamic models. We have shown that history matching
studies can be done on both static and dynamic parameters. However, we have found that some static parameters can be
difficult to history match and further work on parameterization of the static model is required.
By integrating static and dynamic models the workflow is more reliable and more user friendly. As presented in the second
example, the coupling of static and dynamic models can be accomplished for a real field and after model integration, the
sensitivity studies can be performed. The possibility of choosing a preferred driving tool (Petrel driving Dynamo or Dynamo
driving Petrel) should make the deployment of Big Loop workflow easier amongst the subsurface disciplines in Shell.
The development of the Big Loop workflow described above has required adjustments of the tools used in this process. We
developed the prototype of this workflow using the software linkages between Petrel and Dynamo equipped with AHM and
Sensitivity Studies toolboxes. While this paper has described the first demonstration of the prototype of the Big Loop
workflow, further maturation and testing of it on more field cases is planned and the outcomes of those studies will be used to
determine future modifications to the workflow and software implementation requirements.
References
Alessio, L., Coca, S., and Bourdon, L. 2005. Experimental Design as a Framework for Multiple Realization History Matching: F6 Further
Development Studies. Paper SPE 93164-MS presented at Asia Pacific Oil and Gas Conference and Exhibition, Jakarta, Indonesia, 5-7
April.
Box, G.E.P., and Draper, N. R. 1987. Empirical Model Building and Response Surfaces. New York: John Wiley & Sons.
Caers, J. 2003. History Matching Under Training-Image-Based Geological Model Constraints. SPE J 8(3): 218-226. SPE 74716-PA.
Chen, W.H., Gavalas, G.R., Seinfeld, J.H., Wasserman, M.L.1974. A New Algorithm for Automatic History Matching. SPE J 14(4): 593–
608. SPE 4545-PA.
Eide, A.L., Holden, L., Reiso, E., Aanonsen, S.I. 1994. Automatic History Matching by Use of Response Surfaces and Experimental
Design. Presented at the 4th European Conference on the Mathematics of Oil Recovery, Roros, Norway, 7-10 June.
Elrafie, E., Agil, M., Abbas, T., Idroos, B., and Colomar, F-M. 2009. Innovated Simulation History Matching Approach Enabling
Better Historical Performance Match and Embracing Uncertainty in Predictive Forecasting. Paper SPE 120958-MS presented at
EUROPEC/EAGE Conference and Exhibition, Amsterdam, The Netherlands, 8-11 June.
Gross, H., Alexa, M.J., Caers, J., Kovscek, A.R. 2004. Streamline-Based History Matching Using Geostatistical Constraints: Application to
a Giant, Mature Carbonate Reservoir. Paper SPE 90069-MS presented at SPE Annual Technical Conference and Exhibition, Houston,
Texas, U.S.A., 26-29 September.
Gupta, R., Collinson, R., Smith G.C., Ryan S., and Louis J. 2008. History Matching Of Field Production Using Design Of Experiments.
Paper SPE 115685-MS presented at SPE Asia Pacific Oil and Gas Conference and Exhibition, Perth, Australia, 20-22 October.
Joosten, G., Altintas, A., De Sousa, P. 2011. Practical and Operational Use of Assisted History Matching and Model-Based Optimisation
in the Salym Field. Paper SPE 146697 presented at the SPE Annual Technical Conference and Exhibition, Denver, Colorado, U.S.A., 30
October–2 November.
Hamman, J.G. Buettner, R.E. Caldwell, D.H. 2003. A Case Study of a Fine Scale Integrated Geological, Geophysical, Petrophysical, and
Reservoir Simulation Reservoir Characterization With Uncertainty Estimation. Paper SPE 84274-MS presented at SPE Annual Technical
Conference and Exhibition, Denver, Colorado, U.S.A., 5-8 October.
Hoffman, B.T., Wen, X.-H., Strebelle, S., Caers, J. 2005. Geologically Consistent History Matching of a Deepwater Turbidite Reservoir.
Paper SPE presented at SPE Annual Technical Conference and Exhibition, Dallas, Texas, U.S.A., 9-12 October.
Kabir C.S., and Young, N.J. 2001. Handling Production-Data Uncertainty in History Matching: The Meren Reservoir Case Study. SPE
71621-MS. Paper SPE presented at SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, 30 September-3 October.
King, G.R., Lee, S., Alexandre, P. 2005. Probabilistic Forecasting for Mature Fields with Significant Production History: a Nemba Field
Case Study. SPE 95869-MS. Paper SPE presented at SPE Annual Technical Conference and Exhibition, Dallas, Texas, U.S.A., 9-12
October.
McKay, M.D., Beckman, R.J., and Conover, W.J. 1979. A Comparison of Three Methods for Selecting Values of Input Variables in the
Analysis of Output from a Computer Code. Technometrics 21(2) : 239-245. DOI: 10.2307/1268522.
Myers, R.H., and Montgomery, D.C. 1995. Response Surface Methodology. New York: Wiley & Sons.
Oliver, D.S., Reynolds, A.C., Liu, N. 2008. Inverse Theory for Petroleum Reservoir Characterization and History Matching. Cambridge
University Press.
Peake, W.T., Abadah, M., and Skander, L. 2005. Uncertainty Assessment Using Experimental Design: Minagish Oolite Reservoir. SPE
91820-MS. Paper SPE presented at SPE Reservoir Simulation Symposium, The Woodlands, Texas, U.S.A., 31 January-2 February.
Plackett, R. and Burman, J. 1946. The Design of Optimum Multifactorial Experiments. Biometrika 33 (4): 305-325.
Seiler, A., Evensen, G., Skjervheim, J.-A., Hove, J. and Vabo, J.G. 2009a. Advanced Reservoir Management Workflow Using an EnKF
Based Assisted History Matching. Paper SPE 118906-MS presented at SPE Reservoir Simulation Symposium, The Woodlands, Texas,
U.S.A., 2-4 February.
Seiler, A., Rivenæs, J.C., Aanonsen, S.I., and Evensen, G. 2009b. Structural Uncertainty Modeling and Updating by Production Data
Integration. Paper SPE 125352-MS presented at SPE/EAGE Reservoir Characterization and Simulation Conference, Abu Dhabi, UAE, 19-
21 October.
Suzuki, S., Caers, J. 2006. History Matching With an Uncertain Geological Scenario. Paper SPE presented at SPE Annual Technical Conference and Exhibition, San Antonio, Texas, U.S.A., 24-27 September.
White C.D., Willis, B.J., Narayanan, K., and Dutton, S.P. 2001. Identifying and Estimating Significant Geologic Parameters with
Experimental Design. SPE J 6 (3): 311-324. SPE 74140-PA.